I0424 12:55:43.828706 6 e2e.go:243] Starting e2e run "cf7469f2-0c4a-44d6-b0e9-909d5672bbe4" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1587732942 - Will randomize all specs
Will run 215 of 4412 specs

Apr 24 12:55:44.025: INFO: >>> kubeConfig: /root/.kube/config
Apr 24 12:55:44.030: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 24 12:55:44.057: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 24 12:55:44.090: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 24 12:55:44.090: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 24 12:55:44.090: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 24 12:55:44.101: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 24 12:55:44.101: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 24 12:55:44.101: INFO: e2e test version: v1.15.11
Apr 24 12:55:44.102: INFO: kube-apiserver version: v1.15.7
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 12:55:44.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
Apr 24 12:55:44.173: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-096eb410-df35-483a-be1f-21b55f0c7bf5
STEP: Creating a pod to test consume configMaps
Apr 24 12:55:44.193: INFO: Waiting up to 5m0s for pod "pod-configmaps-6bbee4a6-500d-4c5f-8336-77c8b878de72" in namespace "configmap-6597" to be "success or failure"
Apr 24 12:55:44.210: INFO: Pod "pod-configmaps-6bbee4a6-500d-4c5f-8336-77c8b878de72": Phase="Pending", Reason="", readiness=false. Elapsed: 17.565552ms
Apr 24 12:55:46.236: INFO: Pod "pod-configmaps-6bbee4a6-500d-4c5f-8336-77c8b878de72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042947833s
Apr 24 12:55:48.240: INFO: Pod "pod-configmaps-6bbee4a6-500d-4c5f-8336-77c8b878de72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047395914s
STEP: Saw pod success
Apr 24 12:55:48.240: INFO: Pod "pod-configmaps-6bbee4a6-500d-4c5f-8336-77c8b878de72" satisfied condition "success or failure"
Apr 24 12:55:48.243: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-6bbee4a6-500d-4c5f-8336-77c8b878de72 container configmap-volume-test:
STEP: delete the pod
Apr 24 12:55:48.279: INFO: Waiting for pod pod-configmaps-6bbee4a6-500d-4c5f-8336-77c8b878de72 to disappear
Apr 24 12:55:48.294: INFO: Pod pod-configmaps-6bbee4a6-500d-4c5f-8336-77c8b878de72 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 12:55:48.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6597" for this suite.
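For reference outside the suite, the ConfigMap-as-volume consumption this spec exercises can be reproduced with a manifest along these lines (all names, the image, and the key/value pair are illustrative, not taken from the test):

```yaml
# Illustrative manifest; names, image, and data are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-example
spec:
  securityContext:
    runAsUser: 1000            # run as non-root, as the [LinuxOnly] non-root variant does
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: example-config
```

As in the test, the pod is expected to reach Succeeded once `cat` has printed the mounted key's value.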
Apr 24 12:55:54.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 12:55:54.418: INFO: namespace configmap-6597 deletion completed in 6.103301503s

• [SLOW TEST:10.316 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job
  should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 12:55:54.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Apr 24 12:55:54.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7340 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Apr 24 12:55:59.953: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future
version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0424 12:55:59.882719 32 log.go:172] (0xc000aa00b0) (0xc0007ca3c0) Create stream\nI0424 12:55:59.882773 32 log.go:172] (0xc000aa00b0) (0xc0007ca3c0) Stream added, broadcasting: 1\nI0424 12:55:59.885780 32 log.go:172] (0xc000aa00b0) Reply frame received for 1\nI0424 12:55:59.885813 32 log.go:172] (0xc000aa00b0) (0xc000ae2be0) Create stream\nI0424 12:55:59.885822 32 log.go:172] (0xc000aa00b0) (0xc000ae2be0) Stream added, broadcasting: 3\nI0424 12:55:59.886902 32 log.go:172] (0xc000aa00b0) Reply frame received for 3\nI0424 12:55:59.886941 32 log.go:172] (0xc000aa00b0) (0xc000ae2c80) Create stream\nI0424 12:55:59.886957 32 log.go:172] (0xc000aa00b0) (0xc000ae2c80) Stream added, broadcasting: 5\nI0424 12:55:59.887860 32 log.go:172] (0xc000aa00b0) Reply frame received for 5\nI0424 12:55:59.887887 32 log.go:172] (0xc000aa00b0) (0xc000ae2d20) Create stream\nI0424 12:55:59.887895 32 log.go:172] (0xc000aa00b0) (0xc000ae2d20) Stream added, broadcasting: 7\nI0424 12:55:59.888892 32 log.go:172] (0xc000aa00b0) Reply frame received for 7\nI0424 12:55:59.889044 32 log.go:172] (0xc000ae2be0) (3) Writing data frame\nI0424 12:55:59.889327 32 log.go:172] (0xc000ae2be0) (3) Writing data frame\nI0424 12:55:59.890378 32 log.go:172] (0xc000aa00b0) Data frame received for 5\nI0424 12:55:59.890657 32 log.go:172] (0xc000ae2c80) (5) Data frame handling\nI0424 12:55:59.890688 32 log.go:172] (0xc000ae2c80) (5) Data frame sent\nI0424 12:55:59.891146 32 log.go:172] (0xc000aa00b0) Data frame received for 5\nI0424 12:55:59.891177 32 log.go:172] (0xc000ae2c80) (5) Data frame handling\nI0424 12:55:59.891218 32 log.go:172] (0xc000ae2c80) (5) Data frame sent\nI0424 12:55:59.930835 32 log.go:172] (0xc000aa00b0) Data frame received for 7\nI0424 12:55:59.930885 32 log.go:172] (0xc000aa00b0) Data frame received for 5\nI0424 12:55:59.930903 32 log.go:172] (0xc000ae2c80) (5) 
Data frame handling\nI0424 12:55:59.930958 32 log.go:172] (0xc000ae2d20) (7) Data frame handling\nI0424 12:55:59.931468 32 log.go:172] (0xc000aa00b0) Data frame received for 1\nI0424 12:55:59.931519 32 log.go:172] (0xc0007ca3c0) (1) Data frame handling\nI0424 12:55:59.931545 32 log.go:172] (0xc0007ca3c0) (1) Data frame sent\nI0424 12:55:59.931575 32 log.go:172] (0xc000aa00b0) (0xc000ae2be0) Stream removed, broadcasting: 3\nI0424 12:55:59.931641 32 log.go:172] (0xc000aa00b0) (0xc0007ca3c0) Stream removed, broadcasting: 1\nI0424 12:55:59.931731 32 log.go:172] (0xc000aa00b0) Go away received\nI0424 12:55:59.931769 32 log.go:172] (0xc000aa00b0) (0xc0007ca3c0) Stream removed, broadcasting: 1\nI0424 12:55:59.931785 32 log.go:172] (0xc000aa00b0) (0xc000ae2be0) Stream removed, broadcasting: 3\nI0424 12:55:59.931794 32 log.go:172] (0xc000aa00b0) (0xc000ae2c80) Stream removed, broadcasting: 5\nI0424 12:55:59.931804 32 log.go:172] (0xc000aa00b0) (0xc000ae2d20) Stream removed, broadcasting: 7\n"
Apr 24 12:55:59.953: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 12:56:01.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7340" for this suite.
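The deprecated `--generator=job/v1` invocation above creates a Job roughly equivalent to the following manifest (a sketch; the exact fields the generator sets are not shown in the log):

```yaml
# Approximate equivalent of the `kubectl run --rm --generator=job/v1` call above.
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        stdin: true              # --stdin; kubectl attaches and pipes input to `cat`
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
```

`--rm` then deletes the Job once the attached session ends, which is what the `job.batch "e2e-test-rm-busybox-job" deleted` line in stdout confirms.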
Apr 24 12:56:14.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 12:56:14.106: INFO: namespace kubectl-7340 deletion completed in 12.14328793s

• [SLOW TEST:19.688 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 12:56:14.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Apr 24 12:56:14.164: INFO: Got : ADDED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-281,SelfLink:/api/v1/namespaces/watch-281/configmaps/e2e-watch-test-configmap-a,UID:ee9a6b02-179f-4687-bfe2-dbc0ff2932b3,ResourceVersion:7173685,Generation:0,CreationTimestamp:2020-04-24 12:56:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 24 12:56:14.165: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-281,SelfLink:/api/v1/namespaces/watch-281/configmaps/e2e-watch-test-configmap-a,UID:ee9a6b02-179f-4687-bfe2-dbc0ff2932b3,ResourceVersion:7173685,Generation:0,CreationTimestamp:2020-04-24 12:56:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Apr 24 12:56:24.173: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-281,SelfLink:/api/v1/namespaces/watch-281/configmaps/e2e-watch-test-configmap-a,UID:ee9a6b02-179f-4687-bfe2-dbc0ff2932b3,ResourceVersion:7173707,Generation:0,CreationTimestamp:2020-04-24 12:56:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap:
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Apr 24 12:56:24.173: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-281,SelfLink:/api/v1/namespaces/watch-281/configmaps/e2e-watch-test-configmap-a,UID:ee9a6b02-179f-4687-bfe2-dbc0ff2932b3,ResourceVersion:7173707,Generation:0,CreationTimestamp:2020-04-24 12:56:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Apr 24 12:56:34.182: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-281,SelfLink:/api/v1/namespaces/watch-281/configmaps/e2e-watch-test-configmap-a,UID:ee9a6b02-179f-4687-bfe2-dbc0ff2932b3,ResourceVersion:7173727,Generation:0,CreationTimestamp:2020-04-24 12:56:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 24 12:56:34.182: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-281,SelfLink:/api/v1/namespaces/watch-281/configmaps/e2e-watch-test-configmap-a,UID:ee9a6b02-179f-4687-bfe2-dbc0ff2932b3,ResourceVersion:7173727,Generation:0,CreationTimestamp:2020-04-24 12:56:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Apr 24 12:56:44.190: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-281,SelfLink:/api/v1/namespaces/watch-281/configmaps/e2e-watch-test-configmap-a,UID:ee9a6b02-179f-4687-bfe2-dbc0ff2932b3,ResourceVersion:7173747,Generation:0,CreationTimestamp:2020-04-24 12:56:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 24 12:56:44.190: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-281,SelfLink:/api/v1/namespaces/watch-281/configmaps/e2e-watch-test-configmap-a,UID:ee9a6b02-179f-4687-bfe2-dbc0ff2932b3,ResourceVersion:7173747,Generation:0,CreationTimestamp:2020-04-24 12:56:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap:
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Apr 24 12:56:54.199: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-281,SelfLink:/api/v1/namespaces/watch-281/configmaps/e2e-watch-test-configmap-b,UID:a626e3a8-df30-42ec-92f7-1522aadb3829,ResourceVersion:7173767,Generation:0,CreationTimestamp:2020-04-24 12:56:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 24 12:56:54.199: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-281,SelfLink:/api/v1/namespaces/watch-281/configmaps/e2e-watch-test-configmap-b,UID:a626e3a8-df30-42ec-92f7-1522aadb3829,ResourceVersion:7173767,Generation:0,CreationTimestamp:2020-04-24 12:56:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Apr 24 12:57:04.206: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-281,SelfLink:/api/v1/namespaces/watch-281/configmaps/e2e-watch-test-configmap-b,UID:a626e3a8-df30-42ec-92f7-1522aadb3829,ResourceVersion:7173788,Generation:0,CreationTimestamp:2020-04-24 12:56:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 24 12:57:04.207: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-281,SelfLink:/api/v1/namespaces/watch-281/configmaps/e2e-watch-test-configmap-b,UID:a626e3a8-df30-42ec-92f7-1522aadb3829,ResourceVersion:7173788,Generation:0,CreationTimestamp:2020-04-24 12:56:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 12:57:14.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-281" for this suite.
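Outside the suite, the label-selected watches this spec sets up can be approximated with `kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --watch`, pointed at a ConfigMap such as the following (only the name, namespace, label, and data key are taken from the dumps above; the rest is a sketch):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  namespace: watch-281
  labels:
    watch-this-configmap: multiple-watchers-A
data:
  mutation: "1"    # the test bumps this value to trigger the MODIFIED events seen above
```

Each create, update, and delete of this object produces one ADDED, MODIFIED, or DELETED event on every watch whose label selector matches, which is exactly what the paired log entries show.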
Apr 24 12:57:20.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 12:57:20.321: INFO: namespace watch-281 deletion completed in 6.109103756s

• [SLOW TEST:66.215 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info
  should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 12:57:20.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Apr 24 12:57:20.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Apr 24 12:57:20.447: INFO: stderr: ""
Apr 24 12:57:20.447: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and
diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 12:57:20.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6058" for this suite.
Apr 24 12:57:26.465: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 12:57:26.548: INFO: namespace kubectl-6058 deletion completed in 6.094722365s

• [SLOW TEST:6.227 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 12:57:26.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 24 12:57:26.668: INFO: Waiting up to 5m0s for pod
"downward-api-7a491493-2d10-4feb-add7-2dc7e6907603" in namespace "downward-api-2267" to be "success or failure"
Apr 24 12:57:26.670: INFO: Pod "downward-api-7a491493-2d10-4feb-add7-2dc7e6907603": Phase="Pending", Reason="", readiness=false. Elapsed: 2.312849ms
Apr 24 12:57:28.675: INFO: Pod "downward-api-7a491493-2d10-4feb-add7-2dc7e6907603": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00658127s
Apr 24 12:57:30.679: INFO: Pod "downward-api-7a491493-2d10-4feb-add7-2dc7e6907603": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010948914s
STEP: Saw pod success
Apr 24 12:57:30.679: INFO: Pod "downward-api-7a491493-2d10-4feb-add7-2dc7e6907603" satisfied condition "success or failure"
Apr 24 12:57:30.683: INFO: Trying to get logs from node iruya-worker2 pod downward-api-7a491493-2d10-4feb-add7-2dc7e6907603 container dapi-container:
STEP: delete the pod
Apr 24 12:57:30.736: INFO: Waiting for pod downward-api-7a491493-2d10-4feb-add7-2dc7e6907603 to disappear
Apr 24 12:57:30.743: INFO: Pod downward-api-7a491493-2d10-4feb-add7-2dc7e6907603 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 12:57:30.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2267" for this suite.
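The downward-API projection under test maps pod fields into environment variables; a minimal pod doing the same (name and image are illustrative) looks like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dapi-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env"]     # print the injected variables and exit
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
```

The test asserts that the container's output contains the pod's actual name, namespace, and IP, which the kubelet resolves at container start.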
Apr 24 12:57:36.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 12:57:36.848: INFO: namespace downward-api-2267 deletion completed in 6.101147416s

• [SLOW TEST:10.299 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected configMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 12:57:36.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-cbea225a-fece-4d03-8e34-978ebb346b71
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-cbea225a-fece-4d03-8e34-978ebb346b71
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 12:57:44.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4672" for this suite.
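The projection being updated looks roughly like this (all names and the data key are placeholders, not from the test); because the kubelet periodically re-syncs projected volumes, an update to the backing ConfigMap eventually appears in the mounted file without restarting the pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-example
spec:
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    # Poll the mounted file so the update becomes observable in the logs.
    command: ["sh", "-c", "while true; do cat /etc/projected/data-1; sleep 5; done"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: example-config   # placeholder for the ConfigMap name the test generates
```

This refresh-on-sync behavior is what the "waiting to observe update in volume" step above is polling for.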
Apr 24 12:58:07.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 12:58:07.079: INFO: namespace projected-4672 deletion completed in 22.090177419s

• [SLOW TEST:30.231 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-node] Downward API
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 12:58:07.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 24 12:58:07.137: INFO: Waiting up to 5m0s for pod "downward-api-326c3cb9-916a-4b92-affc-8d97c5801263" in namespace "downward-api-5682" to be "success or failure"
Apr 24 12:58:07.178: INFO: Pod "downward-api-326c3cb9-916a-4b92-affc-8d97c5801263": Phase="Pending", Reason="", readiness=false. Elapsed: 40.731271ms
Apr 24 12:58:09.182: INFO: Pod "downward-api-326c3cb9-916a-4b92-affc-8d97c5801263": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044681718s
Apr 24 12:58:11.186: INFO: Pod "downward-api-326c3cb9-916a-4b92-affc-8d97c5801263": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.04912744s
STEP: Saw pod success
Apr 24 12:58:11.186: INFO: Pod "downward-api-326c3cb9-916a-4b92-affc-8d97c5801263" satisfied condition "success or failure"
Apr 24 12:58:11.190: INFO: Trying to get logs from node iruya-worker2 pod downward-api-326c3cb9-916a-4b92-affc-8d97c5801263 container dapi-container:
STEP: delete the pod
Apr 24 12:58:11.212: INFO: Waiting for pod downward-api-326c3cb9-916a-4b92-affc-8d97c5801263 to disappear
Apr 24 12:58:11.237: INFO: Pod downward-api-326c3cb9-916a-4b92-affc-8d97c5801263 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 12:58:11.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5682" for this suite.
Apr 24 12:58:17.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 12:58:17.336: INFO: namespace downward-api-5682 deletion completed in 6.095694295s

• [SLOW TEST:10.257 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 12:58:17.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned
in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0424 12:58:47.930258 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 24 12:58:47.930: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 12:58:47.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-386" for this suite.
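What this spec verifies: deleting the Deployment with `deleteOptions.propagationPolicy: Orphan` must leave the ReplicaSet it created in place, so the 30-second wait above confirms the garbage collector does not cascade the delete. The DELETE request body is roughly the following meta/v1 DeleteOptions (a sketch; with current kubectl the same is expressed as `kubectl delete deployment <name> --cascade=orphan`):

```yaml
# Sketch of the DeleteOptions sent with the DELETE request.
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan   # leave dependents (here, the ReplicaSet) behind
```

With `Orphan`, the garbage collector strips the owner references from the dependents instead of deleting them; `Background` or `Foreground` would cascade the deletion to the ReplicaSet and its Pods.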
Apr 24 12:58:53.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 12:58:54.021: INFO: namespace gc-386 deletion completed in 6.087855732s

• [SLOW TEST:36.686 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 12:58:54.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-8804352b-a620-44ac-b6b7-611b484347b1
STEP: Creating a pod to test consume configMaps
Apr 24 12:58:54.257: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0418aa54-b5cd-4b18-b9f8-fab9abc3729d" in namespace "projected-7047" to be "success or failure"
Apr 24 12:58:54.275: INFO: Pod "pod-projected-configmaps-0418aa54-b5cd-4b18-b9f8-fab9abc3729d": Phase="Pending", Reason="", readiness=false. Elapsed: 17.384676ms
Apr 24 12:58:56.310: INFO: Pod "pod-projected-configmaps-0418aa54-b5cd-4b18-b9f8-fab9abc3729d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052665273s
Apr 24 12:58:58.376: INFO: Pod "pod-projected-configmaps-0418aa54-b5cd-4b18-b9f8-fab9abc3729d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.118784078s
STEP: Saw pod success
Apr 24 12:58:58.376: INFO: Pod "pod-projected-configmaps-0418aa54-b5cd-4b18-b9f8-fab9abc3729d" satisfied condition "success or failure"
Apr 24 12:58:58.379: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-0418aa54-b5cd-4b18-b9f8-fab9abc3729d container projected-configmap-volume-test:
STEP: delete the pod
Apr 24 12:58:58.447: INFO: Waiting for pod pod-projected-configmaps-0418aa54-b5cd-4b18-b9f8-fab9abc3729d to disappear
Apr 24 12:58:58.543: INFO: Pod pod-projected-configmaps-0418aa54-b5cd-4b18-b9f8-fab9abc3729d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 12:58:58.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7047" for this suite.
Apr 24 12:59:04.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 12:59:04.645: INFO: namespace projected-7047 deletion completed in 6.097301383s

• [SLOW TEST:10.623 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 12:59:04.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-59a58434-9c6b-47b3-ac63-aff18396b476
STEP: Creating a pod to test consume secrets
Apr 24 12:59:04.762: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-94ad6e2a-e9b0-4482-86f2-292e6b3e83a6" in namespace "projected-1439" to be "success or failure"
Apr 24 12:59:04.769: INFO: Pod "pod-projected-secrets-94ad6e2a-e9b0-4482-86f2-292e6b3e83a6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.674905ms
Apr 24 12:59:06.773: INFO: Pod "pod-projected-secrets-94ad6e2a-e9b0-4482-86f2-292e6b3e83a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010988114s
Apr 24 12:59:08.778: INFO: Pod "pod-projected-secrets-94ad6e2a-e9b0-4482-86f2-292e6b3e83a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015651974s
STEP: Saw pod success
Apr 24 12:59:08.778: INFO: Pod "pod-projected-secrets-94ad6e2a-e9b0-4482-86f2-292e6b3e83a6" satisfied condition "success or failure"
Apr 24 12:59:08.781: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-94ad6e2a-e9b0-4482-86f2-292e6b3e83a6 container projected-secret-volume-test:
STEP: delete the pod
Apr 24 12:59:08.818: INFO: Waiting for pod pod-projected-secrets-94ad6e2a-e9b0-4482-86f2-292e6b3e83a6 to disappear
Apr 24 12:59:08.841: INFO: Pod pod-projected-secrets-94ad6e2a-e9b0-4482-86f2-292e6b3e83a6 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 12:59:08.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1439" for this suite.
Apr 24 12:59:14.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 12:59:14.989: INFO: namespace projected-1439 deletion completed in 6.133008303s

• [SLOW TEST:10.343 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs
  should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 12:59:14.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Apr 24 12:59:15.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5030'
Apr 24 12:59:15.349: INFO: stderr: ""
Apr 24 12:59:15.349: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Apr 24 12:59:16.370: INFO: Selector matched 1 pods for map[app:redis]
Apr 24 12:59:16.370: INFO: Found 0 / 1
Apr 24 12:59:17.354: INFO: Selector matched 1 pods for map[app:redis]
Apr 24 12:59:17.354: INFO: Found 0 / 1
Apr 24 12:59:18.353: INFO: Selector matched 1 pods for map[app:redis]
Apr 24 12:59:18.353: INFO: Found 1 / 1
Apr 24 12:59:18.353: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Apr 24 12:59:18.356: INFO: Selector matched 1 pods for map[app:redis]
Apr 24 12:59:18.356: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
STEP: checking for a matching strings
Apr 24 12:59:18.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-j6brg redis-master --namespace=kubectl-5030'
Apr 24 12:59:18.464: INFO: stderr: ""
Apr 24 12:59:18.464: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 24 Apr 12:59:17.928 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 24 Apr 12:59:17.928 # Server started, Redis version 3.2.12\n1:M 24 Apr 12:59:17.928 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 24 Apr 12:59:17.928 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Apr 24 12:59:18.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-j6brg redis-master --namespace=kubectl-5030 --tail=1'
Apr 24 12:59:18.583: INFO: stderr: ""
Apr 24 12:59:18.583: INFO: stdout: "1:M 24 Apr 12:59:17.928 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Apr 24 12:59:18.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-j6brg redis-master --namespace=kubectl-5030 --limit-bytes=1'
Apr 24 12:59:18.683: INFO: stderr: ""
Apr 24 12:59:18.683: INFO: stdout: " "
STEP: exposing timestamps
Apr 24 12:59:18.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-j6brg redis-master --namespace=kubectl-5030 --tail=1 --timestamps'
Apr 24 12:59:18.777: INFO: stderr: ""
Apr 24 12:59:18.777: INFO: stdout: "2020-04-24T12:59:17.92870079Z 1:M 24 Apr 12:59:17.928 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Apr 24 12:59:21.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-j6brg redis-master --namespace=kubectl-5030 --since=1s'
Apr 24 12:59:21.379: INFO: stderr: ""
Apr 24 12:59:21.379: INFO: stdout: ""
Apr 24 12:59:21.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-j6brg redis-master --namespace=kubectl-5030 --since=24h'
Apr 24 12:59:21.490: INFO: stderr: ""
Apr 24 12:59:21.490: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 24 Apr 12:59:17.928 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 24 Apr 12:59:17.928 # Server started, Redis version 3.2.12\n1:M 24 Apr 12:59:17.928 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 24 Apr 12:59:17.928 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Apr 24 12:59:21.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5030'
Apr 24 12:59:21.595: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 24 12:59:21.595: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Apr 24 12:59:21.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-5030'
Apr 24 12:59:21.795: INFO: stderr: "No resources found.\n"
Apr 24 12:59:21.795: INFO: stdout: ""
Apr 24 12:59:21.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-5030 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 24 12:59:21.914: INFO: stderr: ""
Apr 24 12:59:21.914: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 12:59:21.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5030" for this suite.
Apr 24 12:59:27.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 12:59:28.007: INFO: namespace kubectl-5030 deletion completed in 6.089626627s

• [SLOW TEST:13.018 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-node] Downward API
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 12:59:28.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 24 12:59:28.061: INFO: Waiting up to 5m0s for pod "downward-api-d6b678e3-7d5a-4395-ba5f-67f470bca93d" in namespace "downward-api-1986" to be "success or failure"
Apr 24 12:59:28.076: INFO: Pod "downward-api-d6b678e3-7d5a-4395-ba5f-67f470bca93d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.699188ms
Apr 24 12:59:30.080: INFO: Pod "downward-api-d6b678e3-7d5a-4395-ba5f-67f470bca93d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019500444s
Apr 24 12:59:32.085: INFO: Pod "downward-api-d6b678e3-7d5a-4395-ba5f-67f470bca93d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02362717s
STEP: Saw pod success
Apr 24 12:59:32.085: INFO: Pod "downward-api-d6b678e3-7d5a-4395-ba5f-67f470bca93d" satisfied condition "success or failure"
Apr 24 12:59:32.088: INFO: Trying to get logs from node iruya-worker pod downward-api-d6b678e3-7d5a-4395-ba5f-67f470bca93d container dapi-container:
STEP: delete the pod
Apr 24 12:59:32.105: INFO: Waiting for pod downward-api-d6b678e3-7d5a-4395-ba5f-67f470bca93d to disappear
Apr 24 12:59:32.110: INFO: Pod downward-api-d6b678e3-7d5a-4395-ba5f-67f470bca93d no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 12:59:32.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1986" for this suite.
Apr 24 12:59:38.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 12:59:38.207: INFO: namespace downward-api-1986 deletion completed in 6.094103772s

• [SLOW TEST:10.200 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 12:59:38.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-ba6e5f62-3b74-4490-9370-2b8ab6dd97f1
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-ba6e5f62-3b74-4490-9370-2b8ab6dd97f1
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 12:59:44.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6441" for this suite.
Apr 24 13:00:06.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:00:06.434: INFO: namespace configmap-6441 deletion completed in 22.086914023s

• [SLOW TEST:28.227 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:00:06.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-1e497208-cfbf-41c7-a4b5-b013ca5129e9
STEP: Creating a pod to test consume secrets
Apr 24 13:00:06.565: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-259abfda-9a0b-4056-b69b-4211bbbd22c1" in namespace "projected-4377" to be "success or failure"
Apr 24 13:00:06.575: INFO: Pod "pod-projected-secrets-259abfda-9a0b-4056-b69b-4211bbbd22c1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.113302ms
Apr 24 13:00:08.579: INFO: Pod "pod-projected-secrets-259abfda-9a0b-4056-b69b-4211bbbd22c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014155016s
Apr 24 13:00:10.585: INFO: Pod "pod-projected-secrets-259abfda-9a0b-4056-b69b-4211bbbd22c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020356073s
STEP: Saw pod success
Apr 24 13:00:10.585: INFO: Pod "pod-projected-secrets-259abfda-9a0b-4056-b69b-4211bbbd22c1" satisfied condition "success or failure"
Apr 24 13:00:10.588: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-259abfda-9a0b-4056-b69b-4211bbbd22c1 container secret-volume-test:
STEP: delete the pod
Apr 24 13:00:10.708: INFO: Waiting for pod pod-projected-secrets-259abfda-9a0b-4056-b69b-4211bbbd22c1 to disappear
Apr 24 13:00:10.712: INFO: Pod pod-projected-secrets-259abfda-9a0b-4056-b69b-4211bbbd22c1 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:00:10.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4377" for this suite.
Apr 24 13:00:16.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:00:16.803: INFO: namespace projected-4377 deletion completed in 6.087857901s

• [SLOW TEST:10.368 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:00:16.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 24 13:00:16.908: INFO: Create a RollingUpdate DaemonSet
Apr 24 13:00:16.912: INFO: Check that daemon pods launch on every node of the cluster
Apr 24 13:00:16.916: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 24 13:00:16.922: INFO: Number of nodes with available pods: 0
Apr 24 13:00:16.922: INFO: Node iruya-worker is running more than one daemon pod
Apr 24 13:00:17.927: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 24 13:00:17.931: INFO: Number of nodes with available pods: 0
Apr 24 13:00:17.931: INFO: Node iruya-worker is running more than one daemon pod
Apr 24 13:00:18.926: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 24 13:00:18.929: INFO: Number of nodes with available pods: 0
Apr 24 13:00:18.929: INFO: Node iruya-worker is running more than one daemon pod
Apr 24 13:00:19.927: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 24 13:00:19.930: INFO: Number of nodes with available pods: 0
Apr 24 13:00:19.930: INFO: Node iruya-worker is running more than one daemon pod
Apr 24 13:00:20.927: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 24 13:00:20.930: INFO: Number of nodes with available pods: 2
Apr 24 13:00:20.930: INFO: Number of running nodes: 2, number of available pods: 2
Apr 24 13:00:20.930: INFO: Update the DaemonSet to trigger a rollout
Apr 24 13:00:20.937: INFO: Updating DaemonSet daemon-set
Apr 24 13:00:31.954: INFO: Roll back the DaemonSet before rollout is complete
Apr 24 13:00:31.961: INFO: Updating DaemonSet daemon-set
Apr 24 13:00:31.961: INFO: Make sure DaemonSet rollback is complete
Apr 24 13:00:31.968: INFO: Wrong image for pod: daemon-set-4hz7r. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Apr 24 13:00:31.968: INFO: Pod daemon-set-4hz7r is not available
Apr 24 13:00:31.974: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 24 13:00:32.978: INFO: Wrong image for pod: daemon-set-4hz7r. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Apr 24 13:00:32.978: INFO: Pod daemon-set-4hz7r is not available
Apr 24 13:00:32.981: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 24 13:00:33.979: INFO: Wrong image for pod: daemon-set-4hz7r. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Apr 24 13:00:33.979: INFO: Pod daemon-set-4hz7r is not available
Apr 24 13:00:33.984: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 24 13:00:34.978: INFO: Wrong image for pod: daemon-set-4hz7r. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Apr 24 13:00:34.978: INFO: Pod daemon-set-4hz7r is not available
Apr 24 13:00:34.982: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 24 13:00:35.978: INFO: Wrong image for pod: daemon-set-4hz7r. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Apr 24 13:00:35.978: INFO: Pod daemon-set-4hz7r is not available
Apr 24 13:00:35.982: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 24 13:00:36.978: INFO: Pod daemon-set-phh4f is not available
Apr 24 13:00:36.981: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5032, will wait for the garbage collector to delete the pods
Apr 24 13:00:37.044: INFO: Deleting DaemonSet.extensions daemon-set took: 6.333909ms
Apr 24 13:00:37.345: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.472656ms
Apr 24 13:00:40.549: INFO: Number of nodes with available pods: 0
Apr 24 13:00:40.549: INFO: Number of running nodes: 0, number of available pods: 0
Apr 24 13:00:40.555: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5032/daemonsets","resourceVersion":"7174589"},"items":null}
Apr 24 13:00:40.557: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5032/pods","resourceVersion":"7174589"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:00:40.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5032" for this suite.
Apr 24 13:00:46.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:00:46.659: INFO: namespace daemonsets-5032 deletion completed in 6.089228578s

• [SLOW TEST:29.855 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-network] Services
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:00:46.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-2049
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2049 to expose endpoints map[]
Apr 24 13:00:46.818: INFO: successfully validated that service multi-endpoint-test in namespace services-2049 exposes endpoints map[] (30.945475ms elapsed)
STEP: Creating pod pod1 in namespace services-2049
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2049 to expose endpoints map[pod1:[100]]
Apr 24 13:00:49.921: INFO: successfully validated that service multi-endpoint-test in namespace services-2049 exposes endpoints map[pod1:[100]] (3.097145653s elapsed)
STEP: Creating pod pod2 in namespace services-2049
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2049 to expose endpoints map[pod1:[100] pod2:[101]]
Apr 24 13:00:53.013: INFO: successfully validated that service multi-endpoint-test in namespace services-2049 exposes endpoints map[pod1:[100] pod2:[101]] (3.087208088s elapsed)
STEP: Deleting pod pod1 in namespace services-2049
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2049 to expose endpoints map[pod2:[101]]
Apr 24 13:00:54.054: INFO: successfully validated that service multi-endpoint-test in namespace services-2049 exposes endpoints map[pod2:[101]] (1.036302828s elapsed)
STEP: Deleting pod pod2 in namespace services-2049
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2049 to expose endpoints map[]
Apr 24 13:00:55.112: INFO: successfully validated that service multi-endpoint-test in namespace services-2049 exposes endpoints map[] (1.053742716s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:00:55.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2049" for this suite.
Apr 24 13:01:17.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:01:17.346: INFO: namespace services-2049 deletion completed in 22.089879169s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:30.687 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:01:17.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-121.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-121.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-121.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-121.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-121.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.dns-test-service.dns-121.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-121.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-121.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-121.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-121.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-121.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-121.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-121.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 88.27.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.27.88_udp@PTR;check="$$(dig +tcp +noall +answer +search 88.27.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.27.88_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-121.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-121.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-121.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-121.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-121.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-121.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-121.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-121.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-121.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-121.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-121.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-121.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-121.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 88.27.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.27.88_udp@PTR;check="$$(dig +tcp +noall +answer +search 88.27.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.27.88_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 24 13:01:23.527: INFO: Unable to read wheezy_udp@dns-test-service.dns-121.svc.cluster.local from pod dns-121/dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b: the server could not find the requested resource (get pods dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b) Apr 24 13:01:23.530: INFO: Unable to read wheezy_tcp@dns-test-service.dns-121.svc.cluster.local from pod dns-121/dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b: the server could not find the requested resource (get pods dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b) Apr 24 13:01:23.534: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-121.svc.cluster.local from pod dns-121/dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b: the server could not find the requested resource (get pods dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b) Apr 24 13:01:23.537: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-121.svc.cluster.local from pod dns-121/dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b: the server could not find the requested resource (get pods dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b) Apr 24 13:01:23.567: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-121.svc.cluster.local from pod dns-121/dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b: the server could not find the requested resource (get pods dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b) Apr 24 13:01:23.570: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-121.svc.cluster.local from pod dns-121/dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b: the server could not find the requested resource (get pods dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b) Apr 24 13:01:23.589: INFO: Lookups using dns-121/dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b failed for: 
[wheezy_udp@dns-test-service.dns-121.svc.cluster.local wheezy_tcp@dns-test-service.dns-121.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-121.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-121.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-121.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-121.svc.cluster.local] Apr 24 13:01:28.601: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-121.svc.cluster.local from pod dns-121/dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b: the server could not find the requested resource (get pods dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b) Apr 24 13:01:28.605: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-121.svc.cluster.local from pod dns-121/dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b: the server could not find the requested resource (get pods dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b) Apr 24 13:01:28.631: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-121.svc.cluster.local from pod dns-121/dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b: the server could not find the requested resource (get pods dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b) Apr 24 13:01:28.634: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-121.svc.cluster.local from pod dns-121/dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b: the server could not find the requested resource (get pods dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b) Apr 24 13:01:28.653: INFO: Lookups using dns-121/dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-121.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-121.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-121.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-121.svc.cluster.local] Apr 24 13:01:33.603: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-121.svc.cluster.local from pod 
dns-121/dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b: the server could not find the requested resource (get pods dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b) Apr 24 13:01:33.606: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-121.svc.cluster.local from pod dns-121/dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b: the server could not find the requested resource (get pods dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b) Apr 24 13:01:33.633: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-121.svc.cluster.local from pod dns-121/dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b: the server could not find the requested resource (get pods dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b) Apr 24 13:01:33.636: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-121.svc.cluster.local from pod dns-121/dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b: the server could not find the requested resource (get pods dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b) Apr 24 13:01:33.655: INFO: Lookups using dns-121/dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-121.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-121.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-121.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-121.svc.cluster.local] Apr 24 13:01:38.602: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-121.svc.cluster.local from pod dns-121/dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b: the server could not find the requested resource (get pods dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b) Apr 24 13:01:38.605: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-121.svc.cluster.local from pod dns-121/dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b: the server could not find the requested resource (get pods dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b) Apr 24 13:01:38.635: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-121.svc.cluster.local from pod dns-121/dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b: the server could not find the requested resource (get pods dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b) Apr 24 13:01:38.638: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-121.svc.cluster.local from pod dns-121/dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b: the server could not find the requested resource (get pods dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b) Apr 24 13:01:38.657: INFO: Lookups using dns-121/dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-121.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-121.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-121.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-121.svc.cluster.local] Apr 24 13:01:43.618: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-121.svc.cluster.local from pod dns-121/dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b: the server could not find the requested resource (get pods dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b) Apr 24 13:01:43.621: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-121.svc.cluster.local from pod dns-121/dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b: the server could not find the requested resource (get pods dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b) Apr 24 13:01:43.649: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-121.svc.cluster.local from pod dns-121/dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b: the server could not find the requested resource (get pods dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b) Apr 24 13:01:43.652: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-121.svc.cluster.local from pod dns-121/dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b: the server could not find the requested resource (get pods dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b) Apr 24 
13:01:43.668: INFO: Lookups using dns-121/dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-121.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-121.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-121.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-121.svc.cluster.local] Apr 24 13:01:48.600: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-121.svc.cluster.local from pod dns-121/dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b: the server could not find the requested resource (get pods dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b) Apr 24 13:01:48.603: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-121.svc.cluster.local from pod dns-121/dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b: the server could not find the requested resource (get pods dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b) Apr 24 13:01:48.630: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-121.svc.cluster.local from pod dns-121/dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b: the server could not find the requested resource (get pods dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b) Apr 24 13:01:48.633: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-121.svc.cluster.local from pod dns-121/dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b: the server could not find the requested resource (get pods dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b) Apr 24 13:01:48.651: INFO: Lookups using dns-121/dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-121.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-121.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-121.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-121.svc.cluster.local] Apr 24 13:01:53.666: INFO: DNS probes using dns-121/dns-test-959c8a4c-3d6a-49b3-adbf-4fb471b35e1b succeeded STEP: deleting the pod STEP: deleting the test 
service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:01:53.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-121" for this suite. Apr 24 13:02:00.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:02:00.325: INFO: namespace dns-121 deletion completed in 6.209536754s • [SLOW TEST:42.979 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:02:00.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 24 13:02:00.426: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cb4de59b-4342-4124-984d-399164f25e44" in namespace "downward-api-7632" to be "success or failure" 
Apr 24 13:02:00.429: INFO: Pod "downwardapi-volume-cb4de59b-4342-4124-984d-399164f25e44": Phase="Pending", Reason="", readiness=false. Elapsed: 3.022943ms Apr 24 13:02:02.434: INFO: Pod "downwardapi-volume-cb4de59b-4342-4124-984d-399164f25e44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00728214s Apr 24 13:02:04.437: INFO: Pod "downwardapi-volume-cb4de59b-4342-4124-984d-399164f25e44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010948138s STEP: Saw pod success Apr 24 13:02:04.437: INFO: Pod "downwardapi-volume-cb4de59b-4342-4124-984d-399164f25e44" satisfied condition "success or failure" Apr 24 13:02:04.440: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-cb4de59b-4342-4124-984d-399164f25e44 container client-container: STEP: delete the pod Apr 24 13:02:04.512: INFO: Waiting for pod downwardapi-volume-cb4de59b-4342-4124-984d-399164f25e44 to disappear Apr 24 13:02:04.520: INFO: Pod downwardapi-volume-cb4de59b-4342-4124-984d-399164f25e44 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:02:04.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7632" for this suite. 
Apr 24 13:02:10.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:02:10.620: INFO: namespace downward-api-7632 deletion completed in 6.09592106s • [SLOW TEST:10.294 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:02:10.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 24 13:02:10.735: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 24 13:02:15.739: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 24 13:02:15.739: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 24 13:02:15.780: INFO: Deployment "test-cleanup-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-812,SelfLink:/apis/apps/v1/namespaces/deployment-812/deployments/test-cleanup-deployment,UID:ed860484-f00e-4182-9bbd-3cd1520766f6,ResourceVersion:7174953,Generation:1,CreationTimestamp:2020-04-24 13:02:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Apr 24 13:02:15.783: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. 
Apr 24 13:02:15.783: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Apr 24 13:02:15.783: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-812,SelfLink:/apis/apps/v1/namespaces/deployment-812/replicasets/test-cleanup-controller,UID:f980e7ca-e326-4285-b96d-1e0a3ba3c60e,ResourceVersion:7174954,Generation:1,CreationTimestamp:2020-04-24 13:02:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment ed860484-f00e-4182-9bbd-3cd1520766f6 0xc001240c3f 0xc001240c50}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 24 13:02:15.842: INFO: Pod "test-cleanup-controller-9nd2f" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-9nd2f,GenerateName:test-cleanup-controller-,Namespace:deployment-812,SelfLink:/api/v1/namespaces/deployment-812/pods/test-cleanup-controller-9nd2f,UID:48865972-d053-444b-b875-111c5cc1df04,ResourceVersion:7174947,Generation:0,CreationTimestamp:2020-04-24 13:02:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller f980e7ca-e326-4285-b96d-1e0a3ba3c60e 0xc0013ab9c7 0xc0013ab9c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wcqm9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wcqm9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} 
[{default-token-wcqm9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0013aba40} {node.kubernetes.io/unreachable Exists NoExecute 0xc0013aba60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:02:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:02:13 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:02:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:02:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.15,StartTime:2020-04-24 13:02:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-24 13:02:13 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://6913976bbaed10a19c73d2fb15c5e46b24196ea099579e68fa46908c4404df52}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:02:15.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-812" for this suite. Apr 24 13:02:21.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:02:22.029: INFO: namespace deployment-812 deletion completed in 6.156574995s • [SLOW TEST:11.409 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:02:22.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:02:27.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"replication-controller-6962" for this suite. Apr 24 13:02:49.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:02:49.345: INFO: namespace replication-controller-6962 deletion completed in 22.097301734s • [SLOW TEST:27.316 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:02:49.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Apr 24 13:02:49.412: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix990970254/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:02:49.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2960" for this suite. 
Apr 24 13:02:55.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:02:55.590: INFO: namespace kubectl-2960 deletion completed in 6.090971061s • [SLOW TEST:6.245 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:02:55.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-e970d954-1453-4fd2-a39d-330e1083ad94 STEP: Creating a pod to test consume configMaps Apr 24 13:02:55.678: INFO: Waiting up to 5m0s for pod "pod-configmaps-bd72c791-91ee-4246-936e-4670efa1fe18" in namespace "configmap-3577" to be "success or failure" Apr 24 13:02:55.682: INFO: Pod "pod-configmaps-bd72c791-91ee-4246-936e-4670efa1fe18": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.545058ms
Apr 24 13:02:57.687: INFO: Pod "pod-configmaps-bd72c791-91ee-4246-936e-4670efa1fe18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00859471s
Apr 24 13:02:59.691: INFO: Pod "pod-configmaps-bd72c791-91ee-4246-936e-4670efa1fe18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01312491s
STEP: Saw pod success
Apr 24 13:02:59.691: INFO: Pod "pod-configmaps-bd72c791-91ee-4246-936e-4670efa1fe18" satisfied condition "success or failure"
Apr 24 13:02:59.694: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-bd72c791-91ee-4246-936e-4670efa1fe18 container configmap-volume-test:
STEP: delete the pod
Apr 24 13:02:59.730: INFO: Waiting for pod pod-configmaps-bd72c791-91ee-4246-936e-4670efa1fe18 to disappear
Apr 24 13:02:59.762: INFO: Pod pod-configmaps-bd72c791-91ee-4246-936e-4670efa1fe18 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:02:59.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3577" for this suite.
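[Editor's note] The "mappings and Item mode" wording above refers to a ConfigMap volume whose `items` list remaps a key to a new path and sets an explicit file mode. A sketch of such a pod manifest as a plain dict; the key, path, mode, and names are illustrative assumptions, not the generated values from this run:

```python
def configmap_volume_pod(cm_name, namespace):
    """Build a pod manifest mounting one ConfigMap key at a mapped path
    with an explicit item-level mode, in the spirit of the test above."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-configmaps-example", "namespace": namespace},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "configmap-volume-test",
                "image": "busybox",
                # Print the mapped file, then exit 0 so the pod reaches
                # phase Succeeded (the "success or failure" condition).
                "command": ["sh", "-c",
                            "cat /etc/configmap-volume/path/to/data"],
                "volumeMounts": [{"name": "configmap-volume",
                                  "mountPath": "/etc/configmap-volume"}],
            }],
            "volumes": [{
                "name": "configmap-volume",
                "configMap": {
                    "name": cm_name,
                    # mapping + item mode: remap key "data-1" and set 0400
                    "items": [{"key": "data-1",
                               "path": "path/to/data",
                               "mode": 0o400}],
                },
            }],
        },
    }
```

Without an `items` entry, every key is projected under its own name with the volume's default mode; the item-level `mode` overrides that for one file.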
Apr 24 13:03:05.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:03:05.854: INFO: namespace configmap-3577 deletion completed in 6.088432397s • [SLOW TEST:10.263 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:03:05.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 
'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:03:36.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9602" for this suite. Apr 24 13:03:42.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:03:42.497: INFO: namespace container-runtime-9602 deletion completed in 6.103108782s • [SLOW TEST:36.643 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:03:42.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Apr 24 13:03:42.605: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:03:42.621: INFO: Number of nodes with available pods: 0 Apr 24 13:03:42.621: INFO: Node iruya-worker is running more than one daemon pod Apr 24 13:03:43.626: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:03:43.630: INFO: Number of nodes with available pods: 0 Apr 24 13:03:43.630: INFO: Node iruya-worker is running more than one daemon pod Apr 24 13:03:44.627: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:03:44.630: INFO: Number of nodes with available pods: 0 Apr 24 13:03:44.630: INFO: Node iruya-worker is running more than one daemon pod Apr 24 13:03:45.638: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:03:45.645: INFO: Number of nodes 
with available pods: 0 Apr 24 13:03:45.646: INFO: Node iruya-worker is running more than one daemon pod Apr 24 13:03:46.626: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:03:46.630: INFO: Number of nodes with available pods: 1 Apr 24 13:03:46.630: INFO: Node iruya-worker is running more than one daemon pod Apr 24 13:03:47.626: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:03:47.630: INFO: Number of nodes with available pods: 2 Apr 24 13:03:47.630: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Apr 24 13:03:47.648: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:03:47.653: INFO: Number of nodes with available pods: 2 Apr 24 13:03:47.653: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8241, will wait for the garbage collector to delete the pods
Apr 24 13:03:48.741: INFO: Deleting DaemonSet.extensions daemon-set took: 6.153767ms
Apr 24 13:03:49.041: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.45157ms
Apr 24 13:04:01.945: INFO: Number of nodes with available pods: 0
Apr 24 13:04:01.945: INFO: Number of running nodes: 0, number of available pods: 0
Apr 24 13:04:01.947: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8241/daemonsets","resourceVersion":"7175391"},"items":null}
Apr 24 13:04:01.951: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8241/pods","resourceVersion":"7175391"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:04:01.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8241" for this suite.
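[Editor's note] The repeated "DaemonSet pods can't tolerate node iruya-control-plane ... skip checking this node" lines come from the test filtering out nodes whose taints the DaemonSet pods do not tolerate. A heavily reduced sketch of that filtering rule (real scheduling also handles `Exists`/`Equal` operators, values, and per-effect semantics; this only matches key and effect):

```python
def tolerates(taint, tolerations):
    """Reduced toleration match: a toleration applies if its key and
    effect are unset (wildcard) or equal to the taint's."""
    for t in tolerations:
        if t.get("key") in (None, taint["key"]) and \
           t.get("effect") in (None, "", taint["effect"]):
            return True
    return False


def nodes_to_check(nodes, tolerations):
    """Keep only nodes all of whose taints are tolerated, mirroring the
    'skip checking this node' lines in the log above."""
    runnable = []
    for node in nodes:
        if all(tolerates(t, tolerations) for t in node.get("taints", [])):
            runnable.append(node["name"])
    return runnable
```

With no tolerations on the DaemonSet pods, the `node-role.kubernetes.io/master:NoSchedule` taint excludes the control-plane node, so only the two workers are counted toward "Number of running nodes: 2".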
Apr 24 13:04:07.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:04:08.072: INFO: namespace daemonsets-8241 deletion completed in 6.106121001s • [SLOW TEST:25.575 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:04:08.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-737 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 24 13:04:08.133: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 24 13:04:30.283: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.219:8080/dial?request=hostName&protocol=http&host=10.244.2.218&port=8080&tries=1'] Namespace:pod-network-test-737 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false} Apr 24 13:04:30.283: INFO: >>> kubeConfig: /root/.kube/config I0424 13:04:30.320428 6 log.go:172] (0xc0013ea580) (0xc002f33d60) Create stream I0424 13:04:30.320479 6 log.go:172] (0xc0013ea580) (0xc002f33d60) Stream added, broadcasting: 1 I0424 13:04:30.322955 6 log.go:172] (0xc0013ea580) Reply frame received for 1 I0424 13:04:30.322997 6 log.go:172] (0xc0013ea580) (0xc00039f180) Create stream I0424 13:04:30.323011 6 log.go:172] (0xc0013ea580) (0xc00039f180) Stream added, broadcasting: 3 I0424 13:04:30.323817 6 log.go:172] (0xc0013ea580) Reply frame received for 3 I0424 13:04:30.323854 6 log.go:172] (0xc0013ea580) (0xc0016ba000) Create stream I0424 13:04:30.323865 6 log.go:172] (0xc0013ea580) (0xc0016ba000) Stream added, broadcasting: 5 I0424 13:04:30.324810 6 log.go:172] (0xc0013ea580) Reply frame received for 5 I0424 13:04:30.395418 6 log.go:172] (0xc0013ea580) Data frame received for 3 I0424 13:04:30.395459 6 log.go:172] (0xc00039f180) (3) Data frame handling I0424 13:04:30.395476 6 log.go:172] (0xc00039f180) (3) Data frame sent I0424 13:04:30.395570 6 log.go:172] (0xc0013ea580) Data frame received for 3 I0424 13:04:30.395588 6 log.go:172] (0xc00039f180) (3) Data frame handling I0424 13:04:30.395906 6 log.go:172] (0xc0013ea580) Data frame received for 5 I0424 13:04:30.395947 6 log.go:172] (0xc0016ba000) (5) Data frame handling I0424 13:04:30.397754 6 log.go:172] (0xc0013ea580) Data frame received for 1 I0424 13:04:30.397770 6 log.go:172] (0xc002f33d60) (1) Data frame handling I0424 13:04:30.397793 6 log.go:172] (0xc002f33d60) (1) Data frame sent I0424 13:04:30.397822 6 log.go:172] (0xc0013ea580) (0xc002f33d60) Stream removed, broadcasting: 1 I0424 13:04:30.397866 6 log.go:172] (0xc0013ea580) Go away received I0424 13:04:30.397905 6 log.go:172] (0xc0013ea580) (0xc002f33d60) Stream removed, broadcasting: 1 I0424 13:04:30.397925 6 log.go:172] (0xc0013ea580) (0xc00039f180) Stream removed, broadcasting: 3 I0424 
13:04:30.397938 6 log.go:172] (0xc0013ea580) (0xc0016ba000) Stream removed, broadcasting: 5 Apr 24 13:04:30.397: INFO: Waiting for endpoints: map[] Apr 24 13:04:30.401: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.219:8080/dial?request=hostName&protocol=http&host=10.244.1.21&port=8080&tries=1'] Namespace:pod-network-test-737 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 24 13:04:30.401: INFO: >>> kubeConfig: /root/.kube/config I0424 13:04:30.427319 6 log.go:172] (0xc0019a6420) (0xc001c66460) Create stream I0424 13:04:30.427347 6 log.go:172] (0xc0019a6420) (0xc001c66460) Stream added, broadcasting: 1 I0424 13:04:30.430167 6 log.go:172] (0xc0019a6420) Reply frame received for 1 I0424 13:04:30.430218 6 log.go:172] (0xc0019a6420) (0xc002f33e00) Create stream I0424 13:04:30.430234 6 log.go:172] (0xc0019a6420) (0xc002f33e00) Stream added, broadcasting: 3 I0424 13:04:30.431179 6 log.go:172] (0xc0019a6420) Reply frame received for 3 I0424 13:04:30.431219 6 log.go:172] (0xc0019a6420) (0xc002f33ea0) Create stream I0424 13:04:30.431232 6 log.go:172] (0xc0019a6420) (0xc002f33ea0) Stream added, broadcasting: 5 I0424 13:04:30.432166 6 log.go:172] (0xc0019a6420) Reply frame received for 5 I0424 13:04:30.504793 6 log.go:172] (0xc0019a6420) Data frame received for 3 I0424 13:04:30.504842 6 log.go:172] (0xc002f33e00) (3) Data frame handling I0424 13:04:30.504877 6 log.go:172] (0xc002f33e00) (3) Data frame sent I0424 13:04:30.505308 6 log.go:172] (0xc0019a6420) Data frame received for 3 I0424 13:04:30.505353 6 log.go:172] (0xc002f33e00) (3) Data frame handling I0424 13:04:30.505493 6 log.go:172] (0xc0019a6420) Data frame received for 5 I0424 13:04:30.505514 6 log.go:172] (0xc002f33ea0) (5) Data frame handling I0424 13:04:30.507354 6 log.go:172] (0xc0019a6420) Data frame received for 1 I0424 13:04:30.507373 6 log.go:172] (0xc001c66460) (1) Data frame handling I0424 
13:04:30.507396 6 log.go:172] (0xc001c66460) (1) Data frame sent I0424 13:04:30.507411 6 log.go:172] (0xc0019a6420) (0xc001c66460) Stream removed, broadcasting: 1 I0424 13:04:30.507522 6 log.go:172] (0xc0019a6420) (0xc001c66460) Stream removed, broadcasting: 1 I0424 13:04:30.507549 6 log.go:172] (0xc0019a6420) (0xc002f33e00) Stream removed, broadcasting: 3 I0424 13:04:30.507586 6 log.go:172] (0xc0019a6420) Go away received I0424 13:04:30.507722 6 log.go:172] (0xc0019a6420) (0xc002f33ea0) Stream removed, broadcasting: 5 Apr 24 13:04:30.507: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:04:30.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-737" for this suite. Apr 24 13:04:48.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:04:48.619: INFO: namespace pod-network-test-737 deletion completed in 18.107561288s • [SLOW TEST:40.545 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a 
kubernetes client Apr 24 13:04:48.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 24 13:04:48.669: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1304059e-0d32-48d0-a4e6-cbd13331abfd" in namespace "projected-2354" to be "success or failure" Apr 24 13:04:48.710: INFO: Pod "downwardapi-volume-1304059e-0d32-48d0-a4e6-cbd13331abfd": Phase="Pending", Reason="", readiness=false. Elapsed: 40.984427ms Apr 24 13:04:50.714: INFO: Pod "downwardapi-volume-1304059e-0d32-48d0-a4e6-cbd13331abfd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044894502s Apr 24 13:04:52.719: INFO: Pod "downwardapi-volume-1304059e-0d32-48d0-a4e6-cbd13331abfd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.049494372s STEP: Saw pod success Apr 24 13:04:52.719: INFO: Pod "downwardapi-volume-1304059e-0d32-48d0-a4e6-cbd13331abfd" satisfied condition "success or failure" Apr 24 13:04:52.722: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-1304059e-0d32-48d0-a4e6-cbd13331abfd container client-container: STEP: delete the pod Apr 24 13:04:52.790: INFO: Waiting for pod downwardapi-volume-1304059e-0d32-48d0-a4e6-cbd13331abfd to disappear Apr 24 13:04:52.802: INFO: Pod downwardapi-volume-1304059e-0d32-48d0-a4e6-cbd13331abfd no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:04:52.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2354" for this suite. Apr 24 13:05:00.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:05:00.930: INFO: namespace projected-2354 deletion completed in 8.1249822s • [SLOW TEST:12.311 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:05:00.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 24 13:05:01.688: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Apr 24 13:05:01.696: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:05:01.700: INFO: Number of nodes with available pods: 0 Apr 24 13:05:01.700: INFO: Node iruya-worker is running more than one daemon pod Apr 24 13:05:02.705: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:05:02.708: INFO: Number of nodes with available pods: 0 Apr 24 13:05:02.708: INFO: Node iruya-worker is running more than one daemon pod Apr 24 13:05:03.706: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:05:03.709: INFO: Number of nodes with available pods: 0 Apr 24 13:05:03.709: INFO: Node iruya-worker is running more than one daemon pod Apr 24 13:05:04.735: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:05:04.739: INFO: Number of nodes with available pods: 0 Apr 24 13:05:04.739: INFO: Node iruya-worker is running more than one daemon pod Apr 24 13:05:05.717: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node Apr 24 13:05:05.720: INFO: Number of nodes with available pods: 1 Apr 24 13:05:05.720: INFO: Node iruya-worker is running more than one daemon pod Apr 24 13:05:06.706: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:05:06.710: INFO: Number of nodes with available pods: 2 Apr 24 13:05:06.710: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Apr 24 13:05:06.737: INFO: Wrong image for pod: daemon-set-7l75t. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 24 13:05:06.737: INFO: Wrong image for pod: daemon-set-7mbnn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 24 13:05:06.759: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:05:07.763: INFO: Wrong image for pod: daemon-set-7l75t. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 24 13:05:07.763: INFO: Wrong image for pod: daemon-set-7mbnn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 24 13:05:07.767: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:05:08.764: INFO: Wrong image for pod: daemon-set-7l75t. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 24 13:05:08.764: INFO: Wrong image for pod: daemon-set-7mbnn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Apr 24 13:05:08.768: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:05:09.764: INFO: Wrong image for pod: daemon-set-7l75t. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 24 13:05:09.764: INFO: Wrong image for pod: daemon-set-7mbnn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 24 13:05:09.764: INFO: Pod daemon-set-7mbnn is not available Apr 24 13:05:09.768: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:05:10.764: INFO: Wrong image for pod: daemon-set-7l75t. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 24 13:05:10.764: INFO: Wrong image for pod: daemon-set-7mbnn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 24 13:05:10.764: INFO: Pod daemon-set-7mbnn is not available Apr 24 13:05:10.769: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:05:11.764: INFO: Wrong image for pod: daemon-set-7l75t. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 24 13:05:11.764: INFO: Wrong image for pod: daemon-set-7mbnn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Apr 24 13:05:11.764: INFO: Pod daemon-set-7mbnn is not available Apr 24 13:05:11.767: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:05:12.764: INFO: Wrong image for pod: daemon-set-7l75t. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 24 13:05:12.764: INFO: Pod daemon-set-sg7dz is not available Apr 24 13:05:12.769: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:05:13.764: INFO: Wrong image for pod: daemon-set-7l75t. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 24 13:05:13.764: INFO: Pod daemon-set-sg7dz is not available Apr 24 13:05:13.768: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:05:14.765: INFO: Wrong image for pod: daemon-set-7l75t. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 24 13:05:14.765: INFO: Pod daemon-set-sg7dz is not available Apr 24 13:05:14.770: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:05:15.764: INFO: Wrong image for pod: daemon-set-7l75t. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 24 13:05:15.767: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:05:16.774: INFO: Wrong image for pod: daemon-set-7l75t. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 24 13:05:16.778: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:05:17.764: INFO: Wrong image for pod: daemon-set-7l75t. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 24 13:05:17.764: INFO: Pod daemon-set-7l75t is not available Apr 24 13:05:17.783: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:05:18.764: INFO: Wrong image for pod: daemon-set-7l75t. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 24 13:05:18.764: INFO: Pod daemon-set-7l75t is not available Apr 24 13:05:18.768: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:05:19.771: INFO: Wrong image for pod: daemon-set-7l75t. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 24 13:05:19.771: INFO: Pod daemon-set-7l75t is not available Apr 24 13:05:19.774: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:05:20.764: INFO: Wrong image for pod: daemon-set-7l75t. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Apr 24 13:05:20.764: INFO: Pod daemon-set-7l75t is not available
Apr 24 13:05:20.767: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 24 13:05:21.764: INFO: Wrong image for pod: daemon-set-7l75t. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 24 13:05:21.764: INFO: Pod daemon-set-7l75t is not available
Apr 24 13:05:21.768: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 24 13:05:22.764: INFO: Pod daemon-set-59vdq is not available
Apr 24 13:05:22.769: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Apr 24 13:05:22.773: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 24 13:05:22.776: INFO: Number of nodes with available pods: 1
Apr 24 13:05:22.776: INFO: Node iruya-worker is running more than one daemon pod
Apr 24 13:05:23.928: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 24 13:05:23.931: INFO: Number of nodes with available pods: 1
Apr 24 13:05:23.931: INFO: Node iruya-worker is running more than one daemon pod
Apr 24 13:05:24.782: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 24 13:05:24.786: INFO: Number of nodes with available pods: 1
Apr 24 13:05:24.786: INFO: Node iruya-worker is running more than one daemon pod
Apr 24 13:05:25.796: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 24 13:05:25.800: INFO: Number of nodes with available pods: 1
Apr 24 13:05:25.800: INFO: Node iruya-worker is running more than one daemon pod
Apr 24 13:05:26.781: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 24 13:05:26.784: INFO: Number of nodes with available pods: 2
Apr 24 13:05:26.784: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9504, will wait for the garbage collector to delete the pods
Apr 24 13:05:26.856: INFO: Deleting DaemonSet.extensions daemon-set took: 6.399297ms
Apr 24 13:05:27.156: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.286926ms
Apr 24 13:05:32.260: INFO: Number of nodes with available pods: 0
Apr 24 13:05:32.260: INFO: Number of running nodes: 0, number of available pods: 0
Apr 24 13:05:32.263: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9504/daemonsets","resourceVersion":"7175759"},"items":null}
Apr 24 13:05:32.265: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9504/pods","resourceVersion":"7175759"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:05:32.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9504" for this suite.
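[Editor's note] The polling above is a DaemonSet RollingUpdate: the image is changed from docker.io/library/nginx:1.14-alpine to the redis test image, and the test waits until no pod reports "Wrong image". A minimal manifest that would exercise the same path looks roughly like this; the DaemonSet name and namespace are taken from the log, while the label key and container name are illustrative:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-9504
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set     # label key is illustrative
  updateStrategy:
    type: RollingUpdate              # old pods are replaced node by node, as polled above
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app                    # container name is illustrative
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0   # the updated image from the log
```

Note that pods on the tainted control-plane node are skipped throughout, since the DaemonSet carries no toleration for node-role.kubernetes.io/master:NoSchedule.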
Apr 24 13:05:38.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:05:38.361: INFO: namespace daemonsets-9504 deletion completed in 6.082607943s • [SLOW TEST:37.430 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:05:38.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Apr 24 13:05:42.478: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-20e3d86f-b994-4d06-8d3a-1b54865b2e18,GenerateName:,Namespace:events-7030,SelfLink:/api/v1/namespaces/events-7030/pods/send-events-20e3d86f-b994-4d06-8d3a-1b54865b2e18,UID:b2d05f9b-3bf1-4dbb-b19b-c4c5b3f020b1,ResourceVersion:7175819,Generation:0,CreationTimestamp:2020-04-24 13:05:38 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 438108000,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-x2h4c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x2h4c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-x2h4c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a4ca10} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a4ca30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:05:38 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:05:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:05:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:05:38 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.25,StartTime:2020-04-24 13:05:38 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-04-24 13:05:40 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://d66bd247cc0a15d4d351ce0d66768aa1940a7dd68e49653d00ad8a82dc8643c5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Apr 24 13:05:44.484: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Apr 24 13:05:46.489: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:05:46.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7030" for this suite. 
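[Editor's note] The object dump above belongs to the events test: a pod is created and the suite then checks that the scheduler and the kubelet each emitted an event about it. Reconstructed from the dump (labels, container name "p", image, and port are as logged), the pod under test is approximately:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: send-events-20e3d86f-b994-4d06-8d3a-1b54865b2e18
  namespace: events-7030
  labels:
    name: foo
    time: "438108000"
spec:
  containers:
  - name: p
    image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
    ports:
    - containerPort: 80
      protocol: TCP
```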
Apr 24 13:06:24.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:06:24.622: INFO: namespace events-7030 deletion completed in 38.104282522s • [SLOW TEST:46.261 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:06:24.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-3ab0d922-f346-4977-bac4-a27912d7a334 STEP: Creating a pod to test consume configMaps Apr 24 13:06:24.686: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a0986f1e-eaf5-42a2-b4b9-3bddeaad9be4" in namespace "projected-7547" to be "success or failure" Apr 24 13:06:24.717: INFO: Pod "pod-projected-configmaps-a0986f1e-eaf5-42a2-b4b9-3bddeaad9be4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 31.169325ms Apr 24 13:06:26.721: INFO: Pod "pod-projected-configmaps-a0986f1e-eaf5-42a2-b4b9-3bddeaad9be4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034828896s Apr 24 13:06:28.726: INFO: Pod "pod-projected-configmaps-a0986f1e-eaf5-42a2-b4b9-3bddeaad9be4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039598646s STEP: Saw pod success Apr 24 13:06:28.726: INFO: Pod "pod-projected-configmaps-a0986f1e-eaf5-42a2-b4b9-3bddeaad9be4" satisfied condition "success or failure" Apr 24 13:06:28.729: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-a0986f1e-eaf5-42a2-b4b9-3bddeaad9be4 container projected-configmap-volume-test: STEP: delete the pod Apr 24 13:06:28.787: INFO: Waiting for pod pod-projected-configmaps-a0986f1e-eaf5-42a2-b4b9-3bddeaad9be4 to disappear Apr 24 13:06:28.793: INFO: Pod pod-projected-configmaps-a0986f1e-eaf5-42a2-b4b9-3bddeaad9be4 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:06:28.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7547" for this suite. 
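[Editor's note] The projected-ConfigMap test above follows the standard volume-consumption pattern: mount the ConfigMap through a `projected` volume, run a container that reads a projected key, and treat pod success as the assertion. A sketch of such a pod follows; the ConfigMap name and namespace are from the log, while the pod name, image, key, and mount path are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # illustrative name
  namespace: projected-7547
spec:
  restartPolicy: Never                     # test pods run to completion ("Succeeded")
  containers:
  - name: projected-configmap-volume-test
    image: busybox                         # illustrative; the suite uses its own test image
    command: ["cat", "/etc/projected-configmap-volume/data-1"]   # key name illustrative
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-3ab0d922-f346-4977-bac4-a27912d7a334
          # no items list: every key is projected under its own name
```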
Apr 24 13:06:34.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:06:34.883: INFO: namespace projected-7547 deletion completed in 6.086822574s • [SLOW TEST:10.261 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:06:34.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-1734/configmap-test-7ac866a2-80bb-4c8e-a65b-601a6773c0e4 STEP: Creating a pod to test consume configMaps Apr 24 13:06:34.957: INFO: Waiting up to 5m0s for pod "pod-configmaps-6c74d0fe-fad6-44ce-a576-66c0b89cac84" in namespace "configmap-1734" to be "success or failure" Apr 24 13:06:34.960: INFO: Pod "pod-configmaps-6c74d0fe-fad6-44ce-a576-66c0b89cac84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.744782ms Apr 24 13:06:36.993: INFO: Pod "pod-configmaps-6c74d0fe-fad6-44ce-a576-66c0b89cac84": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.035901598s Apr 24 13:06:38.997: INFO: Pod "pod-configmaps-6c74d0fe-fad6-44ce-a576-66c0b89cac84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040282125s STEP: Saw pod success Apr 24 13:06:38.998: INFO: Pod "pod-configmaps-6c74d0fe-fad6-44ce-a576-66c0b89cac84" satisfied condition "success or failure" Apr 24 13:06:39.000: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-6c74d0fe-fad6-44ce-a576-66c0b89cac84 container env-test: STEP: delete the pod Apr 24 13:06:39.033: INFO: Waiting for pod pod-configmaps-6c74d0fe-fad6-44ce-a576-66c0b89cac84 to disappear Apr 24 13:06:39.050: INFO: Pod pod-configmaps-6c74d0fe-fad6-44ce-a576-66c0b89cac84 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:06:39.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1734" for this suite. Apr 24 13:06:45.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:06:45.150: INFO: namespace configmap-1734 deletion completed in 6.09583654s • [SLOW TEST:10.266 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:06:45.150: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-15770cff-fb70-4f0f-b843-494a56bccb5e STEP: Creating a pod to test consume secrets Apr 24 13:06:45.244: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0815b8c9-47da-4df1-a509-ab5868c07524" in namespace "projected-3732" to be "success or failure" Apr 24 13:06:45.248: INFO: Pod "pod-projected-secrets-0815b8c9-47da-4df1-a509-ab5868c07524": Phase="Pending", Reason="", readiness=false. Elapsed: 3.229766ms Apr 24 13:06:47.255: INFO: Pod "pod-projected-secrets-0815b8c9-47da-4df1-a509-ab5868c07524": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010610236s Apr 24 13:06:49.259: INFO: Pod "pod-projected-secrets-0815b8c9-47da-4df1-a509-ab5868c07524": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014453279s STEP: Saw pod success Apr 24 13:06:49.259: INFO: Pod "pod-projected-secrets-0815b8c9-47da-4df1-a509-ab5868c07524" satisfied condition "success or failure" Apr 24 13:06:49.262: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-0815b8c9-47da-4df1-a509-ab5868c07524 container projected-secret-volume-test: STEP: delete the pod Apr 24 13:06:49.294: INFO: Waiting for pod pod-projected-secrets-0815b8c9-47da-4df1-a509-ab5868c07524 to disappear Apr 24 13:06:49.315: INFO: Pod pod-projected-secrets-0815b8c9-47da-4df1-a509-ab5868c07524 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:06:49.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3732" for this suite. Apr 24 13:06:55.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:06:55.432: INFO: namespace projected-3732 deletion completed in 6.113824209s • [SLOW TEST:10.282 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:06:55.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 24 13:06:55.498: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:07:02.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4090" for this suite. Apr 24 13:07:24.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:07:24.210: INFO: namespace init-container-4090 deletion completed in 22.084001788s • [SLOW TEST:28.777 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:07:24.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a 
default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:07:30.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-715" for this suite. Apr 24 13:07:36.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:07:36.577: INFO: namespace namespaces-715 deletion completed in 6.113064959s STEP: Destroying namespace "nsdeletetest-4804" for this suite. Apr 24 13:07:36.579: INFO: Namespace nsdeletetest-4804 was already deleted STEP: Destroying namespace "nsdeletetest-1407" for this suite. 
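[Editor's note] The namespaces test above relies on namespace deletion being a cascading garbage collection: any namespaced object, such as the Service it creates, is removed with the namespace, so the recreated namespace comes back empty. An illustrative Service of the kind the test creates (both names are placeholders, not values from the log):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: test-service            # illustrative name
  namespace: nsdeletetest-4804  # deleting this namespace garbage-collects the Service
spec:
  selector:
    name: test-pod              # illustrative selector
  ports:
  - port: 80
    protocol: TCP
```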
Apr 24 13:07:42.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:07:42.667: INFO: namespace nsdeletetest-1407 deletion completed in 6.087945252s • [SLOW TEST:18.458 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:07:42.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 24 13:07:42.751: INFO: Creating deployment "nginx-deployment" Apr 24 13:07:42.755: INFO: Waiting for observed generation 1 Apr 24 13:07:44.778: INFO: Waiting for all required pods to come up Apr 24 13:07:44.782: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Apr 24 13:07:52.793: INFO: Waiting for deployment "nginx-deployment" to complete Apr 24 13:07:52.799: INFO: Updating deployment "nginx-deployment" with a non-existent image Apr 24 13:07:52.806: INFO: Updating deployment 
nginx-deployment Apr 24 13:07:52.806: INFO: Waiting for observed generation 2 Apr 24 13:07:54.859: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Apr 24 13:07:54.861: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Apr 24 13:07:54.863: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Apr 24 13:07:54.867: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 24 13:07:54.867: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Apr 24 13:07:54.869: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Apr 24 13:07:54.872: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Apr 24 13:07:54.872: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Apr 24 13:07:54.876: INFO: Updating deployment nginx-deployment Apr 24 13:07:54.876: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Apr 24 13:07:54.898: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Apr 24 13:07:54.916: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 24 13:07:55.068: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-7066,SelfLink:/apis/apps/v1/namespaces/deployment-7066/deployments/nginx-deployment,UID:aa574663-bc62-4724-bd8f-38c1193a2bb0,ResourceVersion:7176422,Generation:3,CreationTimestamp:2020-04-24 13:07:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-04-24 13:07:53 +0000 UTC 2020-04-24 13:07:42 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-04-24 13:07:54 +0000 UTC 2020-04-24 13:07:54 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Apr 24 13:07:55.182: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-7066,SelfLink:/apis/apps/v1/namespaces/deployment-7066/replicasets/nginx-deployment-55fb7cb77f,UID:5ca41425-fc3c-4d6d-8bea-0b61fcf4b84e,ResourceVersion:7176473,Generation:3,CreationTimestamp:2020-04-24 13:07:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment aa574663-bc62-4724-bd8f-38c1193a2bb0 0xc0027458e7 0xc0027458e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 24 13:07:55.182: INFO: All old ReplicaSets of Deployment "nginx-deployment": Apr 24 13:07:55.182: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-7066,SelfLink:/apis/apps/v1/namespaces/deployment-7066/replicasets/nginx-deployment-7b8c6f4498,UID:d87d507c-6db7-459b-8c60-df1d3003a40a,ResourceVersion:7176468,Generation:3,CreationTimestamp:2020-04-24 13:07:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment aa574663-bc62-4724-bd8f-38c1193a2bb0 0xc0027459b7 0xc0027459b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Apr 24 13:07:55.281: INFO: Pod "nginx-deployment-55fb7cb77f-5j4k7" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5j4k7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7066,SelfLink:/api/v1/namespaces/deployment-7066/pods/nginx-deployment-55fb7cb77f-5j4k7,UID:84040b57-96cc-4458-95e9-67f79b1753cf,ResourceVersion:7176444,Generation:0,CreationTimestamp:2020-04-24 13:07:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5ca41425-fc3c-4d6d-8bea-0b61fcf4b84e 0xc001b55787 0xc001b55788}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-55z8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-55z8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-55z8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc001b55870} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b55890}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:54 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 24 13:07:55.281: INFO: Pod "nginx-deployment-55fb7cb77f-6hm7f" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6hm7f,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7066,SelfLink:/api/v1/namespaces/deployment-7066/pods/nginx-deployment-55fb7cb77f-6hm7f,UID:93c974da-f259-4d20-adad-879c0abe1831,ResourceVersion:7176401,Generation:0,CreationTimestamp:2020-04-24 13:07:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5ca41425-fc3c-4d6d-8bea-0b61fcf4b84e 0xc001b55997 0xc001b55998}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-55z8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-55z8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-55z8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b55a70} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b55a90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:52 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-24 13:07:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 24 13:07:55.281: INFO: Pod "nginx-deployment-55fb7cb77f-6v4w5" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6v4w5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7066,SelfLink:/api/v1/namespaces/deployment-7066/pods/nginx-deployment-55fb7cb77f-6v4w5,UID:4c9b2547-ecd5-48fe-afd0-869a24e55fbe,ResourceVersion:7176448,Generation:0,CreationTimestamp:2020-04-24 13:07:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5ca41425-fc3c-4d6d-8bea-0b61fcf4b84e 0xc001b55bf0 0xc001b55bf1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-55z8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-55z8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-55z8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc001b55d20} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b55d40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:54 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 24 13:07:55.282: INFO: Pod "nginx-deployment-55fb7cb77f-8pzfl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8pzfl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7066,SelfLink:/api/v1/namespaces/deployment-7066/pods/nginx-deployment-55fb7cb77f-8pzfl,UID:0dc66da8-ee89-4303-89da-f5b936bea2a8,ResourceVersion:7176453,Generation:0,CreationTimestamp:2020-04-24 13:07:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5ca41425-fc3c-4d6d-8bea-0b61fcf4b84e 0xc001b55e87 0xc001b55e88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-55z8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-55z8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-55z8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b55f00} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b55f20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 24 13:07:55.282: INFO: Pod "nginx-deployment-55fb7cb77f-9rgcs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9rgcs,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7066,SelfLink:/api/v1/namespaces/deployment-7066/pods/nginx-deployment-55fb7cb77f-9rgcs,UID:27cdb479-0534-4572-a0be-930832667daf,ResourceVersion:7176403,Generation:0,CreationTimestamp:2020-04-24 13:07:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5ca41425-fc3c-4d6d-8bea-0b61fcf4b84e 0xc001b55fa7 0xc001b55fa8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-55z8v 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-55z8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-55z8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002690020} {node.kubernetes.io/unreachable Exists NoExecute 0xc002690040}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:53 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-24 13:07:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 24 13:07:55.282: INFO: Pod "nginx-deployment-55fb7cb77f-bz42r" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bz42r,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7066,SelfLink:/api/v1/namespaces/deployment-7066/pods/nginx-deployment-55fb7cb77f-bz42r,UID:2aaf3b04-3c87-46ee-a4dd-496e4faae65a,ResourceVersion:7176385,Generation:0,CreationTimestamp:2020-04-24 13:07:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5ca41425-fc3c-4d6d-8bea-0b61fcf4b84e 0xc002690110 0xc002690111}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-55z8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-55z8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-55z8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002690190} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026901b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:52 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-24 13:07:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 24 13:07:55.282: INFO: Pod "nginx-deployment-55fb7cb77f-cj8t4" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cj8t4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7066,SelfLink:/api/v1/namespaces/deployment-7066/pods/nginx-deployment-55fb7cb77f-cj8t4,UID:aad414ea-28df-4d30-be56-1749e4351777,ResourceVersion:7176407,Generation:0,CreationTimestamp:2020-04-24 13:07:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5ca41425-fc3c-4d6d-8bea-0b61fcf4b84e 0xc002690280 0xc002690281}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-55z8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-55z8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-55z8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002690300} {node.kubernetes.io/unreachable Exists NoExecute 0xc002690320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-24 13:07:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 24 13:07:55.282: INFO: Pod "nginx-deployment-55fb7cb77f-cmqh4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cmqh4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7066,SelfLink:/api/v1/namespaces/deployment-7066/pods/nginx-deployment-55fb7cb77f-cmqh4,UID:6aa547af-9a37-428a-a0dd-e175e3bc2b32,ResourceVersion:7176459,Generation:0,CreationTimestamp:2020-04-24 13:07:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5ca41425-fc3c-4d6d-8bea-0b61fcf4b84e 0xc0026903f0 0xc0026903f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-55z8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-55z8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-55z8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002690470} {node.kubernetes.io/unreachable Exists NoExecute 0xc002690490}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 24 13:07:55.282: INFO: Pod "nginx-deployment-55fb7cb77f-dw8mh" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-dw8mh,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7066,SelfLink:/api/v1/namespaces/deployment-7066/pods/nginx-deployment-55fb7cb77f-dw8mh,UID:12971743-317b-4679-8815-814263893f94,ResourceVersion:7176455,Generation:0,CreationTimestamp:2020-04-24 13:07:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5ca41425-fc3c-4d6d-8bea-0b61fcf4b84e 0xc002690517 0xc002690518}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-55z8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-55z8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-55z8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc002690590} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026905b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 24 13:07:55.283: INFO: Pod "nginx-deployment-55fb7cb77f-mlcdj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mlcdj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7066,SelfLink:/api/v1/namespaces/deployment-7066/pods/nginx-deployment-55fb7cb77f-mlcdj,UID:c375b222-fdad-459f-8682-23a6295a129d,ResourceVersion:7176479,Generation:0,CreationTimestamp:2020-04-24 13:07:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5ca41425-fc3c-4d6d-8bea-0b61fcf4b84e 0xc0026906e7 0xc0026906e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-55z8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-55z8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-55z8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026907c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026907f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:54 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-24 13:07:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 24 13:07:55.283: INFO: Pod "nginx-deployment-55fb7cb77f-nmx8v" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nmx8v,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7066,SelfLink:/api/v1/namespaces/deployment-7066/pods/nginx-deployment-55fb7cb77f-nmx8v,UID:0567ea99-22a1-46c1-a443-643c0e1324de,ResourceVersion:7176461,Generation:0,CreationTimestamp:2020-04-24 13:07:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5ca41425-fc3c-4d6d-8bea-0b61fcf4b84e 0xc002690950 0xc002690951}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-55z8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-55z8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-55z8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc002690a10} {node.kubernetes.io/unreachable Exists NoExecute 0xc002690a30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 24 13:07:55.283: INFO: Pod "nginx-deployment-55fb7cb77f-pdswb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-pdswb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7066,SelfLink:/api/v1/namespaces/deployment-7066/pods/nginx-deployment-55fb7cb77f-pdswb,UID:345219d7-dead-46be-8d3d-fd1f9321d085,ResourceVersion:7176383,Generation:0,CreationTimestamp:2020-04-24 13:07:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5ca41425-fc3c-4d6d-8bea-0b61fcf4b84e 0xc002690ab7 0xc002690ab8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-55z8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-55z8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-55z8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002690bc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002690be0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:52 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-24 13:07:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 24 13:07:55.283: INFO: Pod "nginx-deployment-55fb7cb77f-zz8dd" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zz8dd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7066,SelfLink:/api/v1/namespaces/deployment-7066/pods/nginx-deployment-55fb7cb77f-zz8dd,UID:76522311-e567-481e-8aa2-0fed791c11cc,ResourceVersion:7176466,Generation:0,CreationTimestamp:2020-04-24 13:07:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5ca41425-fc3c-4d6d-8bea-0b61fcf4b84e 0xc002690d60 0xc002690d61}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-55z8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-55z8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-55z8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc002690e00} {node.kubernetes.io/unreachable Exists NoExecute 0xc002690e30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 24 13:07:55.283: INFO: Pod "nginx-deployment-7b8c6f4498-5xj9x" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5xj9x,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7066,SelfLink:/api/v1/namespaces/deployment-7066/pods/nginx-deployment-7b8c6f4498-5xj9x,UID:b48561ac-e8e3-46a3-9b41-c1228d3f8483,ResourceVersion:7176334,Generation:0,CreationTimestamp:2020-04-24 13:07:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d87d507c-6db7-459b-8c60-df1d3003a40a 0xc002690ef7 0xc002690ef8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-55z8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-55z8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-55z8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002690fa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002690fe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.32,StartTime:2020-04-24 13:07:43 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-24 13:07:50 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://58e104f7b0c9445d6410990ced84249329410c577c1ecce71bab52e361d9516b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 24 13:07:55.284: INFO: Pod "nginx-deployment-7b8c6f4498-65mrw" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-65mrw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7066,SelfLink:/api/v1/namespaces/deployment-7066/pods/nginx-deployment-7b8c6f4498-65mrw,UID:a0da2dbe-73ec-4376-b20e-2e95a8b10d0d,ResourceVersion:7176308,Generation:0,CreationTimestamp:2020-04-24 13:07:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d87d507c-6db7-459b-8c60-df1d3003a40a 0xc0026911b7 0xc0026911b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-55z8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-55z8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-55z8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002691290} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026912b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.29,StartTime:2020-04-24 13:07:42 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-24 13:07:49 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://62b634b6a0ba7f67a75ff93756eff53dec344da7cac485e6eaef122bbf6d6be9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 24 13:07:55.284: INFO: Pod "nginx-deployment-7b8c6f4498-6cntn" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6cntn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7066,SelfLink:/api/v1/namespaces/deployment-7066/pods/nginx-deployment-7b8c6f4498-6cntn,UID:85f5de18-399c-4987-a710-f8c30b8d439d,ResourceVersion:7176454,Generation:0,CreationTimestamp:2020-04-24 13:07:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d87d507c-6db7-459b-8c60-df1d3003a40a 0xc0026913b7 0xc0026913b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-55z8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-55z8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-55z8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002691430} {node.kubernetes.io/unreachable Exists NoExecute 0xc002691450}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 24 13:07:55.284: INFO: Pod "nginx-deployment-7b8c6f4498-8hwwr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8hwwr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7066,SelfLink:/api/v1/namespaces/deployment-7066/pods/nginx-deployment-7b8c6f4498-8hwwr,UID:ea962bdd-ddfd-464a-b085-e6904995ca01,ResourceVersion:7176442,Generation:0,CreationTimestamp:2020-04-24 13:07:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d87d507c-6db7-459b-8c60-df1d3003a40a 0xc002691657 0xc002691658}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-55z8v 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-55z8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-55z8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026916d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002691780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:54 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 24 13:07:55.284: INFO: Pod "nginx-deployment-7b8c6f4498-8nx86" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8nx86,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7066,SelfLink:/api/v1/namespaces/deployment-7066/pods/nginx-deployment-7b8c6f4498-8nx86,UID:73d78cbe-483b-40c7-97a0-85a1dded4c14,ResourceVersion:7176464,Generation:0,CreationTimestamp:2020-04-24 13:07:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d87d507c-6db7-459b-8c60-df1d3003a40a 0xc002691847 0xc002691848}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-55z8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-55z8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-55z8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002691920} {node.kubernetes.io/unreachable Exists NoExecute 0xc002691960}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:54 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-24 13:07:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 24 13:07:55.284: INFO: Pod "nginx-deployment-7b8c6f4498-8xvhs" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8xvhs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7066,SelfLink:/api/v1/namespaces/deployment-7066/pods/nginx-deployment-7b8c6f4498-8xvhs,UID:64dfd626-b528-4dc8-8c31-0839f539a016,ResourceVersion:7176460,Generation:0,CreationTimestamp:2020-04-24 13:07:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d87d507c-6db7-459b-8c60-df1d3003a40a 0xc002691a87 0xc002691a88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-55z8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-55z8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-55z8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002691ba0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002691bc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 24 13:07:55.284: INFO: Pod "nginx-deployment-7b8c6f4498-b2jrw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-b2jrw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7066,SelfLink:/api/v1/namespaces/deployment-7066/pods/nginx-deployment-7b8c6f4498-b2jrw,UID:34237663-7e34-4f7f-adf7-bed2d86ab804,ResourceVersion:7176474,Generation:0,CreationTimestamp:2020-04-24 13:07:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d87d507c-6db7-459b-8c60-df1d3003a40a 0xc002691d17 0xc002691d18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-55z8v 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-55z8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-55z8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002691d90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002691db0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:54 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-24 13:07:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 24 13:07:55.284: INFO: Pod "nginx-deployment-7b8c6f4498-bxk74" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bxk74,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7066,SelfLink:/api/v1/namespaces/deployment-7066/pods/nginx-deployment-7b8c6f4498-bxk74,UID:f1365ac0-2f96-486f-97c4-0028ae320d85,ResourceVersion:7176449,Generation:0,CreationTimestamp:2020-04-24 13:07:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d87d507c-6db7-459b-8c60-df1d3003a40a 0xc002691e77 0xc002691e78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-55z8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-55z8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-55z8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002691ef0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002691fe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:54 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 24 13:07:55.287: INFO: Pod "nginx-deployment-7b8c6f4498-fqc6p" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fqc6p,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7066,SelfLink:/api/v1/namespaces/deployment-7066/pods/nginx-deployment-7b8c6f4498-fqc6p,UID:15200c8e-9892-4514-890b-7105f9822c9b,ResourceVersion:7176330,Generation:0,CreationTimestamp:2020-04-24 13:07:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d87d507c-6db7-459b-8c60-df1d3003a40a 0xc002a4c067 0xc002a4c068}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-55z8v {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-55z8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-55z8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a4c0e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a4c100}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.31,StartTime:2020-04-24 13:07:43 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-04-24 13:07:50 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://07378cfe995c32d73af3a39aaad1dc0055e8a8024974c41d074074ff39456946}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 24 13:07:55.288: INFO: Pod "nginx-deployment-7b8c6f4498-hqcj5" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hqcj5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7066,SelfLink:/api/v1/namespaces/deployment-7066/pods/nginx-deployment-7b8c6f4498-hqcj5,UID:e23f7a6b-d0af-41d6-9bde-912575db0bd9,ResourceVersion:7176319,Generation:0,CreationTimestamp:2020-04-24 13:07:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d87d507c-6db7-459b-8c60-df1d3003a40a 0xc002a4c1d7 0xc002a4c1d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-55z8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-55z8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-55z8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a4c250} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a4c270}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.224,StartTime:2020-04-24 13:07:42 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-24 13:07:48 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ea394e2bf90ac74e1ec6d131589e370c74742672d19916ef446b00c210483293}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 24 13:07:55.288: INFO: Pod "nginx-deployment-7b8c6f4498-lq77z" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lq77z,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7066,SelfLink:/api/v1/namespaces/deployment-7066/pods/nginx-deployment-7b8c6f4498-lq77z,UID:e3151414-0fcc-4621-9466-560480880a8e,ResourceVersion:7176352,Generation:0,CreationTimestamp:2020-04-24 13:07:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d87d507c-6db7-459b-8c60-df1d3003a40a 0xc002a4c347 0xc002a4c348}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-55z8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-55z8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-55z8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a4c3c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a4c3e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.227,StartTime:2020-04-24 13:07:43 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-24 13:07:51 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ce46cb4db3dfb97b9c185bf0d1ad0e81c5754b39481fe41b4ba13c4999dfb2c1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 24 13:07:55.288: INFO: Pod "nginx-deployment-7b8c6f4498-q6rzd" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-q6rzd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7066,SelfLink:/api/v1/namespaces/deployment-7066/pods/nginx-deployment-7b8c6f4498-q6rzd,UID:0e452709-75f7-4ebe-b6a3-018cfe7e079f,ResourceVersion:7176432,Generation:0,CreationTimestamp:2020-04-24 13:07:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d87d507c-6db7-459b-8c60-df1d3003a40a 0xc002a4c4b7 0xc002a4c4b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-55z8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-55z8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-55z8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a4c530} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a4c550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:54 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 24 13:07:55.288: INFO: Pod "nginx-deployment-7b8c6f4498-qjxbm" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qjxbm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7066,SelfLink:/api/v1/namespaces/deployment-7066/pods/nginx-deployment-7b8c6f4498-qjxbm,UID:094ea490-e66a-4d8a-9722-fced8a86605f,ResourceVersion:7176349,Generation:0,CreationTimestamp:2020-04-24 13:07:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d87d507c-6db7-459b-8c60-df1d3003a40a 0xc002a4c5d7 0xc002a4c5d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-55z8v {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-55z8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-55z8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a4c650} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a4c670}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.225,StartTime:2020-04-24 13:07:42 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-04-24 13:07:51 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://fac4538966daab76654379920183d0450490544576de39f5b8e5bd14c0a5cbe1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 24 13:07:55.289: INFO: Pod "nginx-deployment-7b8c6f4498-qnjw2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qnjw2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7066,SelfLink:/api/v1/namespaces/deployment-7066/pods/nginx-deployment-7b8c6f4498-qnjw2,UID:e2979be8-6bc9-4f45-b5a4-9720840d133e,ResourceVersion:7176458,Generation:0,CreationTimestamp:2020-04-24 13:07:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d87d507c-6db7-459b-8c60-df1d3003a40a 0xc002a4c747 0xc002a4c748}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-55z8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-55z8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-55z8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a4c7c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a4c7e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 24 13:07:55.289: INFO: Pod "nginx-deployment-7b8c6f4498-rb6s2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rb6s2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7066,SelfLink:/api/v1/namespaces/deployment-7066/pods/nginx-deployment-7b8c6f4498-rb6s2,UID:f731cc85-0d7f-497a-8b5b-7b687bd047eb,ResourceVersion:7176316,Generation:0,CreationTimestamp:2020-04-24 13:07:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d87d507c-6db7-459b-8c60-df1d3003a40a 0xc002a4c867 0xc002a4c868}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-55z8v {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-55z8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-55z8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a4c8e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a4c900}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.28,StartTime:2020-04-24 13:07:42 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-04-24 13:07:48 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://32049326fc716bdc1133fb86cf4a80375959cc33d393b786fee9d199031e44c9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 24 13:07:55.289: INFO: Pod "nginx-deployment-7b8c6f4498-s5pcg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-s5pcg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7066,SelfLink:/api/v1/namespaces/deployment-7066/pods/nginx-deployment-7b8c6f4498-s5pcg,UID:38543efe-df79-43cc-9fd1-21b593ee58ed,ResourceVersion:7176312,Generation:0,CreationTimestamp:2020-04-24 13:07:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d87d507c-6db7-459b-8c60-df1d3003a40a 0xc002a4c9d7 0xc002a4c9d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-55z8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-55z8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-55z8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a4ca50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a4ca70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.30,StartTime:2020-04-24 13:07:42 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-24 13:07:48 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c6f3332b467dabc219e8f771f64655fc4113ed288846c84d74e3bde10a070175}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 24 13:07:55.289: INFO: Pod "nginx-deployment-7b8c6f4498-t86fk" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-t86fk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7066,SelfLink:/api/v1/namespaces/deployment-7066/pods/nginx-deployment-7b8c6f4498-t86fk,UID:62fa8055-1b19-464c-ab9c-f161679c4a73,ResourceVersion:7176450,Generation:0,CreationTimestamp:2020-04-24 13:07:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d87d507c-6db7-459b-8c60-df1d3003a40a 0xc002a4cb57 0xc002a4cb58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-55z8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-55z8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-55z8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a4cbd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a4cbf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:54 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 24 13:07:55.289: INFO: Pod "nginx-deployment-7b8c6f4498-t9g9n" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-t9g9n,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7066,SelfLink:/api/v1/namespaces/deployment-7066/pods/nginx-deployment-7b8c6f4498-t9g9n,UID:82339c76-af31-4988-a153-61c610ce522b,ResourceVersion:7176457,Generation:0,CreationTimestamp:2020-04-24 13:07:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d87d507c-6db7-459b-8c60-df1d3003a40a 0xc002a4cc77 0xc002a4cc78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-55z8v 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-55z8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-55z8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a4ccf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a4cd10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 24 13:07:55.289: INFO: Pod "nginx-deployment-7b8c6f4498-z7tvn" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-z7tvn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7066,SelfLink:/api/v1/namespaces/deployment-7066/pods/nginx-deployment-7b8c6f4498-z7tvn,UID:65b44b18-59e5-4d28-a298-75341497216f,ResourceVersion:7176447,Generation:0,CreationTimestamp:2020-04-24 13:07:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d87d507c-6db7-459b-8c60-df1d3003a40a 0xc002a4cd97 0xc002a4cd98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-55z8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-55z8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-55z8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a4ce10} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a4ce30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:54 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 24 13:07:55.290: INFO: Pod "nginx-deployment-7b8c6f4498-z9lcc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-z9lcc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7066,SelfLink:/api/v1/namespaces/deployment-7066/pods/nginx-deployment-7b8c6f4498-z9lcc,UID:fbf6ac58-8cfb-40b2-b52d-500a9d016c73,ResourceVersion:7176456,Generation:0,CreationTimestamp:2020-04-24 13:07:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d87d507c-6db7-459b-8c60-df1d3003a40a 0xc002a4ceb7 0xc002a4ceb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-55z8v 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-55z8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-55z8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a4cf30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a4cf50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:07:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:07:55.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "deployment-7066" for this suite. Apr 24 13:08:13.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:08:13.697: INFO: namespace deployment-7066 deletion completed in 18.344206249s • [SLOW TEST:31.030 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:08:13.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Apr 24 13:08:23.855: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 24 13:08:23.950: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 24 13:08:25.951: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 24 13:08:25.954: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 24 13:08:27.951: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 24 13:08:27.955: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 24 13:08:29.951: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 24 13:08:29.955: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 24 13:08:31.951: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 24 13:08:31.955: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 24 13:08:33.951: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 24 13:08:33.955: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 24 13:08:35.951: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 24 13:08:35.954: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 24 13:08:37.951: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 24 13:08:37.955: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 24 13:08:39.951: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 24 13:08:39.956: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 24 13:08:41.951: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 24 13:08:41.953: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 24 13:08:43.951: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 24 13:08:43.955: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 24 13:08:45.951: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 24 13:08:45.954: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 24 13:08:47.951: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 24 13:08:47.955: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 24 13:08:49.951: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 24 13:08:49.955: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 24 13:08:51.951: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 24 13:08:51.955: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:08:51.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9013" for this suite.
Apr 24 13:09:13.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:09:14.062: INFO: namespace container-lifecycle-hook-9013 deletion completed in 22.096813521s
• [SLOW TEST:60.365 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:09:14.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Apr 24 13:09:14.123: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 24 13:09:14.129: INFO: Waiting for terminating namespaces to be deleted... Apr 24 13:09:14.131: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Apr 24 13:09:14.135: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 24 13:09:14.135: INFO: Container kube-proxy ready: true, restart count 0 Apr 24 13:09:14.135: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 24 13:09:14.135: INFO: Container kindnet-cni ready: true, restart count 0 Apr 24 13:09:14.135: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Apr 24 13:09:14.142: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Apr 24 13:09:14.142: INFO: Container coredns ready: true, restart count 0 Apr 24 13:09:14.142: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Apr 24 13:09:14.142: INFO: Container coredns ready: true, restart count 0 Apr 24 13:09:14.142: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Apr 24 13:09:14.142: INFO: Container kube-proxy ready: true, restart count 0 Apr 24 13:09:14.142: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC 
(1 container statuses recorded) Apr 24 13:09:14.142: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1608c36fee8e98fe], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:09:15.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6756" for this suite. Apr 24 13:09:21.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:09:21.272: INFO: namespace sched-pred-6756 deletion completed in 6.105549648s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.210 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client 
Apr 24 13:09:21.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Apr 24 13:09:21.339: INFO: Waiting up to 5m0s for pod "downward-api-832c2f66-a574-4f72-b8ca-68b89b3e1ad2" in namespace "downward-api-5342" to be "success or failure" Apr 24 13:09:21.379: INFO: Pod "downward-api-832c2f66-a574-4f72-b8ca-68b89b3e1ad2": Phase="Pending", Reason="", readiness=false. Elapsed: 39.31463ms Apr 24 13:09:23.383: INFO: Pod "downward-api-832c2f66-a574-4f72-b8ca-68b89b3e1ad2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04348856s Apr 24 13:09:25.387: INFO: Pod "downward-api-832c2f66-a574-4f72-b8ca-68b89b3e1ad2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04772485s STEP: Saw pod success Apr 24 13:09:25.387: INFO: Pod "downward-api-832c2f66-a574-4f72-b8ca-68b89b3e1ad2" satisfied condition "success or failure" Apr 24 13:09:25.390: INFO: Trying to get logs from node iruya-worker pod downward-api-832c2f66-a574-4f72-b8ca-68b89b3e1ad2 container dapi-container: STEP: delete the pod Apr 24 13:09:25.410: INFO: Waiting for pod downward-api-832c2f66-a574-4f72-b8ca-68b89b3e1ad2 to disappear Apr 24 13:09:25.420: INFO: Pod downward-api-832c2f66-a574-4f72-b8ca-68b89b3e1ad2 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:09:25.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5342" for this suite. 
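The 'Waiting up to 5m0s for pod ... to be "success or failure"' entries above follow a fixed pattern: read the pod phase, report the elapsed time, sleep roughly two seconds, repeat until a terminal phase or the timeout. A minimal Python sketch of that loop (an illustration, not the framework's actual Go code; `get_phase` is a hypothetical caller-supplied accessor):

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until a terminal pod phase or the timeout expires.

    Mirrors the 'Waiting up to 5m0s for pod ... to be "success or failure"'
    loop in the log above. get_phase is any zero-argument callable that
    returns the current phase string (hypothetical here).
    """
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Phase="{phase}", elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod not terminal after {timeout}s")
        time.sleep(interval)
```

Against a real cluster `get_phase` would wrap a client read of `pod.status.phase`; keeping it injectable also makes the loop trivial to exercise without a cluster.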
Apr 24 13:09:31.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:09:31.600: INFO: namespace downward-api-5342 deletion completed in 6.176419231s • [SLOW TEST:10.328 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:09:31.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 24 13:09:43.739: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 24 13:09:43.744: INFO: Pod pod-with-poststart-http-hook still exists Apr 24 13:09:45.744: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 24 13:09:45.749: INFO: Pod pod-with-poststart-http-hook still exists Apr 24 13:09:47.744: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 24 13:09:47.749: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:09:47.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6431" for this suite. 
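The pod exercised above carries a postStart HTTP hook that the kubelet fires against the handler container created in BeforeEach. A rough Python approximation of such a manifest (field names follow the Kubernetes v1 Pod schema; the image, handler IP, path, and port are assumptions for illustration, not values taken from the test):

```python
# Hypothetical approximation of a pod with a postStart HTTP lifecycle hook.
# Schema keys (lifecycle, postStart, httpGet, host, path, port) are the real
# Kubernetes v1 Pod fields; the concrete values below are assumed.
pod_with_poststart_http_hook = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-with-poststart-http-hook"},
    "spec": {
        "containers": [{
            "name": "pod-with-poststart-http-hook",
            "image": "docker.io/library/nginx:1.14-alpine",  # assumed image
            "lifecycle": {
                "postStart": {
                    "httpGet": {
                        "host": "10.244.2.246",  # assumed handler pod IP
                        "path": "/hook",         # assumed handler path
                        "port": 8080,
                    }
                }
            },
        }]
    },
}
```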
Apr 24 13:10:09.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:10:09.842: INFO: namespace container-lifecycle-hook-6431 deletion completed in 22.088267505s • [SLOW TEST:38.242 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:10:09.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-3482 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 24 13:10:09.899: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 24 13:10:34.090: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.244.2.246:8080/dial?request=hostName&protocol=udp&host=10.244.2.245&port=8081&tries=1'] Namespace:pod-network-test-3482 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 24 13:10:34.090: INFO: >>> kubeConfig: /root/.kube/config I0424 13:10:34.123176 6 log.go:172] (0xc001e68210) (0xc002429f40) Create stream I0424 13:10:34.123206 6 log.go:172] (0xc001e68210) (0xc002429f40) Stream added, broadcasting: 1 I0424 13:10:34.124946 6 log.go:172] (0xc001e68210) Reply frame received for 1 I0424 13:10:34.125001 6 log.go:172] (0xc001e68210) (0xc002f332c0) Create stream I0424 13:10:34.125013 6 log.go:172] (0xc001e68210) (0xc002f332c0) Stream added, broadcasting: 3 I0424 13:10:34.126299 6 log.go:172] (0xc001e68210) Reply frame received for 3 I0424 13:10:34.126328 6 log.go:172] (0xc001e68210) (0xc000a7e000) Create stream I0424 13:10:34.126334 6 log.go:172] (0xc001e68210) (0xc000a7e000) Stream added, broadcasting: 5 I0424 13:10:34.127230 6 log.go:172] (0xc001e68210) Reply frame received for 5 I0424 13:10:34.214410 6 log.go:172] (0xc001e68210) Data frame received for 3 I0424 13:10:34.214446 6 log.go:172] (0xc002f332c0) (3) Data frame handling I0424 13:10:34.214463 6 log.go:172] (0xc002f332c0) (3) Data frame sent I0424 13:10:34.214827 6 log.go:172] (0xc001e68210) Data frame received for 3 I0424 13:10:34.214854 6 log.go:172] (0xc002f332c0) (3) Data frame handling I0424 13:10:34.214895 6 log.go:172] (0xc001e68210) Data frame received for 5 I0424 13:10:34.214928 6 log.go:172] (0xc000a7e000) (5) Data frame handling I0424 13:10:34.216652 6 log.go:172] (0xc001e68210) Data frame received for 1 I0424 13:10:34.216689 6 log.go:172] (0xc002429f40) (1) Data frame handling I0424 13:10:34.216730 6 log.go:172] (0xc002429f40) (1) Data frame sent I0424 13:10:34.216762 6 log.go:172] (0xc001e68210) (0xc002429f40) Stream removed, broadcasting: 1 I0424 13:10:34.216819 6 log.go:172] (0xc001e68210) Go away received 
I0424 13:10:34.216921 6 log.go:172] (0xc001e68210) (0xc002429f40) Stream removed, broadcasting: 1 I0424 13:10:34.216942 6 log.go:172] (0xc001e68210) (0xc002f332c0) Stream removed, broadcasting: 3 I0424 13:10:34.216950 6 log.go:172] (0xc001e68210) (0xc000a7e000) Stream removed, broadcasting: 5 Apr 24 13:10:34.216: INFO: Waiting for endpoints: map[] Apr 24 13:10:34.220: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.246:8080/dial?request=hostName&protocol=udp&host=10.244.1.47&port=8081&tries=1'] Namespace:pod-network-test-3482 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 24 13:10:34.220: INFO: >>> kubeConfig: /root/.kube/config I0424 13:10:34.255785 6 log.go:172] (0xc00118a630) (0xc00117bc20) Create stream I0424 13:10:34.255807 6 log.go:172] (0xc00118a630) (0xc00117bc20) Stream added, broadcasting: 1 I0424 13:10:34.257709 6 log.go:172] (0xc00118a630) Reply frame received for 1 I0424 13:10:34.257768 6 log.go:172] (0xc00118a630) (0xc000a7e0a0) Create stream I0424 13:10:34.257785 6 log.go:172] (0xc00118a630) (0xc000a7e0a0) Stream added, broadcasting: 3 I0424 13:10:34.258952 6 log.go:172] (0xc00118a630) Reply frame received for 3 I0424 13:10:34.258984 6 log.go:172] (0xc00118a630) (0xc000a7e1e0) Create stream I0424 13:10:34.258995 6 log.go:172] (0xc00118a630) (0xc000a7e1e0) Stream added, broadcasting: 5 I0424 13:10:34.260061 6 log.go:172] (0xc00118a630) Reply frame received for 5 I0424 13:10:34.339098 6 log.go:172] (0xc00118a630) Data frame received for 3 I0424 13:10:34.339148 6 log.go:172] (0xc000a7e0a0) (3) Data frame handling I0424 13:10:34.339170 6 log.go:172] (0xc000a7e0a0) (3) Data frame sent I0424 13:10:34.339206 6 log.go:172] (0xc00118a630) Data frame received for 5 I0424 13:10:34.339226 6 log.go:172] (0xc000a7e1e0) (5) Data frame handling I0424 13:10:34.339756 6 log.go:172] (0xc00118a630) Data frame received for 3 I0424 13:10:34.339808 6 log.go:172] 
(0xc000a7e0a0) (3) Data frame handling I0424 13:10:34.340620 6 log.go:172] (0xc00118a630) Data frame received for 1 I0424 13:10:34.340648 6 log.go:172] (0xc00117bc20) (1) Data frame handling I0424 13:10:34.340668 6 log.go:172] (0xc00117bc20) (1) Data frame sent I0424 13:10:34.340681 6 log.go:172] (0xc00118a630) (0xc00117bc20) Stream removed, broadcasting: 1 I0424 13:10:34.340710 6 log.go:172] (0xc00118a630) Go away received I0424 13:10:34.340773 6 log.go:172] (0xc00118a630) (0xc00117bc20) Stream removed, broadcasting: 1 I0424 13:10:34.340791 6 log.go:172] (0xc00118a630) (0xc000a7e0a0) Stream removed, broadcasting: 3 I0424 13:10:34.340804 6 log.go:172] (0xc00118a630) (0xc000a7e1e0) Stream removed, broadcasting: 5 Apr 24 13:10:34.340: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:10:34.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3482" for this suite. 
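The ExecWithOptions entries above curl a `/dial` probe endpoint on the host-test container to check pod-to-pod UDP reachability. The probe URL can be reconstructed mechanically; a small helper (parameter names are copied verbatim from the logged command, the helper itself is illustrative):

```python
from urllib.parse import urlencode

def dial_url(probe_ip, target_ip, protocol="udp", port=8081, tries=1):
    """Build the /dial probe URL the test curls from the host-exec pod.

    Query parameter names (request, protocol, host, port, tries) are taken
    verbatim from the command logged above.
    """
    query = urlencode({
        "request": "hostName",
        "protocol": protocol,
        "host": target_ip,
        "port": port,
        "tries": tries,
    })
    return f"http://{probe_ip}:8080/dial?{query}"
```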
Apr 24 13:10:58.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:10:58.431: INFO: namespace pod-network-test-3482 deletion completed in 24.086389818s • [SLOW TEST:48.589 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:10:58.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 24 13:10:58.518: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Apr 24 13:10:58.523: INFO: Number of nodes with available pods: 0 Apr 24 13:10:58.523: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Apr 24 13:10:58.595: INFO: Number of nodes with available pods: 0
Apr 24 13:10:58.595: INFO: Node iruya-worker is running more than one daemon pod
Apr 24 13:10:59.644: INFO: Number of nodes with available pods: 0
Apr 24 13:10:59.644: INFO: Node iruya-worker is running more than one daemon pod
Apr 24 13:11:00.599: INFO: Number of nodes with available pods: 0
Apr 24 13:11:00.599: INFO: Node iruya-worker is running more than one daemon pod
Apr 24 13:11:01.600: INFO: Number of nodes with available pods: 1
Apr 24 13:11:01.600: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Apr 24 13:11:01.629: INFO: Number of nodes with available pods: 1
Apr 24 13:11:01.629: INFO: Number of running nodes: 0, number of available pods: 1
Apr 24 13:11:02.634: INFO: Number of nodes with available pods: 0
Apr 24 13:11:02.634: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Apr 24 13:11:02.650: INFO: Number of nodes with available pods: 0
Apr 24 13:11:02.650: INFO: Node iruya-worker is running more than one daemon pod
Apr 24 13:11:03.655: INFO: Number of nodes with available pods: 0
Apr 24 13:11:03.655: INFO: Node iruya-worker is running more than one daemon pod
Apr 24 13:11:04.655: INFO: Number of nodes with available pods: 0
Apr 24 13:11:04.655: INFO: Node iruya-worker is running more than one daemon pod
Apr 24 13:11:05.655: INFO: Number of nodes with available pods: 0
Apr 24 13:11:05.655: INFO: Node iruya-worker is running more than one daemon pod
Apr 24 13:11:06.655: INFO: Number of nodes with available pods: 0
Apr 24 13:11:06.655: INFO: Node iruya-worker is running more than one daemon pod
Apr 24 13:11:07.655: INFO: Number of nodes with available pods: 0
Apr 24 13:11:07.655: INFO: Node iruya-worker is running more than one daemon pod
Apr 24 13:11:08.655: INFO: Number of nodes with available pods: 0
Apr 24 13:11:08.655: INFO: Node iruya-worker is running more than one daemon pod
Apr 24 13:11:09.655: INFO: Number of nodes with available pods: 0
Apr 24 13:11:09.655: INFO: Node iruya-worker is running more than one daemon pod
Apr 24 13:11:10.655: INFO: Number of nodes with available pods: 0
Apr 24 13:11:10.655: INFO: Node iruya-worker is running more than one daemon pod
Apr 24 13:11:11.655: INFO: Number of nodes with available pods: 0
Apr 24 13:11:11.655: INFO: Node iruya-worker is running more than one daemon pod
Apr 24 13:11:12.655: INFO: Number of nodes with available pods: 0
Apr 24 13:11:12.655: INFO: Node iruya-worker is running more than one daemon pod
Apr 24 13:11:13.655: INFO: Number of nodes with available pods: 0
Apr 24 13:11:13.655: INFO: Node iruya-worker is running more than one daemon pod
Apr 24 13:11:14.655: INFO: Number of nodes with available pods: 0
Apr 24 13:11:14.655: INFO: Node iruya-worker is running more than one daemon pod
Apr 24 13:11:15.655: INFO: Number of nodes with available pods: 1
Apr 24 13:11:15.655: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3336, will wait for the garbage collector to delete the pods
Apr 24 13:11:15.721: INFO: Deleting DaemonSet.extensions daemon-set took: 6.183305ms
Apr 24 13:11:16.021: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.286396ms
Apr 24 13:11:22.225: INFO: Number of nodes with available pods: 0
Apr 24 13:11:22.225: INFO: Number of running nodes: 0, number of available pods: 0
Apr 24 13:11:22.228: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3336/daemonsets","resourceVersion":"7177364"},"items":null}
Apr 24 13:11:22.230: INFO: pods:
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3336/pods","resourceVersion":"7177364"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:11:22.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3336" for this suite. Apr 24 13:11:28.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:11:28.401: INFO: namespace daemonsets-3336 deletion completed in 6.094589555s • [SLOW TEST:29.970 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:11:28.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0424 13:11:38.480599 6 metrics_grabber.go:79] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 24 13:11:38.480: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:11:38.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4561" for this suite.
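The test above deletes the RC without orphaning and waits for its pods to be garbage collected. The ownership rule being verified is simple: an object whose ownerReference points at a deleted owner's UID is collected. A simplified Python sketch of that rule (it ignores finalizers and foreground vs background propagation):

```python
def collect_garbage(objects, deleted_owner_uids):
    """Return the objects that survive a non-orphaning cascade delete.

    Each object is a dict that may carry an "ownerReferences" list of
    {"uid": ...} entries, loosely modeled on the Kubernetes field of the
    same name. Anything owned by a deleted UID is dropped.
    """
    survivors = []
    for obj in objects:
        owners = {ref["uid"] for ref in obj.get("ownerReferences", [])}
        if owners & deleted_owner_uids:
            continue  # owned by a deleted object -> garbage collected
        survivors.append(obj)
    return survivors
```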
Apr 24 13:11:44.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:11:44.566: INFO: namespace gc-4561 deletion completed in 6.082972404s • [SLOW TEST:16.164 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:11:44.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-2cd37a14-e014-40a1-b3f3-80a5174b5a77 STEP: Creating a pod to test consume configMaps Apr 24 13:11:44.635: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-80d40ea6-aea5-41c8-a46b-b75c84420a72" in namespace "projected-7941" to be "success or failure" Apr 24 13:11:44.680: INFO: Pod "pod-projected-configmaps-80d40ea6-aea5-41c8-a46b-b75c84420a72": Phase="Pending", Reason="", readiness=false. 
Elapsed: 44.664941ms Apr 24 13:11:46.683: INFO: Pod "pod-projected-configmaps-80d40ea6-aea5-41c8-a46b-b75c84420a72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048018764s Apr 24 13:11:48.687: INFO: Pod "pod-projected-configmaps-80d40ea6-aea5-41c8-a46b-b75c84420a72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0518691s STEP: Saw pod success Apr 24 13:11:48.687: INFO: Pod "pod-projected-configmaps-80d40ea6-aea5-41c8-a46b-b75c84420a72" satisfied condition "success or failure" Apr 24 13:11:48.690: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-80d40ea6-aea5-41c8-a46b-b75c84420a72 container projected-configmap-volume-test: STEP: delete the pod Apr 24 13:11:48.734: INFO: Waiting for pod pod-projected-configmaps-80d40ea6-aea5-41c8-a46b-b75c84420a72 to disappear Apr 24 13:11:48.763: INFO: Pod pod-projected-configmaps-80d40ea6-aea5-41c8-a46b-b75c84420a72 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:11:48.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7941" for this suite. 
Apr 24 13:11:54.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:11:54.862: INFO: namespace projected-7941 deletion completed in 6.095295355s • [SLOW TEST:10.296 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:11:54.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container Apr 24 13:11:58.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-dd4df1e9-cf5d-447d-8982-e68373f4c109 -c busybox-main-container --namespace=emptydir-9922 -- cat /usr/share/volumeshare/shareddata.txt' Apr 24 13:12:02.360: INFO: stderr: "I0424 13:12:02.274150 317 log.go:172] (0xc000b66420) (0xc000b90960) Create stream\nI0424 13:12:02.274190 317 log.go:172] (0xc000b66420) (0xc000b90960) Stream added, broadcasting: 1\nI0424 13:12:02.276231 317 log.go:172]
(0xc000b66420) Reply frame received for 1\nI0424 13:12:02.276271 317 log.go:172] (0xc000b66420) (0xc000b740a0) Create stream\nI0424 13:12:02.276281 317 log.go:172] (0xc000b66420) (0xc000b740a0) Stream added, broadcasting: 3\nI0424 13:12:02.277067 317 log.go:172] (0xc000b66420) Reply frame received for 3\nI0424 13:12:02.277103 317 log.go:172] (0xc000b66420) (0xc000b4e000) Create stream\nI0424 13:12:02.277206 317 log.go:172] (0xc000b66420) (0xc000b4e000) Stream added, broadcasting: 5\nI0424 13:12:02.277903 317 log.go:172] (0xc000b66420) Reply frame received for 5\nI0424 13:12:02.354572 317 log.go:172] (0xc000b66420) Data frame received for 3\nI0424 13:12:02.354593 317 log.go:172] (0xc000b740a0) (3) Data frame handling\nI0424 13:12:02.354606 317 log.go:172] (0xc000b740a0) (3) Data frame sent\nI0424 13:12:02.354614 317 log.go:172] (0xc000b66420) Data frame received for 3\nI0424 13:12:02.354619 317 log.go:172] (0xc000b740a0) (3) Data frame handling\nI0424 13:12:02.354646 317 log.go:172] (0xc000b66420) Data frame received for 5\nI0424 13:12:02.354659 317 log.go:172] (0xc000b4e000) (5) Data frame handling\nI0424 13:12:02.356093 317 log.go:172] (0xc000b66420) Data frame received for 1\nI0424 13:12:02.356104 317 log.go:172] (0xc000b90960) (1) Data frame handling\nI0424 13:12:02.356112 317 log.go:172] (0xc000b90960) (1) Data frame sent\nI0424 13:12:02.356121 317 log.go:172] (0xc000b66420) (0xc000b90960) Stream removed, broadcasting: 1\nI0424 13:12:02.356167 317 log.go:172] (0xc000b66420) Go away received\nI0424 13:12:02.356367 317 log.go:172] (0xc000b66420) (0xc000b90960) Stream removed, broadcasting: 1\nI0424 13:12:02.356379 317 log.go:172] (0xc000b66420) (0xc000b740a0) Stream removed, broadcasting: 3\nI0424 13:12:02.356386 317 log.go:172] (0xc000b66420) (0xc000b4e000) Stream removed, broadcasting: 5\n" Apr 24 13:12:02.361: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:12:02.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9922" for this suite. Apr 24 13:12:08.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:12:08.459: INFO: namespace emptydir-9922 deletion completed in 6.095090596s • [SLOW TEST:13.597 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:12:08.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-8518 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8518 to expose endpoints map[] Apr 24 13:12:08.556: INFO: Get endpoints failed (10.952663ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Apr 24 13:12:09.560: INFO: successfully validated that 
service endpoint-test2 in namespace services-8518 exposes endpoints map[] (1.014912422s elapsed) STEP: Creating pod pod1 in namespace services-8518 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8518 to expose endpoints map[pod1:[80]] Apr 24 13:12:12.685: INFO: successfully validated that service endpoint-test2 in namespace services-8518 exposes endpoints map[pod1:[80]] (3.118279626s elapsed) STEP: Creating pod pod2 in namespace services-8518 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8518 to expose endpoints map[pod1:[80] pod2:[80]] Apr 24 13:12:15.770: INFO: successfully validated that service endpoint-test2 in namespace services-8518 exposes endpoints map[pod1:[80] pod2:[80]] (3.081518746s elapsed) STEP: Deleting pod pod1 in namespace services-8518 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8518 to expose endpoints map[pod2:[80]] Apr 24 13:12:17.157: INFO: successfully validated that service endpoint-test2 in namespace services-8518 exposes endpoints map[pod2:[80]] (1.382124875s elapsed) STEP: Deleting pod pod2 in namespace services-8518 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8518 to expose endpoints map[] Apr 24 13:12:17.293: INFO: successfully validated that service endpoint-test2 in namespace services-8518 exposes endpoints map[] (12.267862ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:12:17.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8518" for this suite. 
Apr 24 13:12:23.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:12:23.455: INFO: namespace services-8518 deletion completed in 6.101580445s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:14.995 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:12:23.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-2pj7l in namespace proxy-3345 I0424 13:12:23.585855 6 runners.go:180] Created replication controller with name: proxy-service-2pj7l, namespace: proxy-3345, replica count: 1 I0424 13:12:24.636225 6 runners.go:180] proxy-service-2pj7l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0424 13:12:25.636426 6 runners.go:180] proxy-service-2pj7l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0424 
13:12:26.636658 6 runners.go:180] proxy-service-2pj7l Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0424 13:12:27.636872 6 runners.go:180] proxy-service-2pj7l Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0424 13:12:28.637049 6 runners.go:180] proxy-service-2pj7l Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0424 13:12:29.637290 6 runners.go:180] proxy-service-2pj7l Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0424 13:12:30.637540 6 runners.go:180] proxy-service-2pj7l Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0424 13:12:31.637742 6 runners.go:180] proxy-service-2pj7l Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 24 13:12:31.641: INFO: setup took 8.10213743s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Apr 24 13:12:31.647: INFO: (0) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:162/proxy/: bar (200; 5.947811ms) Apr 24 13:12:31.648: INFO: (0) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:1080/proxy/: test<... (200; 7.289759ms) Apr 24 13:12:31.648: INFO: (0) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:1080/proxy/: ... 
(200; 7.16804ms) Apr 24 13:12:31.648: INFO: (0) /api/v1/namespaces/proxy-3345/services/proxy-service-2pj7l:portname1/proxy/: foo (200; 7.175326ms) Apr 24 13:12:31.648: INFO: (0) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:160/proxy/: foo (200; 7.205095ms) Apr 24 13:12:31.648: INFO: (0) /api/v1/namespaces/proxy-3345/services/http:proxy-service-2pj7l:portname1/proxy/: foo (200; 7.419834ms) Apr 24 13:12:31.648: INFO: (0) /api/v1/namespaces/proxy-3345/services/proxy-service-2pj7l:portname2/proxy/: bar (200; 7.265519ms) Apr 24 13:12:31.648: INFO: (0) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8/proxy/: test (200; 7.26976ms) Apr 24 13:12:31.649: INFO: (0) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:160/proxy/: foo (200; 8.332031ms) Apr 24 13:12:31.649: INFO: (0) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:162/proxy/: bar (200; 8.411333ms) Apr 24 13:12:31.649: INFO: (0) /api/v1/namespaces/proxy-3345/services/http:proxy-service-2pj7l:portname2/proxy/: bar (200; 8.280637ms) Apr 24 13:12:31.654: INFO: (0) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:462/proxy/: tls qux (200; 12.652405ms) Apr 24 13:12:31.656: INFO: (0) /api/v1/namespaces/proxy-3345/services/https:proxy-service-2pj7l:tlsportname1/proxy/: tls baz (200; 14.866069ms) Apr 24 13:12:31.656: INFO: (0) /api/v1/namespaces/proxy-3345/services/https:proxy-service-2pj7l:tlsportname2/proxy/: tls qux (200; 14.924868ms) Apr 24 13:12:31.659: INFO: (0) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:460/proxy/: tls baz (200; 18.087729ms) Apr 24 13:12:31.659: INFO: (0) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:443/proxy/: test<... 
(200; 2.811308ms) Apr 24 13:12:31.662: INFO: (1) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:162/proxy/: bar (200; 2.789912ms) Apr 24 13:12:31.665: INFO: (1) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:460/proxy/: tls baz (200; 5.28327ms) Apr 24 13:12:31.665: INFO: (1) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:462/proxy/: tls qux (200; 5.364261ms) Apr 24 13:12:31.665: INFO: (1) /api/v1/namespaces/proxy-3345/services/proxy-service-2pj7l:portname1/proxy/: foo (200; 5.410719ms) Apr 24 13:12:31.665: INFO: (1) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8/proxy/: test (200; 5.361226ms) Apr 24 13:12:31.665: INFO: (1) /api/v1/namespaces/proxy-3345/services/proxy-service-2pj7l:portname2/proxy/: bar (200; 5.444658ms) Apr 24 13:12:31.665: INFO: (1) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:1080/proxy/: ... (200; 5.41194ms) Apr 24 13:12:31.665: INFO: (1) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:162/proxy/: bar (200; 5.445812ms) Apr 24 13:12:31.665: INFO: (1) /api/v1/namespaces/proxy-3345/services/http:proxy-service-2pj7l:portname1/proxy/: foo (200; 5.545934ms) Apr 24 13:12:31.665: INFO: (1) /api/v1/namespaces/proxy-3345/services/https:proxy-service-2pj7l:tlsportname2/proxy/: tls qux (200; 5.521731ms) Apr 24 13:12:31.665: INFO: (1) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:443/proxy/: test (200; 4.098387ms) Apr 24 13:12:31.669: INFO: (2) /api/v1/namespaces/proxy-3345/services/proxy-service-2pj7l:portname1/proxy/: foo (200; 4.188329ms) Apr 24 13:12:31.669: INFO: (2) /api/v1/namespaces/proxy-3345/services/proxy-service-2pj7l:portname2/proxy/: bar (200; 4.149587ms) Apr 24 13:12:31.670: INFO: (2) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:160/proxy/: foo (200; 4.465979ms) Apr 24 13:12:31.670: INFO: (2) /api/v1/namespaces/proxy-3345/services/https:proxy-service-2pj7l:tlsportname2/proxy/: tls qux (200; 
4.497358ms) Apr 24 13:12:31.670: INFO: (2) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:1080/proxy/: test<... (200; 4.589783ms) Apr 24 13:12:31.670: INFO: (2) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:443/proxy/: ... (200; 4.807575ms) Apr 24 13:12:31.670: INFO: (2) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:462/proxy/: tls qux (200; 4.823093ms) Apr 24 13:12:31.670: INFO: (2) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:460/proxy/: tls baz (200; 4.989274ms) Apr 24 13:12:31.670: INFO: (2) /api/v1/namespaces/proxy-3345/services/http:proxy-service-2pj7l:portname1/proxy/: foo (200; 4.903691ms) Apr 24 13:12:31.670: INFO: (2) /api/v1/namespaces/proxy-3345/services/https:proxy-service-2pj7l:tlsportname1/proxy/: tls baz (200; 4.991315ms) Apr 24 13:12:31.675: INFO: (3) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:1080/proxy/: ... (200; 4.819085ms) Apr 24 13:12:31.675: INFO: (3) /api/v1/namespaces/proxy-3345/services/http:proxy-service-2pj7l:portname1/proxy/: foo (200; 4.858837ms) Apr 24 13:12:31.675: INFO: (3) /api/v1/namespaces/proxy-3345/services/proxy-service-2pj7l:portname2/proxy/: bar (200; 4.999405ms) Apr 24 13:12:31.675: INFO: (3) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:160/proxy/: foo (200; 5.047481ms) Apr 24 13:12:31.676: INFO: (3) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:162/proxy/: bar (200; 5.462192ms) Apr 24 13:12:31.676: INFO: (3) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:160/proxy/: foo (200; 5.606994ms) Apr 24 13:12:31.676: INFO: (3) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:462/proxy/: tls qux (200; 5.620616ms) Apr 24 13:12:31.676: INFO: (3) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:1080/proxy/: test<... 
(200; 5.635795ms) Apr 24 13:12:31.676: INFO: (3) /api/v1/namespaces/proxy-3345/services/https:proxy-service-2pj7l:tlsportname2/proxy/: tls qux (200; 5.737077ms) Apr 24 13:12:31.676: INFO: (3) /api/v1/namespaces/proxy-3345/services/proxy-service-2pj7l:portname1/proxy/: foo (200; 5.722571ms) Apr 24 13:12:31.676: INFO: (3) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8/proxy/: test (200; 5.706688ms) Apr 24 13:12:31.676: INFO: (3) /api/v1/namespaces/proxy-3345/services/http:proxy-service-2pj7l:portname2/proxy/: bar (200; 5.766979ms) Apr 24 13:12:31.676: INFO: (3) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:443/proxy/: test (200; 2.093357ms) Apr 24 13:12:31.681: INFO: (4) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:160/proxy/: foo (200; 3.88385ms) Apr 24 13:12:31.681: INFO: (4) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:162/proxy/: bar (200; 4.934921ms) Apr 24 13:12:31.682: INFO: (4) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:1080/proxy/: ... (200; 4.948507ms) Apr 24 13:12:31.682: INFO: (4) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:460/proxy/: tls baz (200; 5.132611ms) Apr 24 13:12:31.682: INFO: (4) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:1080/proxy/: test<... (200; 5.184866ms) Apr 24 13:12:31.682: INFO: (4) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:462/proxy/: tls qux (200; 5.03401ms) Apr 24 13:12:31.682: INFO: (4) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:160/proxy/: foo (200; 5.219214ms) Apr 24 13:12:31.682: INFO: (4) /api/v1/namespaces/proxy-3345/services/https:proxy-service-2pj7l:tlsportname1/proxy/: tls baz (200; 5.384508ms) Apr 24 13:12:31.682: INFO: (4) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:443/proxy/: test<... 
(200; 3.998936ms) Apr 24 13:12:31.686: INFO: (5) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8/proxy/: test (200; 3.908063ms) Apr 24 13:12:31.687: INFO: (5) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:1080/proxy/: ... (200; 4.091836ms) Apr 24 13:12:31.687: INFO: (5) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:160/proxy/: foo (200; 4.155991ms) Apr 24 13:12:31.687: INFO: (5) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:462/proxy/: tls qux (200; 4.382543ms) Apr 24 13:12:31.687: INFO: (5) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:162/proxy/: bar (200; 4.29871ms) Apr 24 13:12:31.687: INFO: (5) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:160/proxy/: foo (200; 4.360082ms) Apr 24 13:12:31.687: INFO: (5) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:162/proxy/: bar (200; 4.389601ms) Apr 24 13:12:31.688: INFO: (5) /api/v1/namespaces/proxy-3345/services/http:proxy-service-2pj7l:portname1/proxy/: foo (200; 5.561946ms) Apr 24 13:12:31.688: INFO: (5) /api/v1/namespaces/proxy-3345/services/https:proxy-service-2pj7l:tlsportname2/proxy/: tls qux (200; 5.493127ms) Apr 24 13:12:31.688: INFO: (5) /api/v1/namespaces/proxy-3345/services/http:proxy-service-2pj7l:portname2/proxy/: bar (200; 5.553034ms) Apr 24 13:12:31.688: INFO: (5) /api/v1/namespaces/proxy-3345/services/https:proxy-service-2pj7l:tlsportname1/proxy/: tls baz (200; 5.614947ms) Apr 24 13:12:31.689: INFO: (5) /api/v1/namespaces/proxy-3345/services/proxy-service-2pj7l:portname1/proxy/: foo (200; 6.294281ms) Apr 24 13:12:31.689: INFO: (5) /api/v1/namespaces/proxy-3345/services/proxy-service-2pj7l:portname2/proxy/: bar (200; 6.50748ms) Apr 24 13:12:31.693: INFO: (6) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:162/proxy/: bar (200; 3.380056ms) Apr 24 13:12:31.693: INFO: (6) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:160/proxy/: foo (200; 3.597445ms) Apr 
24 13:12:31.693: INFO: (6) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8/proxy/: test (200; 3.721199ms) Apr 24 13:12:31.693: INFO: (6) /api/v1/namespaces/proxy-3345/services/proxy-service-2pj7l:portname1/proxy/: foo (200; 3.846881ms) Apr 24 13:12:31.693: INFO: (6) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:443/proxy/: ... (200; 4.187065ms) Apr 24 13:12:31.693: INFO: (6) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:160/proxy/: foo (200; 4.13865ms) Apr 24 13:12:31.693: INFO: (6) /api/v1/namespaces/proxy-3345/services/https:proxy-service-2pj7l:tlsportname1/proxy/: tls baz (200; 4.092357ms) Apr 24 13:12:31.693: INFO: (6) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:462/proxy/: tls qux (200; 4.186466ms) Apr 24 13:12:31.694: INFO: (6) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:162/proxy/: bar (200; 4.37314ms) Apr 24 13:12:31.694: INFO: (6) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:1080/proxy/: test<... 
(200; 4.455088ms) Apr 24 13:12:31.694: INFO: (6) /api/v1/namespaces/proxy-3345/services/http:proxy-service-2pj7l:portname2/proxy/: bar (200; 4.427872ms) Apr 24 13:12:31.694: INFO: (6) /api/v1/namespaces/proxy-3345/services/http:proxy-service-2pj7l:portname1/proxy/: foo (200; 4.430882ms) Apr 24 13:12:31.694: INFO: (6) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:460/proxy/: tls baz (200; 4.452842ms) Apr 24 13:12:31.694: INFO: (6) /api/v1/namespaces/proxy-3345/services/proxy-service-2pj7l:portname2/proxy/: bar (200; 5.013053ms) Apr 24 13:12:31.697: INFO: (7) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:160/proxy/: foo (200; 2.961278ms) Apr 24 13:12:31.698: INFO: (7) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8/proxy/: test (200; 3.375203ms) Apr 24 13:12:31.698: INFO: (7) /api/v1/namespaces/proxy-3345/services/proxy-service-2pj7l:portname2/proxy/: bar (200; 3.583244ms) Apr 24 13:12:31.698: INFO: (7) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:162/proxy/: bar (200; 3.725301ms) Apr 24 13:12:31.699: INFO: (7) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:460/proxy/: tls baz (200; 4.363877ms) Apr 24 13:12:31.699: INFO: (7) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:1080/proxy/: test<... (200; 4.39247ms) Apr 24 13:12:31.699: INFO: (7) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:1080/proxy/: ... (200; 4.480558ms) Apr 24 13:12:31.699: INFO: (7) /api/v1/namespaces/proxy-3345/services/http:proxy-service-2pj7l:portname2/proxy/: bar (200; 4.661192ms) Apr 24 13:12:31.699: INFO: (7) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:162/proxy/: bar (200; 4.638305ms) Apr 24 13:12:31.699: INFO: (7) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:443/proxy/: ... 
(200; 2.528105ms) Apr 24 13:12:31.702: INFO: (8) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8/proxy/: test (200; 2.607847ms) Apr 24 13:12:31.703: INFO: (8) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:443/proxy/: test<... (200; 5.502068ms) Apr 24 13:12:31.708: INFO: (9) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:462/proxy/: tls qux (200; 2.675344ms) Apr 24 13:12:31.708: INFO: (9) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:443/proxy/: ... (200; 3.718117ms) Apr 24 13:12:31.709: INFO: (9) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:1080/proxy/: test<... (200; 3.740384ms) Apr 24 13:12:31.709: INFO: (9) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8/proxy/: test (200; 3.778205ms) Apr 24 13:12:31.709: INFO: (9) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:460/proxy/: tls baz (200; 3.875193ms) Apr 24 13:12:31.710: INFO: (9) /api/v1/namespaces/proxy-3345/services/proxy-service-2pj7l:portname2/proxy/: bar (200; 4.545878ms) Apr 24 13:12:31.710: INFO: (9) /api/v1/namespaces/proxy-3345/services/https:proxy-service-2pj7l:tlsportname1/proxy/: tls baz (200; 4.629798ms) Apr 24 13:12:31.710: INFO: (9) /api/v1/namespaces/proxy-3345/services/https:proxy-service-2pj7l:tlsportname2/proxy/: tls qux (200; 4.574634ms) Apr 24 13:12:31.710: INFO: (9) /api/v1/namespaces/proxy-3345/services/http:proxy-service-2pj7l:portname2/proxy/: bar (200; 4.594273ms) Apr 24 13:12:31.710: INFO: (9) /api/v1/namespaces/proxy-3345/services/proxy-service-2pj7l:portname1/proxy/: foo (200; 4.619572ms) Apr 24 13:12:31.710: INFO: (9) /api/v1/namespaces/proxy-3345/services/http:proxy-service-2pj7l:portname1/proxy/: foo (200; 4.773396ms) Apr 24 13:12:31.713: INFO: (10) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:160/proxy/: foo (200; 3.359084ms) Apr 24 13:12:31.713: INFO: (10) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:1080/proxy/: ... 
(200; 3.477765ms) Apr 24 13:12:31.713: INFO: (10) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:162/proxy/: bar (200; 3.482295ms) Apr 24 13:12:31.713: INFO: (10) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:160/proxy/: foo (200; 3.516417ms) Apr 24 13:12:31.714: INFO: (10) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:162/proxy/: bar (200; 3.668226ms) Apr 24 13:12:31.714: INFO: (10) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:1080/proxy/: test<... (200; 3.598665ms) Apr 24 13:12:31.714: INFO: (10) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8/proxy/: test (200; 3.607505ms) Apr 24 13:12:31.714: INFO: (10) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:460/proxy/: tls baz (200; 3.670013ms) Apr 24 13:12:31.714: INFO: (10) /api/v1/namespaces/proxy-3345/services/https:proxy-service-2pj7l:tlsportname1/proxy/: tls baz (200; 3.648013ms) Apr 24 13:12:31.714: INFO: (10) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:443/proxy/: ... (200; 3.936275ms) Apr 24 13:12:31.719: INFO: (11) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8/proxy/: test (200; 3.968665ms) Apr 24 13:12:31.719: INFO: (11) /api/v1/namespaces/proxy-3345/services/https:proxy-service-2pj7l:tlsportname1/proxy/: tls baz (200; 3.928019ms) Apr 24 13:12:31.719: INFO: (11) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:160/proxy/: foo (200; 3.914562ms) Apr 24 13:12:31.719: INFO: (11) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:443/proxy/: test<... 
(200; 3.945724ms) Apr 24 13:12:31.719: INFO: (11) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:162/proxy/: bar (200; 4.0227ms) Apr 24 13:12:31.719: INFO: (11) /api/v1/namespaces/proxy-3345/services/proxy-service-2pj7l:portname1/proxy/: foo (200; 4.070844ms) Apr 24 13:12:31.720: INFO: (11) /api/v1/namespaces/proxy-3345/services/https:proxy-service-2pj7l:tlsportname2/proxy/: tls qux (200; 4.726487ms) Apr 24 13:12:31.720: INFO: (11) /api/v1/namespaces/proxy-3345/services/http:proxy-service-2pj7l:portname2/proxy/: bar (200; 4.758217ms) Apr 24 13:12:31.720: INFO: (11) /api/v1/namespaces/proxy-3345/services/http:proxy-service-2pj7l:portname1/proxy/: foo (200; 4.732056ms) Apr 24 13:12:31.720: INFO: (11) /api/v1/namespaces/proxy-3345/services/proxy-service-2pj7l:portname2/proxy/: bar (200; 4.818901ms) Apr 24 13:12:31.723: INFO: (12) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:1080/proxy/: ... (200; 3.079214ms) Apr 24 13:12:31.723: INFO: (12) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:162/proxy/: bar (200; 3.045722ms) Apr 24 13:12:31.723: INFO: (12) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:160/proxy/: foo (200; 3.058079ms) Apr 24 13:12:31.724: INFO: (12) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:160/proxy/: foo (200; 4.219728ms) Apr 24 13:12:31.724: INFO: (12) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:162/proxy/: bar (200; 4.17883ms) Apr 24 13:12:31.724: INFO: (12) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:1080/proxy/: test<... 
(200; 4.261673ms) Apr 24 13:12:31.724: INFO: (12) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:460/proxy/: tls baz (200; 4.207594ms) Apr 24 13:12:31.724: INFO: (12) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8/proxy/: test (200; 4.331277ms) Apr 24 13:12:31.724: INFO: (12) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:462/proxy/: tls qux (200; 4.390455ms) Apr 24 13:12:31.724: INFO: (12) /api/v1/namespaces/proxy-3345/services/http:proxy-service-2pj7l:portname2/proxy/: bar (200; 4.439038ms) Apr 24 13:12:31.724: INFO: (12) /api/v1/namespaces/proxy-3345/services/https:proxy-service-2pj7l:tlsportname2/proxy/: tls qux (200; 4.640083ms) Apr 24 13:12:31.724: INFO: (12) /api/v1/namespaces/proxy-3345/services/https:proxy-service-2pj7l:tlsportname1/proxy/: tls baz (200; 4.563879ms) Apr 24 13:12:31.724: INFO: (12) /api/v1/namespaces/proxy-3345/services/http:proxy-service-2pj7l:portname1/proxy/: foo (200; 4.576155ms) Apr 24 13:12:31.724: INFO: (12) /api/v1/namespaces/proxy-3345/services/proxy-service-2pj7l:portname2/proxy/: bar (200; 4.572116ms) Apr 24 13:12:31.724: INFO: (12) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:443/proxy/: test (200; 2.97984ms) Apr 24 13:12:31.728: INFO: (13) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:443/proxy/: ... (200; 3.418311ms) Apr 24 13:12:31.728: INFO: (13) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:160/proxy/: foo (200; 3.553182ms) Apr 24 13:12:31.728: INFO: (13) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:1080/proxy/: test<... 
(200; 3.549563ms) Apr 24 13:12:31.728: INFO: (13) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:462/proxy/: tls qux (200; 3.715735ms) Apr 24 13:12:31.728: INFO: (13) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:460/proxy/: tls baz (200; 3.714239ms) Apr 24 13:12:31.728: INFO: (13) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:160/proxy/: foo (200; 3.685161ms) Apr 24 13:12:31.730: INFO: (13) /api/v1/namespaces/proxy-3345/services/http:proxy-service-2pj7l:portname2/proxy/: bar (200; 5.151825ms) Apr 24 13:12:31.730: INFO: (13) /api/v1/namespaces/proxy-3345/services/proxy-service-2pj7l:portname2/proxy/: bar (200; 5.245694ms) Apr 24 13:12:31.730: INFO: (13) /api/v1/namespaces/proxy-3345/services/proxy-service-2pj7l:portname1/proxy/: foo (200; 5.30815ms) Apr 24 13:12:31.730: INFO: (13) /api/v1/namespaces/proxy-3345/services/http:proxy-service-2pj7l:portname1/proxy/: foo (200; 5.266176ms) Apr 24 13:12:31.730: INFO: (13) /api/v1/namespaces/proxy-3345/services/https:proxy-service-2pj7l:tlsportname2/proxy/: tls qux (200; 5.501157ms) Apr 24 13:12:31.730: INFO: (13) /api/v1/namespaces/proxy-3345/services/https:proxy-service-2pj7l:tlsportname1/proxy/: tls baz (200; 5.427351ms) Apr 24 13:12:31.733: INFO: (14) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:443/proxy/: test (200; 3.221599ms) Apr 24 13:12:31.733: INFO: (14) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:1080/proxy/: ... (200; 3.326104ms) Apr 24 13:12:31.733: INFO: (14) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:460/proxy/: tls baz (200; 3.269628ms) Apr 24 13:12:31.733: INFO: (14) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:462/proxy/: tls qux (200; 3.555183ms) Apr 24 13:12:31.733: INFO: (14) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:1080/proxy/: test<... 
(200; 3.528714ms) Apr 24 13:12:31.734: INFO: (14) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:160/proxy/: foo (200; 3.769833ms) Apr 24 13:12:31.734: INFO: (14) /api/v1/namespaces/proxy-3345/services/http:proxy-service-2pj7l:portname2/proxy/: bar (200; 4.250832ms) Apr 24 13:12:31.734: INFO: (14) /api/v1/namespaces/proxy-3345/services/proxy-service-2pj7l:portname2/proxy/: bar (200; 4.342491ms) Apr 24 13:12:31.734: INFO: (14) /api/v1/namespaces/proxy-3345/services/proxy-service-2pj7l:portname1/proxy/: foo (200; 4.410712ms) Apr 24 13:12:31.734: INFO: (14) /api/v1/namespaces/proxy-3345/services/https:proxy-service-2pj7l:tlsportname2/proxy/: tls qux (200; 4.422169ms) Apr 24 13:12:31.734: INFO: (14) /api/v1/namespaces/proxy-3345/services/https:proxy-service-2pj7l:tlsportname1/proxy/: tls baz (200; 4.474747ms) Apr 24 13:12:31.735: INFO: (14) /api/v1/namespaces/proxy-3345/services/http:proxy-service-2pj7l:portname1/proxy/: foo (200; 4.659896ms) Apr 24 13:12:31.742: INFO: (15) /api/v1/namespaces/proxy-3345/services/proxy-service-2pj7l:portname2/proxy/: bar (200; 7.771354ms) Apr 24 13:12:31.742: INFO: (15) /api/v1/namespaces/proxy-3345/services/http:proxy-service-2pj7l:portname2/proxy/: bar (200; 7.822505ms) Apr 24 13:12:31.743: INFO: (15) /api/v1/namespaces/proxy-3345/services/https:proxy-service-2pj7l:tlsportname2/proxy/: tls qux (200; 7.899092ms) Apr 24 13:12:31.743: INFO: (15) /api/v1/namespaces/proxy-3345/services/proxy-service-2pj7l:portname1/proxy/: foo (200; 7.908488ms) Apr 24 13:12:31.743: INFO: (15) /api/v1/namespaces/proxy-3345/services/https:proxy-service-2pj7l:tlsportname1/proxy/: tls baz (200; 7.96591ms) Apr 24 13:12:31.743: INFO: (15) /api/v1/namespaces/proxy-3345/services/http:proxy-service-2pj7l:portname1/proxy/: foo (200; 7.973655ms) Apr 24 13:12:31.744: INFO: (15) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:162/proxy/: bar (200; 8.859844ms) Apr 24 13:12:31.744: INFO: (15) 
/api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:160/proxy/: foo (200; 9.103756ms) Apr 24 13:12:31.744: INFO: (15) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:460/proxy/: tls baz (200; 9.144923ms) Apr 24 13:12:31.744: INFO: (15) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:443/proxy/: test<... (200; 9.738373ms) Apr 24 13:12:31.745: INFO: (15) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:160/proxy/: foo (200; 9.832592ms) Apr 24 13:12:31.745: INFO: (15) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:1080/proxy/: ... (200; 9.819551ms) Apr 24 13:12:31.745: INFO: (15) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:462/proxy/: tls qux (200; 9.80053ms) Apr 24 13:12:31.745: INFO: (15) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:162/proxy/: bar (200; 9.908605ms) Apr 24 13:12:31.745: INFO: (15) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8/proxy/: test (200; 10.076506ms) Apr 24 13:12:31.750: INFO: (16) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:160/proxy/: foo (200; 5.054223ms) Apr 24 13:12:31.750: INFO: (16) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:162/proxy/: bar (200; 5.160046ms) Apr 24 13:12:31.750: INFO: (16) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:443/proxy/: ... (200; 5.184702ms) Apr 24 13:12:31.750: INFO: (16) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:162/proxy/: bar (200; 5.103348ms) Apr 24 13:12:31.750: INFO: (16) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8/proxy/: test (200; 5.066994ms) Apr 24 13:12:31.750: INFO: (16) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:1080/proxy/: test<... 
(200; 5.237446ms) Apr 24 13:12:31.750: INFO: (16) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:160/proxy/: foo (200; 5.086686ms) Apr 24 13:12:31.750: INFO: (16) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:462/proxy/: tls qux (200; 5.095179ms) Apr 24 13:12:31.752: INFO: (16) /api/v1/namespaces/proxy-3345/services/proxy-service-2pj7l:portname2/proxy/: bar (200; 7.117472ms) Apr 24 13:12:31.752: INFO: (16) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:460/proxy/: tls baz (200; 7.091618ms) Apr 24 13:12:31.752: INFO: (16) /api/v1/namespaces/proxy-3345/services/https:proxy-service-2pj7l:tlsportname2/proxy/: tls qux (200; 7.132562ms) Apr 24 13:12:31.752: INFO: (16) /api/v1/namespaces/proxy-3345/services/proxy-service-2pj7l:portname1/proxy/: foo (200; 7.206224ms) Apr 24 13:12:31.752: INFO: (16) /api/v1/namespaces/proxy-3345/services/http:proxy-service-2pj7l:portname1/proxy/: foo (200; 7.166753ms) Apr 24 13:12:31.752: INFO: (16) /api/v1/namespaces/proxy-3345/services/http:proxy-service-2pj7l:portname2/proxy/: bar (200; 7.264427ms) Apr 24 13:12:31.752: INFO: (16) /api/v1/namespaces/proxy-3345/services/https:proxy-service-2pj7l:tlsportname1/proxy/: tls baz (200; 7.507703ms) Apr 24 13:12:31.755: INFO: (17) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:162/proxy/: bar (200; 2.958195ms) Apr 24 13:12:31.755: INFO: (17) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:460/proxy/: tls baz (200; 3.075074ms) Apr 24 13:12:31.756: INFO: (17) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:160/proxy/: foo (200; 3.22687ms) Apr 24 13:12:31.756: INFO: (17) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:462/proxy/: tls qux (200; 3.59658ms) Apr 24 13:12:31.756: INFO: (17) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:1080/proxy/: ... 
(200; 3.656808ms) Apr 24 13:12:31.756: INFO: (17) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:1080/proxy/: test<... (200; 3.790535ms) Apr 24 13:12:31.756: INFO: (17) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:160/proxy/: foo (200; 3.86489ms) Apr 24 13:12:31.756: INFO: (17) /api/v1/namespaces/proxy-3345/services/http:proxy-service-2pj7l:portname1/proxy/: foo (200; 4.047142ms) Apr 24 13:12:31.757: INFO: (17) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:443/proxy/: test (200; 4.816799ms) Apr 24 13:12:31.757: INFO: (17) /api/v1/namespaces/proxy-3345/services/proxy-service-2pj7l:portname1/proxy/: foo (200; 4.952522ms) Apr 24 13:12:31.757: INFO: (17) /api/v1/namespaces/proxy-3345/services/http:proxy-service-2pj7l:portname2/proxy/: bar (200; 4.849058ms) Apr 24 13:12:31.757: INFO: (17) /api/v1/namespaces/proxy-3345/services/https:proxy-service-2pj7l:tlsportname1/proxy/: tls baz (200; 4.862838ms) Apr 24 13:12:31.758: INFO: (17) /api/v1/namespaces/proxy-3345/services/proxy-service-2pj7l:portname2/proxy/: bar (200; 5.272031ms) Apr 24 13:12:31.761: INFO: (18) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:460/proxy/: tls baz (200; 3.626316ms) Apr 24 13:12:31.761: INFO: (18) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:160/proxy/: foo (200; 3.730969ms) Apr 24 13:12:31.762: INFO: (18) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:443/proxy/: ... 
(200; 3.864826ms) Apr 24 13:12:31.762: INFO: (18) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:162/proxy/: bar (200; 4.10262ms) Apr 24 13:12:31.762: INFO: (18) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:162/proxy/: bar (200; 4.269032ms) Apr 24 13:12:31.762: INFO: (18) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8/proxy/: test (200; 4.196797ms) Apr 24 13:12:31.762: INFO: (18) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:160/proxy/: foo (200; 4.295769ms) Apr 24 13:12:31.762: INFO: (18) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:1080/proxy/: test<... (200; 4.284936ms) Apr 24 13:12:31.763: INFO: (18) /api/v1/namespaces/proxy-3345/services/proxy-service-2pj7l:portname2/proxy/: bar (200; 4.943317ms) Apr 24 13:12:31.763: INFO: (18) /api/v1/namespaces/proxy-3345/services/http:proxy-service-2pj7l:portname2/proxy/: bar (200; 5.183394ms) Apr 24 13:12:31.763: INFO: (18) /api/v1/namespaces/proxy-3345/services/http:proxy-service-2pj7l:portname1/proxy/: foo (200; 5.23216ms) Apr 24 13:12:31.763: INFO: (18) /api/v1/namespaces/proxy-3345/services/https:proxy-service-2pj7l:tlsportname2/proxy/: tls qux (200; 5.220982ms) Apr 24 13:12:31.763: INFO: (18) /api/v1/namespaces/proxy-3345/services/https:proxy-service-2pj7l:tlsportname1/proxy/: tls baz (200; 5.221174ms) Apr 24 13:12:31.763: INFO: (18) /api/v1/namespaces/proxy-3345/services/proxy-service-2pj7l:portname1/proxy/: foo (200; 5.183317ms) Apr 24 13:12:31.766: INFO: (19) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:460/proxy/: tls baz (200; 2.399821ms) Apr 24 13:12:31.767: INFO: (19) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:443/proxy/: test<... 
(200; 3.644422ms) Apr 24 13:12:31.767: INFO: (19) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:162/proxy/: bar (200; 3.636731ms) Apr 24 13:12:31.767: INFO: (19) /api/v1/namespaces/proxy-3345/pods/https:proxy-service-2pj7l-9svq8:462/proxy/: tls qux (200; 3.679425ms) Apr 24 13:12:31.767: INFO: (19) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:162/proxy/: bar (200; 3.808682ms) Apr 24 13:12:31.767: INFO: (19) /api/v1/namespaces/proxy-3345/services/http:proxy-service-2pj7l:portname2/proxy/: bar (200; 3.730303ms) Apr 24 13:12:31.767: INFO: (19) /api/v1/namespaces/proxy-3345/services/proxy-service-2pj7l:portname1/proxy/: foo (200; 3.947855ms) Apr 24 13:12:31.768: INFO: (19) /api/v1/namespaces/proxy-3345/services/https:proxy-service-2pj7l:tlsportname2/proxy/: tls qux (200; 4.750668ms) Apr 24 13:12:31.768: INFO: (19) /api/v1/namespaces/proxy-3345/services/http:proxy-service-2pj7l:portname1/proxy/: foo (200; 5.059958ms) Apr 24 13:12:31.768: INFO: (19) /api/v1/namespaces/proxy-3345/services/proxy-service-2pj7l:portname2/proxy/: bar (200; 5.103791ms) Apr 24 13:12:31.768: INFO: (19) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:160/proxy/: foo (200; 5.224458ms) Apr 24 13:12:31.768: INFO: (19) /api/v1/namespaces/proxy-3345/services/https:proxy-service-2pj7l:tlsportname1/proxy/: tls baz (200; 5.285762ms) Apr 24 13:12:31.769: INFO: (19) /api/v1/namespaces/proxy-3345/pods/http:proxy-service-2pj7l-9svq8:1080/proxy/: ... 
(200; 5.389097ms) Apr 24 13:12:31.769: INFO: (19) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8:160/proxy/: foo (200; 5.686729ms) Apr 24 13:12:31.769: INFO: (19) /api/v1/namespaces/proxy-3345/pods/proxy-service-2pj7l-9svq8/proxy/: test (200; 5.738851ms) STEP: deleting ReplicationController proxy-service-2pj7l in namespace proxy-3345, will wait for the garbage collector to delete the pods Apr 24 13:12:31.826: INFO: Deleting ReplicationController proxy-service-2pj7l took: 5.865766ms Apr 24 13:12:32.127: INFO: Terminating ReplicationController proxy-service-2pj7l pods took: 300.264925ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:12:34.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-3345" for this suite. Apr 24 13:12:40.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:12:40.525: INFO: namespace proxy-3345 deletion completed in 6.089205768s • [SLOW TEST:17.070 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:12:40.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace 
api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-fpx6 STEP: Creating a pod to test atomic-volume-subpath Apr 24 13:12:40.641: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-fpx6" in namespace "subpath-8927" to be "success or failure" Apr 24 13:12:40.646: INFO: Pod "pod-subpath-test-downwardapi-fpx6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.841273ms Apr 24 13:12:42.650: INFO: Pod "pod-subpath-test-downwardapi-fpx6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009795392s Apr 24 13:12:44.655: INFO: Pod "pod-subpath-test-downwardapi-fpx6": Phase="Running", Reason="", readiness=true. Elapsed: 4.014382873s Apr 24 13:12:46.660: INFO: Pod "pod-subpath-test-downwardapi-fpx6": Phase="Running", Reason="", readiness=true. Elapsed: 6.01895926s Apr 24 13:12:48.664: INFO: Pod "pod-subpath-test-downwardapi-fpx6": Phase="Running", Reason="", readiness=true. Elapsed: 8.023544575s Apr 24 13:12:50.668: INFO: Pod "pod-subpath-test-downwardapi-fpx6": Phase="Running", Reason="", readiness=true. Elapsed: 10.027707541s Apr 24 13:12:52.672: INFO: Pod "pod-subpath-test-downwardapi-fpx6": Phase="Running", Reason="", readiness=true. Elapsed: 12.031759531s Apr 24 13:12:54.676: INFO: Pod "pod-subpath-test-downwardapi-fpx6": Phase="Running", Reason="", readiness=true. Elapsed: 14.035601668s Apr 24 13:12:56.681: INFO: Pod "pod-subpath-test-downwardapi-fpx6": Phase="Running", Reason="", readiness=true. Elapsed: 16.039902315s Apr 24 13:12:58.684: INFO: Pod "pod-subpath-test-downwardapi-fpx6": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.043540281s Apr 24 13:13:00.688: INFO: Pod "pod-subpath-test-downwardapi-fpx6": Phase="Running", Reason="", readiness=true. Elapsed: 20.047820735s Apr 24 13:13:02.693: INFO: Pod "pod-subpath-test-downwardapi-fpx6": Phase="Running", Reason="", readiness=true. Elapsed: 22.052810672s Apr 24 13:13:04.698: INFO: Pod "pod-subpath-test-downwardapi-fpx6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.057492448s STEP: Saw pod success Apr 24 13:13:04.698: INFO: Pod "pod-subpath-test-downwardapi-fpx6" satisfied condition "success or failure" Apr 24 13:13:04.702: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-downwardapi-fpx6 container test-container-subpath-downwardapi-fpx6: STEP: delete the pod Apr 24 13:13:04.773: INFO: Waiting for pod pod-subpath-test-downwardapi-fpx6 to disappear Apr 24 13:13:04.777: INFO: Pod pod-subpath-test-downwardapi-fpx6 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-fpx6 Apr 24 13:13:04.777: INFO: Deleting pod "pod-subpath-test-downwardapi-fpx6" in namespace "subpath-8927" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:13:04.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8927" for this suite. 
Apr 24 13:13:10.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:13:10.869: INFO: namespace subpath-8927 deletion completed in 6.087368199s • [SLOW TEST:30.344 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:13:10.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Apr 24 13:13:10.952: INFO: Waiting up to 5m0s for pod "var-expansion-bb37689b-f161-46ff-9e1a-df9cd50ea811" in namespace "var-expansion-1418" to be "success or failure" Apr 24 13:13:10.963: INFO: Pod "var-expansion-bb37689b-f161-46ff-9e1a-df9cd50ea811": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.215902ms Apr 24 13:13:12.967: INFO: Pod "var-expansion-bb37689b-f161-46ff-9e1a-df9cd50ea811": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015382434s Apr 24 13:13:14.972: INFO: Pod "var-expansion-bb37689b-f161-46ff-9e1a-df9cd50ea811": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020067111s STEP: Saw pod success Apr 24 13:13:14.972: INFO: Pod "var-expansion-bb37689b-f161-46ff-9e1a-df9cd50ea811" satisfied condition "success or failure" Apr 24 13:13:14.976: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-bb37689b-f161-46ff-9e1a-df9cd50ea811 container dapi-container: STEP: delete the pod Apr 24 13:13:15.001: INFO: Waiting for pod var-expansion-bb37689b-f161-46ff-9e1a-df9cd50ea811 to disappear Apr 24 13:13:15.005: INFO: Pod var-expansion-bb37689b-f161-46ff-9e1a-df9cd50ea811 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:13:15.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1418" for this suite. 
Apr 24 13:13:21.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:13:21.103: INFO: namespace var-expansion-1418 deletion completed in 6.095845411s • [SLOW TEST:10.234 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:13:21.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-af5c990b-9936-4b59-af70-bacb43951c61 STEP: Creating a pod to test consume secrets Apr 24 13:13:21.220: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e97b2c27-9c81-4d83-ba14-46c72c36abbc" in namespace "projected-8344" to be "success or failure" Apr 24 13:13:21.222: INFO: Pod "pod-projected-secrets-e97b2c27-9c81-4d83-ba14-46c72c36abbc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.594929ms Apr 24 13:13:23.227: INFO: Pod "pod-projected-secrets-e97b2c27-9c81-4d83-ba14-46c72c36abbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006762166s Apr 24 13:13:25.230: INFO: Pod "pod-projected-secrets-e97b2c27-9c81-4d83-ba14-46c72c36abbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010346827s STEP: Saw pod success Apr 24 13:13:25.230: INFO: Pod "pod-projected-secrets-e97b2c27-9c81-4d83-ba14-46c72c36abbc" satisfied condition "success or failure" Apr 24 13:13:25.232: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-e97b2c27-9c81-4d83-ba14-46c72c36abbc container projected-secret-volume-test: STEP: delete the pod Apr 24 13:13:25.253: INFO: Waiting for pod pod-projected-secrets-e97b2c27-9c81-4d83-ba14-46c72c36abbc to disappear Apr 24 13:13:25.258: INFO: Pod pod-projected-secrets-e97b2c27-9c81-4d83-ba14-46c72c36abbc no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:13:25.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8344" for this suite. 
Apr 24 13:13:31.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:13:31.366: INFO: namespace projected-8344 deletion completed in 6.104819743s • [SLOW TEST:10.262 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:13:31.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command Apr 24 13:13:31.458: INFO: Waiting up to 5m0s for pod "client-containers-45ecd863-e492-4936-a3e7-e415cfe1de18" in namespace "containers-8753" to be "success or failure" Apr 24 13:13:31.475: INFO: Pod "client-containers-45ecd863-e492-4936-a3e7-e415cfe1de18": Phase="Pending", Reason="", readiness=false. Elapsed: 16.338967ms Apr 24 13:13:33.479: INFO: Pod "client-containers-45ecd863-e492-4936-a3e7-e415cfe1de18": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.020227683s Apr 24 13:13:35.482: INFO: Pod "client-containers-45ecd863-e492-4936-a3e7-e415cfe1de18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023903924s STEP: Saw pod success Apr 24 13:13:35.482: INFO: Pod "client-containers-45ecd863-e492-4936-a3e7-e415cfe1de18" satisfied condition "success or failure" Apr 24 13:13:35.485: INFO: Trying to get logs from node iruya-worker2 pod client-containers-45ecd863-e492-4936-a3e7-e415cfe1de18 container test-container: STEP: delete the pod Apr 24 13:13:35.499: INFO: Waiting for pod client-containers-45ecd863-e492-4936-a3e7-e415cfe1de18 to disappear Apr 24 13:13:35.504: INFO: Pod client-containers-45ecd863-e492-4936-a3e7-e415cfe1de18 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:13:35.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8753" for this suite. Apr 24 13:13:41.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:13:41.592: INFO: namespace containers-8753 deletion completed in 6.084927071s • [SLOW TEST:10.226 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:13:41.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-524eceb7-541b-4f9e-8f1c-0832d873763f in namespace container-probe-119 Apr 24 13:13:45.670: INFO: Started pod liveness-524eceb7-541b-4f9e-8f1c-0832d873763f in namespace container-probe-119 STEP: checking the pod's current state and verifying that restartCount is present Apr 24 13:13:45.676: INFO: Initial restart count of pod liveness-524eceb7-541b-4f9e-8f1c-0832d873763f is 0 Apr 24 13:14:01.901: INFO: Restart count of pod container-probe-119/liveness-524eceb7-541b-4f9e-8f1c-0832d873763f is now 1 (16.225091552s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:14:01.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-119" for this suite. 
Apr 24 13:14:07.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:14:08.055: INFO: namespace container-probe-119 deletion completed in 6.097546038s • [SLOW TEST:26.463 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:14:08.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 24 13:14:08.157: INFO: Waiting up to 5m0s for pod "downwardapi-volume-34d034d5-e1ca-4e67-ac5a-6e9429262ee1" in namespace "downward-api-1363" to be "success or failure" Apr 24 13:14:08.176: INFO: Pod "downwardapi-volume-34d034d5-e1ca-4e67-ac5a-6e9429262ee1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.703706ms Apr 24 13:14:10.180: INFO: Pod "downwardapi-volume-34d034d5-e1ca-4e67-ac5a-6e9429262ee1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022664995s Apr 24 13:14:12.184: INFO: Pod "downwardapi-volume-34d034d5-e1ca-4e67-ac5a-6e9429262ee1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026636316s STEP: Saw pod success Apr 24 13:14:12.184: INFO: Pod "downwardapi-volume-34d034d5-e1ca-4e67-ac5a-6e9429262ee1" satisfied condition "success or failure" Apr 24 13:14:12.187: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-34d034d5-e1ca-4e67-ac5a-6e9429262ee1 container client-container: STEP: delete the pod Apr 24 13:14:12.207: INFO: Waiting for pod downwardapi-volume-34d034d5-e1ca-4e67-ac5a-6e9429262ee1 to disappear Apr 24 13:14:12.211: INFO: Pod downwardapi-volume-34d034d5-e1ca-4e67-ac5a-6e9429262ee1 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:14:12.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1363" for this suite. 
Apr 24 13:14:18.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:14:18.313: INFO: namespace downward-api-1363 deletion completed in 6.098136817s • [SLOW TEST:10.257 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:14:18.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-7330 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 24 13:14:18.375: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 24 13:14:46.614: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.3 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7330 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 24 13:14:46.614: INFO: >>> kubeConfig: 
/root/.kube/config I0424 13:14:46.647662 6 log.go:172] (0xc002682210) (0xc002429cc0) Create stream I0424 13:14:46.647783 6 log.go:172] (0xc002682210) (0xc002429cc0) Stream added, broadcasting: 1 I0424 13:14:46.650514 6 log.go:172] (0xc002682210) Reply frame received for 1 I0424 13:14:46.650579 6 log.go:172] (0xc002682210) (0xc0024a2c80) Create stream I0424 13:14:46.650597 6 log.go:172] (0xc002682210) (0xc0024a2c80) Stream added, broadcasting: 3 I0424 13:14:46.651767 6 log.go:172] (0xc002682210) Reply frame received for 3 I0424 13:14:46.651803 6 log.go:172] (0xc002682210) (0xc002429d60) Create stream I0424 13:14:46.651815 6 log.go:172] (0xc002682210) (0xc002429d60) Stream added, broadcasting: 5 I0424 13:14:46.653037 6 log.go:172] (0xc002682210) Reply frame received for 5 I0424 13:14:47.724607 6 log.go:172] (0xc002682210) Data frame received for 5 I0424 13:14:47.724644 6 log.go:172] (0xc002429d60) (5) Data frame handling I0424 13:14:47.724681 6 log.go:172] (0xc002682210) Data frame received for 3 I0424 13:14:47.724728 6 log.go:172] (0xc0024a2c80) (3) Data frame handling I0424 13:14:47.724758 6 log.go:172] (0xc0024a2c80) (3) Data frame sent I0424 13:14:47.724782 6 log.go:172] (0xc002682210) Data frame received for 3 I0424 13:14:47.724806 6 log.go:172] (0xc0024a2c80) (3) Data frame handling I0424 13:14:47.727139 6 log.go:172] (0xc002682210) Data frame received for 1 I0424 13:14:47.727168 6 log.go:172] (0xc002429cc0) (1) Data frame handling I0424 13:14:47.727180 6 log.go:172] (0xc002429cc0) (1) Data frame sent I0424 13:14:47.727205 6 log.go:172] (0xc002682210) (0xc002429cc0) Stream removed, broadcasting: 1 I0424 13:14:47.727229 6 log.go:172] (0xc002682210) Go away received I0424 13:14:47.727367 6 log.go:172] (0xc002682210) (0xc002429cc0) Stream removed, broadcasting: 1 I0424 13:14:47.727391 6 log.go:172] (0xc002682210) (0xc0024a2c80) Stream removed, broadcasting: 3 I0424 13:14:47.727413 6 log.go:172] (0xc002682210) (0xc002429d60) Stream removed, broadcasting: 5 Apr 24 
13:14:47.727: INFO: Found all expected endpoints: [netserver-0] Apr 24 13:14:47.731: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.55 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7330 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 24 13:14:47.731: INFO: >>> kubeConfig: /root/.kube/config I0424 13:14:47.761961 6 log.go:172] (0xc0021529a0) (0xc00117be00) Create stream I0424 13:14:47.761988 6 log.go:172] (0xc0021529a0) (0xc00117be00) Stream added, broadcasting: 1 I0424 13:14:47.764285 6 log.go:172] (0xc0021529a0) Reply frame received for 1 I0424 13:14:47.764311 6 log.go:172] (0xc0021529a0) (0xc0013b50e0) Create stream I0424 13:14:47.764319 6 log.go:172] (0xc0021529a0) (0xc0013b50e0) Stream added, broadcasting: 3 I0424 13:14:47.765093 6 log.go:172] (0xc0021529a0) Reply frame received for 3 I0424 13:14:47.765392 6 log.go:172] (0xc0021529a0) (0xc00117bea0) Create stream I0424 13:14:47.765406 6 log.go:172] (0xc0021529a0) (0xc00117bea0) Stream added, broadcasting: 5 I0424 13:14:47.766323 6 log.go:172] (0xc0021529a0) Reply frame received for 5 I0424 13:14:48.836109 6 log.go:172] (0xc0021529a0) Data frame received for 3 I0424 13:14:48.836144 6 log.go:172] (0xc0013b50e0) (3) Data frame handling I0424 13:14:48.836174 6 log.go:172] (0xc0013b50e0) (3) Data frame sent I0424 13:14:48.836196 6 log.go:172] (0xc0021529a0) Data frame received for 3 I0424 13:14:48.836211 6 log.go:172] (0xc0013b50e0) (3) Data frame handling I0424 13:14:48.836251 6 log.go:172] (0xc0021529a0) Data frame received for 5 I0424 13:14:48.836278 6 log.go:172] (0xc00117bea0) (5) Data frame handling I0424 13:14:48.838018 6 log.go:172] (0xc0021529a0) Data frame received for 1 I0424 13:14:48.838044 6 log.go:172] (0xc00117be00) (1) Data frame handling I0424 13:14:48.838086 6 log.go:172] (0xc00117be00) (1) Data frame sent I0424 13:14:48.838107 6 log.go:172] (0xc0021529a0) (0xc00117be00) Stream 
removed, broadcasting: 1 I0424 13:14:48.838128 6 log.go:172] (0xc0021529a0) Go away received I0424 13:14:48.838198 6 log.go:172] (0xc0021529a0) (0xc00117be00) Stream removed, broadcasting: 1 I0424 13:14:48.838216 6 log.go:172] (0xc0021529a0) (0xc0013b50e0) Stream removed, broadcasting: 3 I0424 13:14:48.838230 6 log.go:172] (0xc0021529a0) (0xc00117bea0) Stream removed, broadcasting: 5 Apr 24 13:14:48.838: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:14:48.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7330" for this suite. Apr 24 13:15:12.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:15:12.947: INFO: namespace pod-network-test-7330 deletion completed in 24.104884865s • [SLOW TEST:54.634 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:15:12.948: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-27ad26f1-7d48-4dea-b045-4a1d6cf2e2bb STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:15:17.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3070" for this suite. Apr 24 13:15:39.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:15:39.264: INFO: namespace configmap-3070 deletion completed in 22.086955025s • [SLOW TEST:26.316 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:15:39.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service 
account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-94e135ad-fadc-4ce1-8c26-d29a7adadd6f STEP: Creating a pod to test consume configMaps Apr 24 13:15:39.326: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4acf2107-f3a5-4965-bf71-5061da6beb36" in namespace "projected-3592" to be "success or failure" Apr 24 13:15:39.330: INFO: Pod "pod-projected-configmaps-4acf2107-f3a5-4965-bf71-5061da6beb36": Phase="Pending", Reason="", readiness=false. Elapsed: 3.938335ms Apr 24 13:15:41.334: INFO: Pod "pod-projected-configmaps-4acf2107-f3a5-4965-bf71-5061da6beb36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007256708s Apr 24 13:15:43.338: INFO: Pod "pod-projected-configmaps-4acf2107-f3a5-4965-bf71-5061da6beb36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011820848s STEP: Saw pod success Apr 24 13:15:43.338: INFO: Pod "pod-projected-configmaps-4acf2107-f3a5-4965-bf71-5061da6beb36" satisfied condition "success or failure" Apr 24 13:15:43.341: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-4acf2107-f3a5-4965-bf71-5061da6beb36 container projected-configmap-volume-test: STEP: delete the pod Apr 24 13:15:43.367: INFO: Waiting for pod pod-projected-configmaps-4acf2107-f3a5-4965-bf71-5061da6beb36 to disappear Apr 24 13:15:43.372: INFO: Pod pod-projected-configmaps-4acf2107-f3a5-4965-bf71-5061da6beb36 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:15:43.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3592" for this suite. 
Apr 24 13:15:49.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:15:49.463: INFO: namespace projected-3592 deletion completed in 6.088273806s • [SLOW TEST:10.198 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:15:49.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-dedd05e3-d0da-40c2-bca7-ff360510f4ee STEP: Creating a pod to test consume configMaps Apr 24 13:15:49.538: INFO: Waiting up to 5m0s for pod "pod-configmaps-d418d842-69ab-45ee-9756-0cb0cfd6a47b" in namespace "configmap-205" to be "success or failure" Apr 24 13:15:49.549: INFO: Pod "pod-configmaps-d418d842-69ab-45ee-9756-0cb0cfd6a47b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.211413ms Apr 24 13:15:51.552: INFO: Pod "pod-configmaps-d418d842-69ab-45ee-9756-0cb0cfd6a47b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01331609s Apr 24 13:15:53.556: INFO: Pod "pod-configmaps-d418d842-69ab-45ee-9756-0cb0cfd6a47b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017851852s STEP: Saw pod success Apr 24 13:15:53.556: INFO: Pod "pod-configmaps-d418d842-69ab-45ee-9756-0cb0cfd6a47b" satisfied condition "success or failure" Apr 24 13:15:53.559: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-d418d842-69ab-45ee-9756-0cb0cfd6a47b container configmap-volume-test: STEP: delete the pod Apr 24 13:15:53.626: INFO: Waiting for pod pod-configmaps-d418d842-69ab-45ee-9756-0cb0cfd6a47b to disappear Apr 24 13:15:53.650: INFO: Pod pod-configmaps-d418d842-69ab-45ee-9756-0cb0cfd6a47b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:15:53.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-205" for this suite. 
Apr 24 13:15:59.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:15:59.747: INFO: namespace configmap-205 deletion completed in 6.092679802s • [SLOW TEST:10.283 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:15:59.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:16:03.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3622" for this suite. 
Apr 24 13:16:53.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:16:53.998: INFO: namespace kubelet-test-3622 deletion completed in 50.121087467s • [SLOW TEST:54.251 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:16:53.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-68bd2f3d-23d4-4ce2-8409-d84e59877f86 STEP: Creating a pod to test consume secrets Apr 24 13:16:54.115: INFO: Waiting up to 5m0s for pod "pod-secrets-e920b3e7-3bb3-4153-ab3d-1bf4ae9ff205" in namespace "secrets-5644" to be "success or failure" Apr 24 13:16:54.118: INFO: Pod "pod-secrets-e920b3e7-3bb3-4153-ab3d-1bf4ae9ff205": Phase="Pending", Reason="", 
readiness=false. Elapsed: 2.811956ms Apr 24 13:16:56.123: INFO: Pod "pod-secrets-e920b3e7-3bb3-4153-ab3d-1bf4ae9ff205": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007169705s Apr 24 13:16:58.127: INFO: Pod "pod-secrets-e920b3e7-3bb3-4153-ab3d-1bf4ae9ff205": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011755091s STEP: Saw pod success Apr 24 13:16:58.127: INFO: Pod "pod-secrets-e920b3e7-3bb3-4153-ab3d-1bf4ae9ff205" satisfied condition "success or failure" Apr 24 13:16:58.131: INFO: Trying to get logs from node iruya-worker pod pod-secrets-e920b3e7-3bb3-4153-ab3d-1bf4ae9ff205 container secret-volume-test: STEP: delete the pod Apr 24 13:16:58.144: INFO: Waiting for pod pod-secrets-e920b3e7-3bb3-4153-ab3d-1bf4ae9ff205 to disappear Apr 24 13:16:58.149: INFO: Pod pod-secrets-e920b3e7-3bb3-4153-ab3d-1bf4ae9ff205 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:16:58.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5644" for this suite. 
Apr 24 13:17:04.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:17:04.265: INFO: namespace secrets-5644 deletion completed in 6.111998682s • [SLOW TEST:10.266 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:17:04.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-1dbb66a8-d9e7-41be-beb4-18670370516d STEP: Creating a pod to test consume configMaps Apr 24 13:17:04.358: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-66d53f54-a8f7-4b5f-a8da-729f938a4dde" in namespace "projected-4122" to be "success or failure" Apr 24 13:17:04.364: INFO: Pod "pod-projected-configmaps-66d53f54-a8f7-4b5f-a8da-729f938a4dde": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.36511ms Apr 24 13:17:06.368: INFO: Pod "pod-projected-configmaps-66d53f54-a8f7-4b5f-a8da-729f938a4dde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010159651s Apr 24 13:17:08.372: INFO: Pod "pod-projected-configmaps-66d53f54-a8f7-4b5f-a8da-729f938a4dde": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01431449s STEP: Saw pod success Apr 24 13:17:08.372: INFO: Pod "pod-projected-configmaps-66d53f54-a8f7-4b5f-a8da-729f938a4dde" satisfied condition "success or failure" Apr 24 13:17:08.375: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-66d53f54-a8f7-4b5f-a8da-729f938a4dde container projected-configmap-volume-test: STEP: delete the pod Apr 24 13:17:08.435: INFO: Waiting for pod pod-projected-configmaps-66d53f54-a8f7-4b5f-a8da-729f938a4dde to disappear Apr 24 13:17:08.451: INFO: Pod pod-projected-configmaps-66d53f54-a8f7-4b5f-a8da-729f938a4dde no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:17:08.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4122" for this suite. 
Apr 24 13:17:14.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:17:14.552: INFO: namespace projected-4122 deletion completed in 6.097799474s • [SLOW TEST:10.286 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:17:14.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8602.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8602.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8602.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8602.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8602.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8602.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 24 13:17:20.673: INFO: DNS probes using dns-8602/dns-test-66d35820-788f-47bc-a09f-e4d1999422cd succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:17:20.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8602" for this suite. 
Apr 24 13:17:26.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:17:26.901: INFO: namespace dns-8602 deletion completed in 6.160585973s • [SLOW TEST:12.349 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:17:26.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:17:26.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2098" for this suite. 
Apr 24 13:17:33.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:17:33.073: INFO: namespace services-2098 deletion completed in 6.081821865s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.171 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:17:33.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-3764 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-3764 STEP: Waiting until 
all stateful set ss replicas will be running in namespace statefulset-3764 Apr 24 13:17:33.132: INFO: Found 0 stateful pods, waiting for 1 Apr 24 13:17:43.138: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 24 13:17:43.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3764 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 24 13:17:43.398: INFO: stderr: "I0424 13:17:43.265650 349 log.go:172] (0xc000476630) (0xc000628b40) Create stream\nI0424 13:17:43.265763 349 log.go:172] (0xc000476630) (0xc000628b40) Stream added, broadcasting: 1\nI0424 13:17:43.268583 349 log.go:172] (0xc000476630) Reply frame received for 1\nI0424 13:17:43.268668 349 log.go:172] (0xc000476630) (0xc000994000) Create stream\nI0424 13:17:43.268709 349 log.go:172] (0xc000476630) (0xc000994000) Stream added, broadcasting: 3\nI0424 13:17:43.270406 349 log.go:172] (0xc000476630) Reply frame received for 3\nI0424 13:17:43.270434 349 log.go:172] (0xc000476630) (0xc0009940a0) Create stream\nI0424 13:17:43.270442 349 log.go:172] (0xc000476630) (0xc0009940a0) Stream added, broadcasting: 5\nI0424 13:17:43.271368 349 log.go:172] (0xc000476630) Reply frame received for 5\nI0424 13:17:43.351959 349 log.go:172] (0xc000476630) Data frame received for 5\nI0424 13:17:43.352005 349 log.go:172] (0xc0009940a0) (5) Data frame handling\nI0424 13:17:43.352033 349 log.go:172] (0xc0009940a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0424 13:17:43.389836 349 log.go:172] (0xc000476630) Data frame received for 3\nI0424 13:17:43.389888 349 log.go:172] (0xc000994000) (3) Data frame handling\nI0424 13:17:43.389945 349 log.go:172] (0xc000994000) (3) Data frame sent\nI0424 13:17:43.389975 349 log.go:172] (0xc000476630) Data frame received for 3\nI0424 13:17:43.389997 349 log.go:172] 
(0xc000994000) (3) Data frame handling\nI0424 13:17:43.390022 349 log.go:172] (0xc000476630) Data frame received for 5\nI0424 13:17:43.390053 349 log.go:172] (0xc0009940a0) (5) Data frame handling\nI0424 13:17:43.391885 349 log.go:172] (0xc000476630) Data frame received for 1\nI0424 13:17:43.391932 349 log.go:172] (0xc000628b40) (1) Data frame handling\nI0424 13:17:43.391971 349 log.go:172] (0xc000628b40) (1) Data frame sent\nI0424 13:17:43.391995 349 log.go:172] (0xc000476630) (0xc000628b40) Stream removed, broadcasting: 1\nI0424 13:17:43.392037 349 log.go:172] (0xc000476630) Go away received\nI0424 13:17:43.392442 349 log.go:172] (0xc000476630) (0xc000628b40) Stream removed, broadcasting: 1\nI0424 13:17:43.392468 349 log.go:172] (0xc000476630) (0xc000994000) Stream removed, broadcasting: 3\nI0424 13:17:43.392480 349 log.go:172] (0xc000476630) (0xc0009940a0) Stream removed, broadcasting: 5\n" Apr 24 13:17:43.398: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 24 13:17:43.398: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 24 13:17:43.402: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 24 13:17:53.408: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 24 13:17:53.408: INFO: Waiting for statefulset status.replicas updated to 0 Apr 24 13:17:53.422: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999611s Apr 24 13:17:54.426: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.994335245s Apr 24 13:17:55.431: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.990133691s Apr 24 13:17:56.436: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.984846158s Apr 24 13:17:57.441: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.980002315s Apr 24 13:17:58.445: INFO: Verifying statefulset 
ss doesn't scale past 1 for another 4.975376284s Apr 24 13:17:59.450: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.97079822s Apr 24 13:18:00.455: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.966343984s Apr 24 13:18:01.459: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.961427872s Apr 24 13:18:02.464: INFO: Verifying statefulset ss doesn't scale past 1 for another 957.125455ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3764 Apr 24 13:18:03.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3764 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 24 13:18:03.700: INFO: stderr: "I0424 13:18:03.601931 371 log.go:172] (0xc0001168f0) (0xc0005c2a00) Create stream\nI0424 13:18:03.601996 371 log.go:172] (0xc0001168f0) (0xc0005c2a00) Stream added, broadcasting: 1\nI0424 13:18:03.607779 371 log.go:172] (0xc0001168f0) Reply frame received for 1\nI0424 13:18:03.607826 371 log.go:172] (0xc0001168f0) (0xc0005c2140) Create stream\nI0424 13:18:03.607840 371 log.go:172] (0xc0001168f0) (0xc0005c2140) Stream added, broadcasting: 3\nI0424 13:18:03.608749 371 log.go:172] (0xc0001168f0) Reply frame received for 3\nI0424 13:18:03.608794 371 log.go:172] (0xc0001168f0) (0xc000184000) Create stream\nI0424 13:18:03.608810 371 log.go:172] (0xc0001168f0) (0xc000184000) Stream added, broadcasting: 5\nI0424 13:18:03.609901 371 log.go:172] (0xc0001168f0) Reply frame received for 5\nI0424 13:18:03.692491 371 log.go:172] (0xc0001168f0) Data frame received for 3\nI0424 13:18:03.692534 371 log.go:172] (0xc0005c2140) (3) Data frame handling\nI0424 13:18:03.692547 371 log.go:172] (0xc0005c2140) (3) Data frame sent\nI0424 13:18:03.692557 371 log.go:172] (0xc0001168f0) Data frame received for 3\nI0424 13:18:03.692565 371 log.go:172] (0xc0005c2140) (3) Data frame handling\nI0424 
13:18:03.692593 371 log.go:172] (0xc0001168f0) Data frame received for 5\nI0424 13:18:03.692602 371 log.go:172] (0xc000184000) (5) Data frame handling\nI0424 13:18:03.692618 371 log.go:172] (0xc000184000) (5) Data frame sent\nI0424 13:18:03.692627 371 log.go:172] (0xc0001168f0) Data frame received for 5\nI0424 13:18:03.692635 371 log.go:172] (0xc000184000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0424 13:18:03.694448 371 log.go:172] (0xc0001168f0) Data frame received for 1\nI0424 13:18:03.694471 371 log.go:172] (0xc0005c2a00) (1) Data frame handling\nI0424 13:18:03.694481 371 log.go:172] (0xc0005c2a00) (1) Data frame sent\nI0424 13:18:03.694501 371 log.go:172] (0xc0001168f0) (0xc0005c2a00) Stream removed, broadcasting: 1\nI0424 13:18:03.694525 371 log.go:172] (0xc0001168f0) Go away received\nI0424 13:18:03.694849 371 log.go:172] (0xc0001168f0) (0xc0005c2a00) Stream removed, broadcasting: 1\nI0424 13:18:03.694864 371 log.go:172] (0xc0001168f0) (0xc0005c2140) Stream removed, broadcasting: 3\nI0424 13:18:03.694871 371 log.go:172] (0xc0001168f0) (0xc000184000) Stream removed, broadcasting: 5\n" Apr 24 13:18:03.700: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 24 13:18:03.700: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 24 13:18:03.704: INFO: Found 1 stateful pods, waiting for 3 Apr 24 13:18:13.708: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 24 13:18:13.708: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 24 13:18:13.708: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 24 13:18:13.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-3764 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 24 13:18:13.933: INFO: stderr: "I0424 13:18:13.854300 393 log.go:172] (0xc0009ea420) (0xc0002c2820) Create stream\nI0424 13:18:13.854361 393 log.go:172] (0xc0009ea420) (0xc0002c2820) Stream added, broadcasting: 1\nI0424 13:18:13.856583 393 log.go:172] (0xc0009ea420) Reply frame received for 1\nI0424 13:18:13.856639 393 log.go:172] (0xc0009ea420) (0xc0007fe000) Create stream\nI0424 13:18:13.856657 393 log.go:172] (0xc0009ea420) (0xc0007fe000) Stream added, broadcasting: 3\nI0424 13:18:13.858093 393 log.go:172] (0xc0009ea420) Reply frame received for 3\nI0424 13:18:13.858138 393 log.go:172] (0xc0009ea420) (0xc0002c28c0) Create stream\nI0424 13:18:13.858155 393 log.go:172] (0xc0009ea420) (0xc0002c28c0) Stream added, broadcasting: 5\nI0424 13:18:13.859537 393 log.go:172] (0xc0009ea420) Reply frame received for 5\nI0424 13:18:13.925332 393 log.go:172] (0xc0009ea420) Data frame received for 3\nI0424 13:18:13.925393 393 log.go:172] (0xc0007fe000) (3) Data frame handling\nI0424 13:18:13.925419 393 log.go:172] (0xc0007fe000) (3) Data frame sent\nI0424 13:18:13.925439 393 log.go:172] (0xc0009ea420) Data frame received for 3\nI0424 13:18:13.925456 393 log.go:172] (0xc0007fe000) (3) Data frame handling\nI0424 13:18:13.925521 393 log.go:172] (0xc0009ea420) Data frame received for 5\nI0424 13:18:13.925583 393 log.go:172] (0xc0002c28c0) (5) Data frame handling\nI0424 13:18:13.925608 393 log.go:172] (0xc0002c28c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0424 13:18:13.925695 393 log.go:172] (0xc0009ea420) Data frame received for 5\nI0424 13:18:13.925722 393 log.go:172] (0xc0002c28c0) (5) Data frame handling\nI0424 13:18:13.927372 393 log.go:172] (0xc0009ea420) Data frame received for 1\nI0424 13:18:13.927411 393 log.go:172] (0xc0002c2820) (1) Data frame handling\nI0424 13:18:13.927444 393 log.go:172] (0xc0002c2820) (1) Data frame sent\nI0424 
13:18:13.927472 393 log.go:172] (0xc0009ea420) (0xc0002c2820) Stream removed, broadcasting: 1\nI0424 13:18:13.927508 393 log.go:172] (0xc0009ea420) Go away received\nI0424 13:18:13.927815 393 log.go:172] (0xc0009ea420) (0xc0002c2820) Stream removed, broadcasting: 1\nI0424 13:18:13.927836 393 log.go:172] (0xc0009ea420) (0xc0007fe000) Stream removed, broadcasting: 3\nI0424 13:18:13.927844 393 log.go:172] (0xc0009ea420) (0xc0002c28c0) Stream removed, broadcasting: 5\n" Apr 24 13:18:13.933: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 24 13:18:13.933: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 24 13:18:13.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3764 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 24 13:18:14.190: INFO: stderr: "I0424 13:18:14.079294 415 log.go:172] (0xc0009480b0) (0xc00087a0a0) Create stream\nI0424 13:18:14.079340 415 log.go:172] (0xc0009480b0) (0xc00087a0a0) Stream added, broadcasting: 1\nI0424 13:18:14.090428 415 log.go:172] (0xc0009480b0) Reply frame received for 1\nI0424 13:18:14.090484 415 log.go:172] (0xc0009480b0) (0xc000934000) Create stream\nI0424 13:18:14.090497 415 log.go:172] (0xc0009480b0) (0xc000934000) Stream added, broadcasting: 3\nI0424 13:18:14.092243 415 log.go:172] (0xc0009480b0) Reply frame received for 3\nI0424 13:18:14.092287 415 log.go:172] (0xc0009480b0) (0xc0005fea00) Create stream\nI0424 13:18:14.092305 415 log.go:172] (0xc0009480b0) (0xc0005fea00) Stream added, broadcasting: 5\nI0424 13:18:14.092913 415 log.go:172] (0xc0009480b0) Reply frame received for 5\nI0424 13:18:14.148749 415 log.go:172] (0xc0009480b0) Data frame received for 5\nI0424 13:18:14.148776 415 log.go:172] (0xc0005fea00) (5) Data frame handling\nI0424 13:18:14.148792 415 log.go:172] (0xc0005fea00) (5) Data frame sent\n+ mv -v 
/usr/share/nginx/html/index.html /tmp/\nI0424 13:18:14.181612 415 log.go:172] (0xc0009480b0) Data frame received for 3\nI0424 13:18:14.181654 415 log.go:172] (0xc000934000) (3) Data frame handling\nI0424 13:18:14.181698 415 log.go:172] (0xc000934000) (3) Data frame sent\nI0424 13:18:14.181863 415 log.go:172] (0xc0009480b0) Data frame received for 5\nI0424 13:18:14.181882 415 log.go:172] (0xc0005fea00) (5) Data frame handling\nI0424 13:18:14.182065 415 log.go:172] (0xc0009480b0) Data frame received for 3\nI0424 13:18:14.182086 415 log.go:172] (0xc000934000) (3) Data frame handling\nI0424 13:18:14.183892 415 log.go:172] (0xc0009480b0) Data frame received for 1\nI0424 13:18:14.183916 415 log.go:172] (0xc00087a0a0) (1) Data frame handling\nI0424 13:18:14.183937 415 log.go:172] (0xc00087a0a0) (1) Data frame sent\nI0424 13:18:14.183960 415 log.go:172] (0xc0009480b0) (0xc00087a0a0) Stream removed, broadcasting: 1\nI0424 13:18:14.184110 415 log.go:172] (0xc0009480b0) Go away received\nI0424 13:18:14.184475 415 log.go:172] (0xc0009480b0) (0xc00087a0a0) Stream removed, broadcasting: 1\nI0424 13:18:14.184504 415 log.go:172] (0xc0009480b0) (0xc000934000) Stream removed, broadcasting: 3\nI0424 13:18:14.184517 415 log.go:172] (0xc0009480b0) (0xc0005fea00) Stream removed, broadcasting: 5\n" Apr 24 13:18:14.190: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 24 13:18:14.190: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 24 13:18:14.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3764 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 24 13:18:14.441: INFO: stderr: "I0424 13:18:14.318070 436 log.go:172] (0xc0006dea50) (0xc0005008c0) Create stream\nI0424 13:18:14.318128 436 log.go:172] (0xc0006dea50) (0xc0005008c0) Stream added, broadcasting: 1\nI0424 13:18:14.320512 436 
log.go:172] (0xc0006dea50) Reply frame received for 1\nI0424 13:18:14.320566 436 log.go:172] (0xc0006dea50) (0xc00080c000) Create stream\nI0424 13:18:14.320585 436 log.go:172] (0xc0006dea50) (0xc00080c000) Stream added, broadcasting: 3\nI0424 13:18:14.321696 436 log.go:172] (0xc0006dea50) Reply frame received for 3\nI0424 13:18:14.321734 436 log.go:172] (0xc0006dea50) (0xc000500960) Create stream\nI0424 13:18:14.321755 436 log.go:172] (0xc0006dea50) (0xc000500960) Stream added, broadcasting: 5\nI0424 13:18:14.322782 436 log.go:172] (0xc0006dea50) Reply frame received for 5\nI0424 13:18:14.389641 436 log.go:172] (0xc0006dea50) Data frame received for 5\nI0424 13:18:14.389672 436 log.go:172] (0xc000500960) (5) Data frame handling\nI0424 13:18:14.389710 436 log.go:172] (0xc000500960) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0424 13:18:14.433498 436 log.go:172] (0xc0006dea50) Data frame received for 3\nI0424 13:18:14.433524 436 log.go:172] (0xc00080c000) (3) Data frame handling\nI0424 13:18:14.433545 436 log.go:172] (0xc00080c000) (3) Data frame sent\nI0424 13:18:14.433770 436 log.go:172] (0xc0006dea50) Data frame received for 5\nI0424 13:18:14.433804 436 log.go:172] (0xc000500960) (5) Data frame handling\nI0424 13:18:14.433826 436 log.go:172] (0xc0006dea50) Data frame received for 3\nI0424 13:18:14.433836 436 log.go:172] (0xc00080c000) (3) Data frame handling\nI0424 13:18:14.435514 436 log.go:172] (0xc0006dea50) Data frame received for 1\nI0424 13:18:14.435554 436 log.go:172] (0xc0005008c0) (1) Data frame handling\nI0424 13:18:14.435588 436 log.go:172] (0xc0005008c0) (1) Data frame sent\nI0424 13:18:14.435614 436 log.go:172] (0xc0006dea50) (0xc0005008c0) Stream removed, broadcasting: 1\nI0424 13:18:14.435643 436 log.go:172] (0xc0006dea50) Go away received\nI0424 13:18:14.436109 436 log.go:172] (0xc0006dea50) (0xc0005008c0) Stream removed, broadcasting: 1\nI0424 13:18:14.436151 436 log.go:172] (0xc0006dea50) (0xc00080c000) Stream removed, 
broadcasting: 3\nI0424 13:18:14.436175 436 log.go:172] (0xc0006dea50) (0xc000500960) Stream removed, broadcasting: 5\n" Apr 24 13:18:14.442: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 24 13:18:14.442: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 24 13:18:14.442: INFO: Waiting for statefulset status.replicas updated to 0 Apr 24 13:18:14.446: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 24 13:18:24.479: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 24 13:18:24.479: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 24 13:18:24.479: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 24 13:18:24.489: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999561s Apr 24 13:18:25.494: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995648895s Apr 24 13:18:26.499: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.990294551s Apr 24 13:18:27.505: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.985416873s Apr 24 13:18:28.508: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.980200131s Apr 24 13:18:29.514: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.976332542s Apr 24 13:18:30.519: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.970782741s Apr 24 13:18:31.543: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.965983192s Apr 24 13:18:32.548: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.941903289s Apr 24 13:18:33.553: INFO: Verifying statefulset ss doesn't scale past 3 for another 936.89382ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-3764 Apr 24 
13:18:34.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3764 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 24 13:18:34.832: INFO: stderr: "I0424 13:18:34.695529 456 log.go:172] (0xc0009fc420) (0xc0006bc6e0) Create stream\nI0424 13:18:34.695609 456 log.go:172] (0xc0009fc420) (0xc0006bc6e0) Stream added, broadcasting: 1\nI0424 13:18:34.699755 456 log.go:172] (0xc0009fc420) Reply frame received for 1\nI0424 13:18:34.699805 456 log.go:172] (0xc0009fc420) (0xc000696460) Create stream\nI0424 13:18:34.699824 456 log.go:172] (0xc0009fc420) (0xc000696460) Stream added, broadcasting: 3\nI0424 13:18:34.700704 456 log.go:172] (0xc0009fc420) Reply frame received for 3\nI0424 13:18:34.700735 456 log.go:172] (0xc0009fc420) (0xc0006bc000) Create stream\nI0424 13:18:34.700743 456 log.go:172] (0xc0009fc420) (0xc0006bc000) Stream added, broadcasting: 5\nI0424 13:18:34.701905 456 log.go:172] (0xc0009fc420) Reply frame received for 5\nI0424 13:18:34.824630 456 log.go:172] (0xc0009fc420) Data frame received for 3\nI0424 13:18:34.824650 456 log.go:172] (0xc000696460) (3) Data frame handling\nI0424 13:18:34.824671 456 log.go:172] (0xc0009fc420) Data frame received for 5\nI0424 13:18:34.824701 456 log.go:172] (0xc0006bc000) (5) Data frame handling\nI0424 13:18:34.824725 456 log.go:172] (0xc0006bc000) (5) Data frame sent\nI0424 13:18:34.824741 456 log.go:172] (0xc0009fc420) Data frame received for 5\nI0424 13:18:34.824754 456 log.go:172] (0xc0006bc000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0424 13:18:34.824801 456 log.go:172] (0xc000696460) (3) Data frame sent\nI0424 13:18:34.824877 456 log.go:172] (0xc0009fc420) Data frame received for 3\nI0424 13:18:34.824896 456 log.go:172] (0xc000696460) (3) Data frame handling\nI0424 13:18:34.826884 456 log.go:172] (0xc0009fc420) Data frame received for 1\nI0424 13:18:34.826899 456 log.go:172] (0xc0006bc6e0) (1) Data frame 
handling\nI0424 13:18:34.826911 456 log.go:172] (0xc0006bc6e0) (1) Data frame sent\nI0424 13:18:34.826924 456 log.go:172] (0xc0009fc420) (0xc0006bc6e0) Stream removed, broadcasting: 1\nI0424 13:18:34.827136 456 log.go:172] (0xc0009fc420) Go away received\nI0424 13:18:34.827240 456 log.go:172] (0xc0009fc420) (0xc0006bc6e0) Stream removed, broadcasting: 1\nI0424 13:18:34.827312 456 log.go:172] (0xc0009fc420) (0xc000696460) Stream removed, broadcasting: 3\nI0424 13:18:34.827385 456 log.go:172] (0xc0009fc420) (0xc0006bc000) Stream removed, broadcasting: 5\n" Apr 24 13:18:34.832: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 24 13:18:34.832: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 24 13:18:34.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3764 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 24 13:18:35.040: INFO: stderr: "I0424 13:18:34.958129 477 log.go:172] (0xc000358370) (0xc0007006e0) Create stream\nI0424 13:18:34.958178 477 log.go:172] (0xc000358370) (0xc0007006e0) Stream added, broadcasting: 1\nI0424 13:18:34.960092 477 log.go:172] (0xc000358370) Reply frame received for 1\nI0424 13:18:34.960128 477 log.go:172] (0xc000358370) (0xc00059e280) Create stream\nI0424 13:18:34.960139 477 log.go:172] (0xc000358370) (0xc00059e280) Stream added, broadcasting: 3\nI0424 13:18:34.960899 477 log.go:172] (0xc000358370) Reply frame received for 3\nI0424 13:18:34.960942 477 log.go:172] (0xc000358370) (0xc000700780) Create stream\nI0424 13:18:34.960958 477 log.go:172] (0xc000358370) (0xc000700780) Stream added, broadcasting: 5\nI0424 13:18:34.962773 477 log.go:172] (0xc000358370) Reply frame received for 5\nI0424 13:18:35.032485 477 log.go:172] (0xc000358370) Data frame received for 3\nI0424 13:18:35.032508 477 log.go:172] (0xc00059e280) (3) Data frame 
handling\nI0424 13:18:35.032522 477 log.go:172] (0xc00059e280) (3) Data frame sent\nI0424 13:18:35.032764 477 log.go:172] (0xc000358370) Data frame received for 5\nI0424 13:18:35.032797 477 log.go:172] (0xc000700780) (5) Data frame handling\nI0424 13:18:35.032812 477 log.go:172] (0xc000700780) (5) Data frame sent\nI0424 13:18:35.032822 477 log.go:172] (0xc000358370) Data frame received for 5\nI0424 13:18:35.032829 477 log.go:172] (0xc000700780) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0424 13:18:35.032849 477 log.go:172] (0xc000358370) Data frame received for 3\nI0424 13:18:35.032856 477 log.go:172] (0xc00059e280) (3) Data frame handling\nI0424 13:18:35.034496 477 log.go:172] (0xc000358370) Data frame received for 1\nI0424 13:18:35.034516 477 log.go:172] (0xc0007006e0) (1) Data frame handling\nI0424 13:18:35.034526 477 log.go:172] (0xc0007006e0) (1) Data frame sent\nI0424 13:18:35.034534 477 log.go:172] (0xc000358370) (0xc0007006e0) Stream removed, broadcasting: 1\nI0424 13:18:35.034565 477 log.go:172] (0xc000358370) Go away received\nI0424 13:18:35.034940 477 log.go:172] (0xc000358370) (0xc0007006e0) Stream removed, broadcasting: 1\nI0424 13:18:35.034958 477 log.go:172] (0xc000358370) (0xc00059e280) Stream removed, broadcasting: 3\nI0424 13:18:35.034965 477 log.go:172] (0xc000358370) (0xc000700780) Stream removed, broadcasting: 5\n" Apr 24 13:18:35.040: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 24 13:18:35.040: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 24 13:18:35.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3764 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 24 13:18:35.244: INFO: stderr: "I0424 13:18:35.169000 496 log.go:172] (0xc0002bc000) (0xc0008881e0) Create stream\nI0424 13:18:35.169067 496 log.go:172] 
(0xc0002bc000) (0xc0008881e0) Stream added, broadcasting: 1\nI0424 13:18:35.171560 496 log.go:172] (0xc0002bc000) Reply frame received for 1\nI0424 13:18:35.171595 496 log.go:172] (0xc0002bc000) (0xc00088e0a0) Create stream\nI0424 13:18:35.171605 496 log.go:172] (0xc0002bc000) (0xc00088e0a0) Stream added, broadcasting: 3\nI0424 13:18:35.172551 496 log.go:172] (0xc0002bc000) Reply frame received for 3\nI0424 13:18:35.172572 496 log.go:172] (0xc0002bc000) (0xc00088e140) Create stream\nI0424 13:18:35.172584 496 log.go:172] (0xc0002bc000) (0xc00088e140) Stream added, broadcasting: 5\nI0424 13:18:35.174007 496 log.go:172] (0xc0002bc000) Reply frame received for 5\nI0424 13:18:35.236366 496 log.go:172] (0xc0002bc000) Data frame received for 5\nI0424 13:18:35.236394 496 log.go:172] (0xc00088e140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0424 13:18:35.236425 496 log.go:172] (0xc0002bc000) Data frame received for 3\nI0424 13:18:35.236462 496 log.go:172] (0xc00088e0a0) (3) Data frame handling\nI0424 13:18:35.236480 496 log.go:172] (0xc00088e0a0) (3) Data frame sent\nI0424 13:18:35.236492 496 log.go:172] (0xc0002bc000) Data frame received for 3\nI0424 13:18:35.236507 496 log.go:172] (0xc00088e0a0) (3) Data frame handling\nI0424 13:18:35.236542 496 log.go:172] (0xc00088e140) (5) Data frame sent\nI0424 13:18:35.236563 496 log.go:172] (0xc0002bc000) Data frame received for 5\nI0424 13:18:35.236581 496 log.go:172] (0xc00088e140) (5) Data frame handling\nI0424 13:18:35.238365 496 log.go:172] (0xc0002bc000) Data frame received for 1\nI0424 13:18:35.238391 496 log.go:172] (0xc0008881e0) (1) Data frame handling\nI0424 13:18:35.238413 496 log.go:172] (0xc0008881e0) (1) Data frame sent\nI0424 13:18:35.238456 496 log.go:172] (0xc0002bc000) (0xc0008881e0) Stream removed, broadcasting: 1\nI0424 13:18:35.238499 496 log.go:172] (0xc0002bc000) Go away received\nI0424 13:18:35.238907 496 log.go:172] (0xc0002bc000) (0xc0008881e0) Stream removed, broadcasting: 
1\nI0424 13:18:35.238928 496 log.go:172] (0xc0002bc000) (0xc00088e0a0) Stream removed, broadcasting: 3\nI0424 13:18:35.238944 496 log.go:172] (0xc0002bc000) (0xc00088e140) Stream removed, broadcasting: 5\n" Apr 24 13:18:35.244: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 24 13:18:35.244: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 24 13:18:35.244: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 24 13:18:55.260: INFO: Deleting all statefulset in ns statefulset-3764 Apr 24 13:18:55.263: INFO: Scaling statefulset ss to 0 Apr 24 13:18:55.272: INFO: Waiting for statefulset status.replicas updated to 0 Apr 24 13:18:55.274: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:18:55.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3764" for this suite. 
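The test above verifies three StatefulSet guarantees: pods are created in ascending ordinal order (ss-0, ss-1, ss-2), deleted in descending order, and scaling halts while any pod is unhealthy (the `mv` of `index.html` breaks the nginx readiness probe on purpose, which is why "doesn't scale past N" is re-checked for 10 seconds). A minimal sketch of that decision logic follows; it is a simplified illustrative model, not the StatefulSet controller's actual code, and `next_action` and its arguments are invented for this example.

```python
# Simplified model of the ordered-scaling semantics this conformance test
# verifies. Not the real controller: just the ordering/halting rules.

def next_action(ready, current, desired):
    """Decide the next scaling step for a StatefulSet.

    ready:   dict mapping pod ordinal -> readiness (True/False)
    current: number of pods that currently exist
    desired: spec.replicas
    """
    if any(not ok for ok in ready.values()):
        return "halt"                    # an unhealthy pod blocks all scaling
    if current < desired:
        return ("create", current)       # next ordinal up, e.g. ss-<current>
    if current > desired:
        return ("delete", current - 1)   # highest ordinal is removed first
    return "steady"
```

With ss-0 not Ready, the set stays at 1 replica no matter the desired count, which is exactly the "Verifying statefulset ss doesn't scale past 1" loop in the log.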
Apr 24 13:19:01.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:19:01.389: INFO: namespace statefulset-3764 deletion completed in 6.101001776s • [SLOW TEST:88.317 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:19:01.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7062.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7062.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 24 13:19:07.525: INFO: DNS probes using dns-7062/dns-test-7666e526-0754-48b0-8a7d-5b66322e4bff succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:19:07.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7062" for this suite. 
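The `awk -F. '{print $1"-"$2"-"$3"-"$4".<ns>.pod.cluster.local"}'` fragment in the probe scripts above builds the pod's A record name: the pod IP with dots replaced by dashes, suffixed with the namespace and `pod.cluster.local`. A small sketch of the same transformation (the helper name and the sample IP are invented for illustration):

```python
# Mirrors the awk pipeline from the DNS probe script: a pod IP such as
# 10.244.1.5 in namespace dns-7062 resolves via the A record
# 10-244-1-5.dns-7062.pod.cluster.local.

def pod_a_record(pod_ip: str, namespace: str) -> str:
    return pod_ip.replace(".", "-") + f".{namespace}.pod.cluster.local"
```

The probe pods then run `dig` (over both UDP and TCP) against that name and write `OK` to a results file, which the test polls until every expected name has resolved.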
Apr 24 13:19:13.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:19:13.749: INFO: namespace dns-7062 deletion completed in 6.181531146s • [SLOW TEST:12.359 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:19:13.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Apr 24 13:19:18.917: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:19:19.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2233" for this suite. 
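The adopt/release behavior exercised above hinges entirely on label-selector matching: an orphan pod whose labels satisfy the ReplicaSet's selector is adopted, and changing that label afterwards releases it. A sketch of equality-based selector matching (a simplified model assuming only `matchLabels`, ignoring `matchExpressions` and ownerReference bookkeeping):

```python
# Equality-based label-selector matching, the core of ReplicaSet
# adoption/release. Real controllers also handle matchExpressions
# and set/clear ownerReferences; this sketch covers only matchLabels.

def matches(selector: dict, labels: dict) -> bool:
    return all(labels.get(k) == v for k, v in selector.items())
```

In the test, the pod starts with a `name` label matching the selector (adopted), then the label is changed so the selector no longer matches (released), at which point the ReplicaSet creates a replacement pod to restore its replica count.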
Apr 24 13:19:42.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:19:42.107: INFO: namespace replicaset-2233 deletion completed in 22.167948616s • [SLOW TEST:28.357 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:19:42.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 24 13:19:42.156: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Apr 24 13:19:42.192: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 24 13:19:47.196: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 24 13:19:47.196: INFO: Creating deployment "test-rolling-update-deployment" Apr 24 13:19:47.201: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next 
revision from the one the adopted replica set "test-rolling-update-controller" has Apr 24 13:19:47.229: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 24 13:19:49.278: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 24 13:19:49.281: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723331187, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723331187, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723331187, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723331187, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 24 13:19:51.285: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 24 13:19:51.294: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-9598,SelfLink:/apis/apps/v1/namespaces/deployment-9598/deployments/test-rolling-update-deployment,UID:8783a724-187a-47c1-ae31-38ca4a2d01b8,ResourceVersion:7179332,Generation:1,CreationTimestamp:2020-04-24 13:19:47 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-04-24 13:19:47 +0000 UTC 2020-04-24 13:19:47 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-04-24 13:19:50 +0000 UTC 2020-04-24 13:19:47 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Apr 24 13:19:51.297: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-9598,SelfLink:/apis/apps/v1/namespaces/deployment-9598/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:bbd272f6-8a4e-42d9-9f4d-fead71f8bcdd,ResourceVersion:7179321,Generation:1,CreationTimestamp:2020-04-24 13:19:47 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 8783a724-187a-47c1-ae31-38ca4a2d01b8 0xc003065307 0xc003065308}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 24 13:19:51.297: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 24 13:19:51.298: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-9598,SelfLink:/apis/apps/v1/namespaces/deployment-9598/replicasets/test-rolling-update-controller,UID:22871a57-a8fd-420c-a4f0-8ee381ded783,ResourceVersion:7179330,Generation:2,CreationTimestamp:2020-04-24 13:19:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 8783a724-187a-47c1-ae31-38ca4a2d01b8 0xc003065237 0xc003065238}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 24 13:19:51.302: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-bt49t" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-bt49t,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-9598,SelfLink:/api/v1/namespaces/deployment-9598/pods/test-rolling-update-deployment-79f6b9d75c-bt49t,UID:12a0aa61-6317-485b-87a6-baa1b56ac855,ResourceVersion:7179320,Generation:0,CreationTimestamp:2020-04-24 13:19:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c bbd272f6-8a4e-42d9-9f4d-fead71f8bcdd 0xc002830cb7 0xc002830cb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zk89k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zk89k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-zk89k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002830d40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002830d60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:19:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:19:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:19:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:19:47 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.13,StartTime:2020-04-24 13:19:47 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-04-24 13:19:49 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://6e9d34db857d34610442b93b6a059456f492b2c5e9a4388c393006b249837a82}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:19:51.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "deployment-9598" for this suite. Apr 24 13:19:57.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:19:57.390: INFO: namespace deployment-9598 deletion completed in 6.083854911s • [SLOW TEST:15.282 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:19:57.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-7c1c98dd-b977-4ff5-9dc0-8e59b857c396 STEP: Creating a pod to test consume secrets Apr 24 13:19:57.490: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c101a6b4-0cee-474d-878e-c0ccad059c17" in namespace "projected-220" to be "success or failure" Apr 24 13:19:57.494: INFO: Pod "pod-projected-secrets-c101a6b4-0cee-474d-878e-c0ccad059c17": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.189335ms Apr 24 13:19:59.497: INFO: Pod "pod-projected-secrets-c101a6b4-0cee-474d-878e-c0ccad059c17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00737916s Apr 24 13:20:01.502: INFO: Pod "pod-projected-secrets-c101a6b4-0cee-474d-878e-c0ccad059c17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011734053s STEP: Saw pod success Apr 24 13:20:01.502: INFO: Pod "pod-projected-secrets-c101a6b4-0cee-474d-878e-c0ccad059c17" satisfied condition "success or failure" Apr 24 13:20:01.505: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-c101a6b4-0cee-474d-878e-c0ccad059c17 container projected-secret-volume-test: STEP: delete the pod Apr 24 13:20:01.526: INFO: Waiting for pod pod-projected-secrets-c101a6b4-0cee-474d-878e-c0ccad059c17 to disappear Apr 24 13:20:01.537: INFO: Pod pod-projected-secrets-c101a6b4-0cee-474d-878e-c0ccad059c17 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:20:01.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-220" for this suite. 
Apr 24 13:20:07.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:20:07.623: INFO: namespace projected-220 deletion completed in 6.082010142s • [SLOW TEST:10.233 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:20:07.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 24 13:20:07.713: INFO: Waiting up to 5m0s for pod "pod-b3aa52e2-a0a6-44cc-bcc1-aaa08c414324" in namespace "emptydir-3614" to be "success or failure" Apr 24 13:20:07.722: INFO: Pod "pod-b3aa52e2-a0a6-44cc-bcc1-aaa08c414324": Phase="Pending", Reason="", readiness=false. Elapsed: 9.24195ms Apr 24 13:20:09.728: INFO: Pod "pod-b3aa52e2-a0a6-44cc-bcc1-aaa08c414324": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015285597s Apr 24 13:20:11.746: INFO: Pod "pod-b3aa52e2-a0a6-44cc-bcc1-aaa08c414324": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.03326296s STEP: Saw pod success Apr 24 13:20:11.746: INFO: Pod "pod-b3aa52e2-a0a6-44cc-bcc1-aaa08c414324" satisfied condition "success or failure" Apr 24 13:20:11.749: INFO: Trying to get logs from node iruya-worker pod pod-b3aa52e2-a0a6-44cc-bcc1-aaa08c414324 container test-container: STEP: delete the pod Apr 24 13:20:11.766: INFO: Waiting for pod pod-b3aa52e2-a0a6-44cc-bcc1-aaa08c414324 to disappear Apr 24 13:20:11.776: INFO: Pod pod-b3aa52e2-a0a6-44cc-bcc1-aaa08c414324 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:20:11.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3614" for this suite. Apr 24 13:20:17.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:20:17.891: INFO: namespace emptydir-3614 deletion completed in 6.102073618s • [SLOW TEST:10.268 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:20:17.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 24 13:20:17.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-329' Apr 24 13:20:18.213: INFO: stderr: "" Apr 24 13:20:18.213: INFO: stdout: "replicationcontroller/redis-master created\n" Apr 24 13:20:18.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-329' Apr 24 13:20:18.524: INFO: stderr: "" Apr 24 13:20:18.524: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Apr 24 13:20:19.549: INFO: Selector matched 1 pods for map[app:redis] Apr 24 13:20:19.549: INFO: Found 0 / 1 Apr 24 13:20:20.528: INFO: Selector matched 1 pods for map[app:redis] Apr 24 13:20:20.528: INFO: Found 0 / 1 Apr 24 13:20:21.529: INFO: Selector matched 1 pods for map[app:redis] Apr 24 13:20:21.529: INFO: Found 0 / 1 Apr 24 13:20:22.529: INFO: Selector matched 1 pods for map[app:redis] Apr 24 13:20:22.529: INFO: Found 1 / 1 Apr 24 13:20:22.529: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 24 13:20:22.533: INFO: Selector matched 1 pods for map[app:redis] Apr 24 13:20:22.533: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Apr 24 13:20:22.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-bcsw2 --namespace=kubectl-329' Apr 24 13:20:22.630: INFO: stderr: "" Apr 24 13:20:22.630: INFO: stdout: "Name: redis-master-bcsw2\nNamespace: kubectl-329\nPriority: 0\nNode: iruya-worker2/172.17.0.5\nStart Time: Fri, 24 Apr 2020 13:20:18 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.62\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://da06ae2a5ee077ac6193c71462a706bc0b315ad3f1f7a7cc964bc57ec3b3e38f\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 24 Apr 2020 13:20:20 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-g9sql (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-g9sql:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-g9sql\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-329/redis-master-bcsw2 to iruya-worker2\n Normal Pulled 3s kubelet, iruya-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-worker2 Created container redis-master\n Normal Started 2s kubelet, iruya-worker2 Started container redis-master\n" Apr 24 13:20:22.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc 
redis-master --namespace=kubectl-329' Apr 24 13:20:22.745: INFO: stderr: "" Apr 24 13:20:22.745: INFO: stdout: "Name: redis-master\nNamespace: kubectl-329\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: redis-master-bcsw2\n" Apr 24 13:20:22.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-329' Apr 24 13:20:22.861: INFO: stderr: "" Apr 24 13:20:22.861: INFO: stdout: "Name: redis-master\nNamespace: kubectl-329\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.102.39.197\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.62:6379\nSession Affinity: None\nEvents: \n" Apr 24 13:20:22.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' Apr 24 13:20:22.999: INFO: stderr: "" Apr 24 13:20:22.999: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime 
LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 24 Apr 2020 13:19:59 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 24 Apr 2020 13:19:59 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 24 Apr 2020 13:19:59 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 24 Apr 2020 13:19:59 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 39d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 39d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 39d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 39d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 39d\n kube-system 
kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 39d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 39d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Apr 24 13:20:22.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-329' Apr 24 13:20:23.102: INFO: stderr: "" Apr 24 13:20:23.102: INFO: stdout: "Name: kubectl-329\nLabels: e2e-framework=kubectl\n e2e-run=cf7469f2-0c4a-44d6-b0e9-909d5672bbe4\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:20:23.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-329" for this suite. 
Apr 24 13:20:45.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:20:45.207: INFO: namespace kubectl-329 deletion completed in 22.101623971s • [SLOW TEST:27.315 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:20:45.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:20:49.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6681" for this suite. 
Apr 24 13:21:27.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:21:27.389: INFO: namespace kubelet-test-6681 deletion completed in 38.088414447s
• [SLOW TEST:42.182 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox Pod with hostAliases
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:21:27.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-1d11c6ed-4743-4352-bff0-bc331137cb4a
STEP: Creating a pod to test consume configMaps
Apr 24 13:21:27.457: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ae3fb45e-1c29-4528-8bc4-d7ba04c118fa" in namespace "projected-9170" to be "success or failure"
Apr 24 13:21:27.496: INFO: Pod "pod-projected-configmaps-ae3fb45e-1c29-4528-8bc4-d7ba04c118fa": Phase="Pending", Reason="", readiness=false. Elapsed: 39.650291ms
Apr 24 13:21:29.501: INFO: Pod "pod-projected-configmaps-ae3fb45e-1c29-4528-8bc4-d7ba04c118fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044406496s
Apr 24 13:21:31.506: INFO: Pod "pod-projected-configmaps-ae3fb45e-1c29-4528-8bc4-d7ba04c118fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049291161s
STEP: Saw pod success
Apr 24 13:21:31.506: INFO: Pod "pod-projected-configmaps-ae3fb45e-1c29-4528-8bc4-d7ba04c118fa" satisfied condition "success or failure"
Apr 24 13:21:31.509: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-ae3fb45e-1c29-4528-8bc4-d7ba04c118fa container projected-configmap-volume-test:
STEP: delete the pod
Apr 24 13:21:31.570: INFO: Waiting for pod pod-projected-configmaps-ae3fb45e-1c29-4528-8bc4-d7ba04c118fa to disappear
Apr 24 13:21:31.587: INFO: Pod pod-projected-configmaps-ae3fb45e-1c29-4528-8bc4-d7ba04c118fa no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:21:31.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9170" for this suite.
Apr 24 13:21:37.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:21:37.728: INFO: namespace projected-9170 deletion completed in 6.136442138s
• [SLOW TEST:10.338 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:21:37.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-912588be-4404-4948-8d3a-1e9319b67654
STEP: Creating a pod to test consume configMaps
Apr 24 13:21:37.816: INFO: Waiting up to 5m0s for pod "pod-configmaps-5f69b8e5-3a3c-4c5b-b620-4449da2bda4b" in namespace "configmap-9855" to be "success or failure"
Apr 24 13:21:37.826: INFO: Pod "pod-configmaps-5f69b8e5-3a3c-4c5b-b620-4449da2bda4b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.945255ms
Apr 24 13:21:39.830: INFO: Pod "pod-configmaps-5f69b8e5-3a3c-4c5b-b620-4449da2bda4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013546144s
Apr 24 13:21:41.833: INFO: Pod "pod-configmaps-5f69b8e5-3a3c-4c5b-b620-4449da2bda4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017062076s
STEP: Saw pod success
Apr 24 13:21:41.833: INFO: Pod "pod-configmaps-5f69b8e5-3a3c-4c5b-b620-4449da2bda4b" satisfied condition "success or failure"
Apr 24 13:21:41.836: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-5f69b8e5-3a3c-4c5b-b620-4449da2bda4b container configmap-volume-test:
STEP: delete the pod
Apr 24 13:21:41.892: INFO: Waiting for pod pod-configmaps-5f69b8e5-3a3c-4c5b-b620-4449da2bda4b to disappear
Apr 24 13:21:41.921: INFO: Pod pod-configmaps-5f69b8e5-3a3c-4c5b-b620-4449da2bda4b no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:21:41.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9855" for this suite.
Apr 24 13:21:47.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:21:48.072: INFO: namespace configmap-9855 deletion completed in 6.146273444s
• [SLOW TEST:10.343 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:21:48.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 24 13:21:48.146: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3324f164-1bab-457e-83ea-284ae9b63d6c" in namespace "downward-api-2492" to be "success or failure"
Apr 24 13:21:48.156: INFO: Pod "downwardapi-volume-3324f164-1bab-457e-83ea-284ae9b63d6c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.334623ms
Apr 24 13:21:50.160: INFO: Pod "downwardapi-volume-3324f164-1bab-457e-83ea-284ae9b63d6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014282383s
Apr 24 13:21:52.164: INFO: Pod "downwardapi-volume-3324f164-1bab-457e-83ea-284ae9b63d6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018100187s
STEP: Saw pod success
Apr 24 13:21:52.164: INFO: Pod "downwardapi-volume-3324f164-1bab-457e-83ea-284ae9b63d6c" satisfied condition "success or failure"
Apr 24 13:21:52.167: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-3324f164-1bab-457e-83ea-284ae9b63d6c container client-container:
STEP: delete the pod
Apr 24 13:21:52.192: INFO: Waiting for pod downwardapi-volume-3324f164-1bab-457e-83ea-284ae9b63d6c to disappear
Apr 24 13:21:52.199: INFO: Pod downwardapi-volume-3324f164-1bab-457e-83ea-284ae9b63d6c no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:21:52.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2492" for this suite.
Apr 24 13:21:58.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:21:58.329: INFO: namespace downward-api-2492 deletion completed in 6.126327809s
• [SLOW TEST:10.257 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:21:58.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object,
basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 24 13:21:58.394: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 4.335412ms)
Apr 24 13:21:58.397: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.279663ms)
Apr 24 13:21:58.400: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.902719ms)
Apr 24 13:21:58.403: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.705728ms)
Apr 24 13:21:58.405: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.759029ms)
Apr 24 13:21:58.408: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.914175ms)
Apr 24 13:21:58.411: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.944908ms)
Apr 24 13:21:58.414: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.685086ms)
Apr 24 13:21:58.417: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.061826ms)
Apr 24 13:21:58.420: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.827476ms)
Apr 24 13:21:58.423: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.802429ms)
Apr 24 13:21:58.426: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.870886ms)
Apr 24 13:21:58.428: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.73689ms)
Apr 24 13:21:58.431: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.945729ms)
Apr 24 13:21:58.434: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.016913ms)
Apr 24 13:21:58.438: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.865722ms)
Apr 24 13:21:58.442: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.273373ms)
Apr 24 13:21:58.445: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.76927ms)
Apr 24 13:21:58.449: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.588389ms)
Apr 24 13:21:58.452: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.229467ms)
[AfterEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:21:58.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-3278" for this suite.
Apr 24 13:22:04.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:22:04.546: INFO: namespace proxy-3278 deletion completed in 6.0896414s
• [SLOW TEST:6.217 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:22:04.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 24 13:22:04.614: INFO: Waiting up to 5m0s
for pod "downwardapi-volume-348efb6d-06cb-4cf8-a479-062a30bdc00d" in namespace "projected-3673" to be "success or failure"
Apr 24 13:22:04.618: INFO: Pod "downwardapi-volume-348efb6d-06cb-4cf8-a479-062a30bdc00d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.658563ms
Apr 24 13:22:06.622: INFO: Pod "downwardapi-volume-348efb6d-06cb-4cf8-a479-062a30bdc00d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007806387s
Apr 24 13:22:08.626: INFO: Pod "downwardapi-volume-348efb6d-06cb-4cf8-a479-062a30bdc00d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012466994s
STEP: Saw pod success
Apr 24 13:22:08.626: INFO: Pod "downwardapi-volume-348efb6d-06cb-4cf8-a479-062a30bdc00d" satisfied condition "success or failure"
Apr 24 13:22:08.630: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-348efb6d-06cb-4cf8-a479-062a30bdc00d container client-container:
STEP: delete the pod
Apr 24 13:22:08.649: INFO: Waiting for pod downwardapi-volume-348efb6d-06cb-4cf8-a479-062a30bdc00d to disappear
Apr 24 13:22:08.653: INFO: Pod downwardapi-volume-348efb6d-06cb-4cf8-a479-062a30bdc00d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:22:08.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3673" for this suite.
Apr 24 13:22:14.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:22:14.748: INFO: namespace projected-3673 deletion completed in 6.092234226s
• [SLOW TEST:10.202 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:22:14.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 24 13:22:14.835: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ac15c70f-58dd-461c-8472-0a1addfdf170" in namespace "downward-api-1721" to be "success or failure"
Apr 24 13:22:14.875: INFO: Pod "downwardapi-volume-ac15c70f-58dd-461c-8472-0a1addfdf170": Phase="Pending", Reason="", readiness=false. Elapsed: 39.651412ms
Apr 24 13:22:16.916: INFO: Pod "downwardapi-volume-ac15c70f-58dd-461c-8472-0a1addfdf170": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081106833s
Apr 24 13:22:19.611: INFO: Pod "downwardapi-volume-ac15c70f-58dd-461c-8472-0a1addfdf170": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.775680285s
STEP: Saw pod success
Apr 24 13:22:19.611: INFO: Pod "downwardapi-volume-ac15c70f-58dd-461c-8472-0a1addfdf170" satisfied condition "success or failure"
Apr 24 13:22:19.614: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-ac15c70f-58dd-461c-8472-0a1addfdf170 container client-container:
STEP: delete the pod
Apr 24 13:22:19.644: INFO: Waiting for pod downwardapi-volume-ac15c70f-58dd-461c-8472-0a1addfdf170 to disappear
Apr 24 13:22:19.654: INFO: Pod downwardapi-volume-ac15c70f-58dd-461c-8472-0a1addfdf170 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:22:19.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1721" for this suite.
Apr 24 13:22:25.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:22:25.767: INFO: namespace downward-api-1721 deletion completed in 6.109027502s
• [SLOW TEST:11.018 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:22:25.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 24 13:22:25.842: INFO: Waiting up to 5m0s for pod "downwardapi-volume-be524130-62d8-4aa5-bb51-85b1a322ec87" in namespace "downward-api-2586" to be "success or failure"
Apr 24 13:22:25.876: INFO: Pod "downwardapi-volume-be524130-62d8-4aa5-bb51-85b1a322ec87": Phase="Pending", Reason="", readiness=false. Elapsed: 33.786663ms
Apr 24 13:22:27.879: INFO: Pod "downwardapi-volume-be524130-62d8-4aa5-bb51-85b1a322ec87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037110139s
Apr 24 13:22:29.882: INFO: Pod "downwardapi-volume-be524130-62d8-4aa5-bb51-85b1a322ec87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040658421s
STEP: Saw pod success
Apr 24 13:22:29.883: INFO: Pod "downwardapi-volume-be524130-62d8-4aa5-bb51-85b1a322ec87" satisfied condition "success or failure"
Apr 24 13:22:29.885: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-be524130-62d8-4aa5-bb51-85b1a322ec87 container client-container:
STEP: delete the pod
Apr 24 13:22:30.008: INFO: Waiting for pod downwardapi-volume-be524130-62d8-4aa5-bb51-85b1a322ec87 to disappear
Apr 24 13:22:30.019: INFO: Pod downwardapi-volume-be524130-62d8-4aa5-bb51-85b1a322ec87 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:22:30.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2586" for this suite.
Apr 24 13:22:36.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:22:36.106: INFO: namespace downward-api-2586 deletion completed in 6.082237981s
• [SLOW TEST:10.338 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:22:36.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Apr 24 13:22:36.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6345'
Apr 24 13:22:38.691: INFO: stderr: ""
Apr 24 13:22:38.691: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 24 13:22:38.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6345'
Apr 24 13:22:38.843: INFO: stderr: ""
Apr 24 13:22:38.843: INFO: stdout: "update-demo-nautilus-4g7pj update-demo-nautilus-58pb8 "
Apr 24 13:22:38.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4g7pj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6345'
Apr 24 13:22:38.943: INFO: stderr: ""
Apr 24 13:22:38.943: INFO: stdout: ""
Apr 24 13:22:38.943: INFO: update-demo-nautilus-4g7pj is created but not running
Apr 24 13:22:43.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6345'
Apr 24 13:22:44.057: INFO: stderr: ""
Apr 24 13:22:44.057: INFO: stdout: "update-demo-nautilus-4g7pj update-demo-nautilus-58pb8 "
Apr 24 13:22:44.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4g7pj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6345'
Apr 24 13:22:44.148: INFO: stderr: ""
Apr 24 13:22:44.148: INFO: stdout: "true"
Apr 24 13:22:44.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4g7pj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6345'
Apr 24 13:22:44.240: INFO: stderr: ""
Apr 24 13:22:44.240: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 24 13:22:44.240: INFO: validating pod update-demo-nautilus-4g7pj
Apr 24 13:22:44.244: INFO: got data: { "image": "nautilus.jpg" }
Apr 24 13:22:44.244: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 24 13:22:44.244: INFO: update-demo-nautilus-4g7pj is verified up and running
Apr 24 13:22:44.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-58pb8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6345'
Apr 24 13:22:44.343: INFO: stderr: ""
Apr 24 13:22:44.343: INFO: stdout: "true"
Apr 24 13:22:44.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-58pb8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6345'
Apr 24 13:22:44.438: INFO: stderr: ""
Apr 24 13:22:44.438: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 24 13:22:44.438: INFO: validating pod update-demo-nautilus-58pb8
Apr 24 13:22:44.442: INFO: got data: { "image": "nautilus.jpg" }
Apr 24 13:22:44.442: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 24 13:22:44.442: INFO: update-demo-nautilus-58pb8 is verified up and running
STEP: rolling-update to new replication controller
Apr 24 13:22:44.445: INFO: scanned /root for discovery docs:
Apr 24 13:22:44.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-6345'
Apr 24 13:23:07.032: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Apr 24 13:23:07.032: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 24 13:23:07.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6345'
Apr 24 13:23:07.129: INFO: stderr: ""
Apr 24 13:23:07.129: INFO: stdout: "update-demo-kitten-p2bkl update-demo-kitten-pbjkb "
Apr 24 13:23:07.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-p2bkl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6345'
Apr 24 13:23:07.228: INFO: stderr: ""
Apr 24 13:23:07.228: INFO: stdout: "true"
Apr 24 13:23:07.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-p2bkl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6345'
Apr 24 13:23:07.323: INFO: stderr: ""
Apr 24 13:23:07.323: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Apr 24 13:23:07.323: INFO: validating pod update-demo-kitten-p2bkl
Apr 24 13:23:07.327: INFO: got data: { "image": "kitten.jpg" }
Apr 24 13:23:07.327: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Apr 24 13:23:07.327: INFO: update-demo-kitten-p2bkl is verified up and running
Apr 24 13:23:07.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-pbjkb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6345'
Apr 24 13:23:07.420: INFO: stderr: ""
Apr 24 13:23:07.420: INFO: stdout: "true"
Apr 24 13:23:07.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-pbjkb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6345'
Apr 24 13:23:07.504: INFO: stderr: ""
Apr 24 13:23:07.504: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Apr 24 13:23:07.504: INFO: validating pod update-demo-kitten-pbjkb
Apr 24 13:23:07.507: INFO: got data: { "image": "kitten.jpg" }
Apr 24 13:23:07.507: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Apr 24 13:23:07.507: INFO: update-demo-kitten-pbjkb is verified up and running
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:23:07.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6345" for this suite.
Apr 24 13:23:31.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:23:31.616: INFO: namespace kubectl-6345 deletion completed in 24.105768875s
• [SLOW TEST:55.511 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:23:31.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:23:31.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2227" for this suite.
Apr 24 13:23:53.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:23:53.975: INFO: namespace pods-2227 deletion completed in 22.247148503s
• [SLOW TEST:22.358 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
[k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:23:53.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:23:58.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6772" for this suite.
Apr 24 13:24:04.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:24:04.161: INFO: namespace kubelet-test-6772 deletion completed in 6.086926568s
• [SLOW TEST:10.186 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:24:04.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Apr 24 13:24:04.283: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-2380,SelfLink:/api/v1/namespaces/watch-2380/configmaps/e2e-watch-test-resource-version,UID:da401193-6a98-44c4-81c1-bf462909a73d,ResourceVersion:7180292,Generation:0,CreationTimestamp:2020-04-24 13:24:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 24 13:24:04.283: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-2380,SelfLink:/api/v1/namespaces/watch-2380/configmaps/e2e-watch-test-resource-version,UID:da401193-6a98-44c4-81c1-bf462909a73d,ResourceVersion:7180293,Generation:0,CreationTimestamp:2020-04-24 13:24:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:24:04.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2380" for this suite.
Apr 24 13:24:10.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:24:10.396: INFO: namespace watch-2380 deletion completed in 6.109911948s • [SLOW TEST:6.235 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:24:10.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 24 13:24:10.478: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d92d2305-40b7-4b39-b0dd-6741025669a8" in namespace "projected-4739" to be "success or failure" Apr 24 13:24:10.482: INFO: Pod "downwardapi-volume-d92d2305-40b7-4b39-b0dd-6741025669a8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.914073ms Apr 24 13:24:12.486: INFO: Pod "downwardapi-volume-d92d2305-40b7-4b39-b0dd-6741025669a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007889182s Apr 24 13:24:14.490: INFO: Pod "downwardapi-volume-d92d2305-40b7-4b39-b0dd-6741025669a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01188952s STEP: Saw pod success Apr 24 13:24:14.490: INFO: Pod "downwardapi-volume-d92d2305-40b7-4b39-b0dd-6741025669a8" satisfied condition "success or failure" Apr 24 13:24:14.493: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-d92d2305-40b7-4b39-b0dd-6741025669a8 container client-container: STEP: delete the pod Apr 24 13:24:14.513: INFO: Waiting for pod downwardapi-volume-d92d2305-40b7-4b39-b0dd-6741025669a8 to disappear Apr 24 13:24:14.518: INFO: Pod downwardapi-volume-d92d2305-40b7-4b39-b0dd-6741025669a8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:24:14.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4739" for this suite. 
Apr 24 13:24:20.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:24:20.625: INFO: namespace projected-4739 deletion completed in 6.104895711s • [SLOW TEST:10.228 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:24:20.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 24 13:24:24.769: INFO: Waiting up to 5m0s for pod "client-envvars-d1bd6625-60f5-420e-bee9-652f14ba4e08" in namespace "pods-5194" to be "success or failure" Apr 24 13:24:24.773: INFO: Pod "client-envvars-d1bd6625-60f5-420e-bee9-652f14ba4e08": Phase="Pending", Reason="", readiness=false. Elapsed: 3.987759ms Apr 24 13:24:26.777: INFO: Pod "client-envvars-d1bd6625-60f5-420e-bee9-652f14ba4e08": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008236142s Apr 24 13:24:28.782: INFO: Pod "client-envvars-d1bd6625-60f5-420e-bee9-652f14ba4e08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01321916s STEP: Saw pod success Apr 24 13:24:28.782: INFO: Pod "client-envvars-d1bd6625-60f5-420e-bee9-652f14ba4e08" satisfied condition "success or failure" Apr 24 13:24:28.785: INFO: Trying to get logs from node iruya-worker2 pod client-envvars-d1bd6625-60f5-420e-bee9-652f14ba4e08 container env3cont: STEP: delete the pod Apr 24 13:24:28.802: INFO: Waiting for pod client-envvars-d1bd6625-60f5-420e-bee9-652f14ba4e08 to disappear Apr 24 13:24:28.806: INFO: Pod client-envvars-d1bd6625-60f5-420e-bee9-652f14ba4e08 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:24:28.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5194" for this suite. Apr 24 13:25:16.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:25:16.917: INFO: namespace pods-5194 deletion completed in 48.107073743s • [SLOW TEST:56.291 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:25:16.917: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 24 13:25:27.011: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 24 13:25:27.016: INFO: Pod pod-with-prestop-http-hook still exists Apr 24 13:25:29.017: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 24 13:25:29.020: INFO: Pod pod-with-prestop-http-hook still exists Apr 24 13:25:31.017: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 24 13:25:31.021: INFO: Pod pod-with-prestop-http-hook still exists Apr 24 13:25:33.017: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 24 13:25:33.020: INFO: Pod pod-with-prestop-http-hook still exists Apr 24 13:25:35.017: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 24 13:25:35.021: INFO: Pod pod-with-prestop-http-hook still exists Apr 24 13:25:37.017: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 24 13:25:37.020: INFO: Pod pod-with-prestop-http-hook still exists Apr 24 13:25:39.017: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 24 13:25:39.021: INFO: Pod pod-with-prestop-http-hook still exists Apr 24 13:25:41.017: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 24 13:25:41.021: INFO: Pod pod-with-prestop-http-hook still exists Apr 24 13:25:43.017: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 24 13:25:43.021: INFO: Pod 
pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:25:43.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9250" for this suite. Apr 24 13:26:05.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:26:05.124: INFO: namespace container-lifecycle-hook-9250 deletion completed in 22.089931295s • [SLOW TEST:48.207 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:26:05.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 24 13:26:05.157: INFO: Creating ReplicaSet my-hostname-basic-427c0de2-19cf-449e-961f-5789fccad3a9 Apr 24 
13:26:05.200: INFO: Pod name my-hostname-basic-427c0de2-19cf-449e-961f-5789fccad3a9: Found 0 pods out of 1 Apr 24 13:26:10.205: INFO: Pod name my-hostname-basic-427c0de2-19cf-449e-961f-5789fccad3a9: Found 1 pods out of 1 Apr 24 13:26:10.206: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-427c0de2-19cf-449e-961f-5789fccad3a9" is running Apr 24 13:26:10.209: INFO: Pod "my-hostname-basic-427c0de2-19cf-449e-961f-5789fccad3a9-2c7hl" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-24 13:26:05 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-24 13:26:08 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-24 13:26:08 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-24 13:26:05 +0000 UTC Reason: Message:}]) Apr 24 13:26:10.209: INFO: Trying to dial the pod Apr 24 13:26:15.223: INFO: Controller my-hostname-basic-427c0de2-19cf-449e-961f-5789fccad3a9: Got expected result from replica 1 [my-hostname-basic-427c0de2-19cf-449e-961f-5789fccad3a9-2c7hl]: "my-hostname-basic-427c0de2-19cf-449e-961f-5789fccad3a9-2c7hl", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:26:15.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1276" for this suite. 
Apr 24 13:26:21.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:26:21.329: INFO: namespace replicaset-1276 deletion completed in 6.103241625s • [SLOW TEST:16.204 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:26:21.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Apr 24 13:26:21.372: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:26:21.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6327" for this suite. 
Apr 24 13:26:27.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:26:27.555: INFO: namespace kubectl-6327 deletion completed in 6.090997903s • [SLOW TEST:6.226 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:26:27.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-df9f4828-3f42-45ea-aa25-7ca9b21241bb STEP: Creating a pod to test consume secrets Apr 24 13:26:27.652: INFO: Waiting up to 5m0s for pod "pod-secrets-a6eabfc1-83e5-4fce-817c-d2b7c792ab25" in namespace "secrets-200" to be "success or failure" Apr 24 13:26:27.665: INFO: Pod "pod-secrets-a6eabfc1-83e5-4fce-817c-d2b7c792ab25": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.912235ms Apr 24 13:26:29.668: INFO: Pod "pod-secrets-a6eabfc1-83e5-4fce-817c-d2b7c792ab25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016402375s Apr 24 13:26:31.673: INFO: Pod "pod-secrets-a6eabfc1-83e5-4fce-817c-d2b7c792ab25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020863882s STEP: Saw pod success Apr 24 13:26:31.673: INFO: Pod "pod-secrets-a6eabfc1-83e5-4fce-817c-d2b7c792ab25" satisfied condition "success or failure" Apr 24 13:26:31.675: INFO: Trying to get logs from node iruya-worker pod pod-secrets-a6eabfc1-83e5-4fce-817c-d2b7c792ab25 container secret-volume-test: STEP: delete the pod Apr 24 13:26:31.711: INFO: Waiting for pod pod-secrets-a6eabfc1-83e5-4fce-817c-d2b7c792ab25 to disappear Apr 24 13:26:31.730: INFO: Pod pod-secrets-a6eabfc1-83e5-4fce-817c-d2b7c792ab25 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:26:31.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-200" for this suite. 
Apr 24 13:26:37.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:26:37.822: INFO: namespace secrets-200 deletion completed in 6.088138204s • [SLOW TEST:10.266 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:26:37.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-a940c3a2-ed7c-4a54-a7bc-4dcb7433018d in namespace container-probe-1983 Apr 24 13:26:41.915: INFO: Started pod busybox-a940c3a2-ed7c-4a54-a7bc-4dcb7433018d in namespace container-probe-1983 STEP: checking the pod's current state and verifying that restartCount is present Apr 24 13:26:41.918: INFO: Initial restart count of pod busybox-a940c3a2-ed7c-4a54-a7bc-4dcb7433018d is 0 Apr 24 13:27:36.035: INFO: Restart count of pod 
container-probe-1983/busybox-a940c3a2-ed7c-4a54-a7bc-4dcb7433018d is now 1 (54.116868359s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:27:36.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1983" for this suite. Apr 24 13:27:42.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:27:42.149: INFO: namespace container-probe-1983 deletion completed in 6.099053742s • [SLOW TEST:64.327 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:27:42.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 24 13:27:42.239: INFO: Creating deployment "test-recreate-deployment" Apr 24 13:27:42.243: 
INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Apr 24 13:27:42.272: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Apr 24 13:27:44.280: INFO: Waiting deployment "test-recreate-deployment" to complete Apr 24 13:27:44.283: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723331662, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723331662, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723331662, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723331662, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 24 13:27:46.310: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Apr 24 13:27:46.316: INFO: Updating deployment test-recreate-deployment Apr 24 13:27:46.316: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 24 13:27:46.503: INFO: Deployment "test-recreate-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-5787,SelfLink:/apis/apps/v1/namespaces/deployment-5787/deployments/test-recreate-deployment,UID:6db8dc5e-d8d3-4756-a199-014af86942b6,ResourceVersion:7180980,Generation:2,CreationTimestamp:2020-04-24 13:27:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-04-24 13:27:46 +0000 UTC 2020-04-24 13:27:46 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-04-24 13:27:46 +0000 UTC 2020-04-24 13:27:42 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Apr 24 13:27:46.521: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-5787,SelfLink:/apis/apps/v1/namespaces/deployment-5787/replicasets/test-recreate-deployment-5c8c9cc69d,UID:5063f249-d9a9-4322-8861-02fba9aff67c,ResourceVersion:7180978,Generation:1,CreationTimestamp:2020-04-24 13:27:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 6db8dc5e-d8d3-4756-a199-014af86942b6 0xc0031070b7 0xc0031070b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 24 13:27:46.521: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 24 13:27:46.521: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-5787,SelfLink:/apis/apps/v1/namespaces/deployment-5787/replicasets/test-recreate-deployment-6df85df6b9,UID:7c10c2c5-5dfc-486e-ab6e-ae65c9f6873c,ResourceVersion:7180969,Generation:2,CreationTimestamp:2020-04-24 13:27:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 6db8dc5e-d8d3-4756-a199-014af86942b6 0xc003107197 0xc003107198}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 24 13:27:46.524: INFO: Pod "test-recreate-deployment-5c8c9cc69d-ntrvq" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-ntrvq,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-5787,SelfLink:/api/v1/namespaces/deployment-5787/pods/test-recreate-deployment-5c8c9cc69d-ntrvq,UID:af439a88-73ed-4eb8-a947-69f0557e3d2e,ResourceVersion:7180981,Generation:0,CreationTimestamp:2020-04-24 13:27:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 5063f249-d9a9-4322-8861-02fba9aff67c 0xc003107a47 0xc003107a48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4nrmk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4nrmk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4nrmk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003107ac0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003107ae0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:27:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:27:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:27:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:27:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-24 13:27:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:27:46.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5787" for this suite. 
Apr 24 13:27:52.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:27:52.621: INFO: namespace deployment-5787 deletion completed in 6.094050181s • [SLOW TEST:10.471 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:27:52.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:27:56.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9190" for this suite. 
Apr 24 13:28:02.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:28:02.917: INFO: namespace emptydir-wrapper-9190 deletion completed in 6.117715957s • [SLOW TEST:10.295 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:28:02.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-8388 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-8388 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8388 Apr 24 13:28:03.038: INFO: Found 0 stateful pods, waiting for 1 Apr 24 13:28:13.043: INFO: 
Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 24 13:28:13.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8388 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 24 13:28:13.320: INFO: stderr: "I0424 13:28:13.195952 975 log.go:172] (0xc000ab0420) (0xc000336820) Create stream\nI0424 13:28:13.196020 975 log.go:172] (0xc000ab0420) (0xc000336820) Stream added, broadcasting: 1\nI0424 13:28:13.199095 975 log.go:172] (0xc000ab0420) Reply frame received for 1\nI0424 13:28:13.199151 975 log.go:172] (0xc000ab0420) (0xc000a26000) Create stream\nI0424 13:28:13.199178 975 log.go:172] (0xc000ab0420) (0xc000a26000) Stream added, broadcasting: 3\nI0424 13:28:13.200721 975 log.go:172] (0xc000ab0420) Reply frame received for 3\nI0424 13:28:13.201047 975 log.go:172] (0xc000ab0420) (0xc000336000) Create stream\nI0424 13:28:13.201076 975 log.go:172] (0xc000ab0420) (0xc000336000) Stream added, broadcasting: 5\nI0424 13:28:13.202320 975 log.go:172] (0xc000ab0420) Reply frame received for 5\nI0424 13:28:13.282272 975 log.go:172] (0xc000ab0420) Data frame received for 5\nI0424 13:28:13.282307 975 log.go:172] (0xc000336000) (5) Data frame handling\nI0424 13:28:13.282328 975 log.go:172] (0xc000336000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0424 13:28:13.311475 975 log.go:172] (0xc000ab0420) Data frame received for 3\nI0424 13:28:13.311509 975 log.go:172] (0xc000a26000) (3) Data frame handling\nI0424 13:28:13.311544 975 log.go:172] (0xc000a26000) (3) Data frame sent\nI0424 13:28:13.311572 975 log.go:172] (0xc000ab0420) Data frame received for 3\nI0424 13:28:13.311589 975 log.go:172] (0xc000a26000) (3) Data frame handling\nI0424 13:28:13.311622 975 log.go:172] (0xc000ab0420) Data frame received for 5\nI0424 13:28:13.311668 975 log.go:172] (0xc000336000) 
(5) Data frame handling\nI0424 13:28:13.314023 975 log.go:172] (0xc000ab0420) Data frame received for 1\nI0424 13:28:13.314045 975 log.go:172] (0xc000336820) (1) Data frame handling\nI0424 13:28:13.314056 975 log.go:172] (0xc000336820) (1) Data frame sent\nI0424 13:28:13.314068 975 log.go:172] (0xc000ab0420) (0xc000336820) Stream removed, broadcasting: 1\nI0424 13:28:13.314082 975 log.go:172] (0xc000ab0420) Go away received\nI0424 13:28:13.314508 975 log.go:172] (0xc000ab0420) (0xc000336820) Stream removed, broadcasting: 1\nI0424 13:28:13.314535 975 log.go:172] (0xc000ab0420) (0xc000a26000) Stream removed, broadcasting: 3\nI0424 13:28:13.314554 975 log.go:172] (0xc000ab0420) (0xc000336000) Stream removed, broadcasting: 5\n" Apr 24 13:28:13.320: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 24 13:28:13.320: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 24 13:28:13.324: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 24 13:28:23.329: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 24 13:28:23.329: INFO: Waiting for statefulset status.replicas updated to 0 Apr 24 13:28:23.358: INFO: POD NODE PHASE GRACE CONDITIONS Apr 24 13:28:23.358: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:03 +0000 UTC }] Apr 24 13:28:23.358: INFO: Apr 24 13:28:23.358: INFO: StatefulSet ss has not reached scale 3, at 1 Apr 24 13:28:24.362: INFO: Verifying statefulset ss doesn't 
scale past 3 for another 8.982086769s Apr 24 13:28:25.366: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.977690772s Apr 24 13:28:26.371: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.973592772s Apr 24 13:28:27.375: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.969037624s Apr 24 13:28:28.380: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.965272677s Apr 24 13:28:29.383: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.960307356s Apr 24 13:28:30.387: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.956816701s Apr 24 13:28:31.393: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.952563707s Apr 24 13:28:32.401: INFO: Verifying statefulset ss doesn't scale past 3 for another 946.496875ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8388 Apr 24 13:28:33.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8388 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 24 13:28:33.618: INFO: stderr: "I0424 13:28:33.539040 996 log.go:172] (0xc0009c2420) (0xc0006c86e0) Create stream\nI0424 13:28:33.539105 996 log.go:172] (0xc0009c2420) (0xc0006c86e0) Stream added, broadcasting: 1\nI0424 13:28:33.542573 996 log.go:172] (0xc0009c2420) Reply frame received for 1\nI0424 13:28:33.542640 996 log.go:172] (0xc0009c2420) (0xc0006c8000) Create stream\nI0424 13:28:33.542664 996 log.go:172] (0xc0009c2420) (0xc0006c8000) Stream added, broadcasting: 3\nI0424 13:28:33.543610 996 log.go:172] (0xc0009c2420) Reply frame received for 3\nI0424 13:28:33.543649 996 log.go:172] (0xc0009c2420) (0xc0006c80a0) Create stream\nI0424 13:28:33.543661 996 log.go:172] (0xc0009c2420) (0xc0006c80a0) Stream added, broadcasting: 5\nI0424 13:28:33.544563 996 log.go:172] (0xc0009c2420) Reply frame received for 5\nI0424 13:28:33.609979 996 
log.go:172] (0xc0009c2420) Data frame received for 5\nI0424 13:28:33.610014 996 log.go:172] (0xc0006c80a0) (5) Data frame handling\nI0424 13:28:33.610025 996 log.go:172] (0xc0006c80a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0424 13:28:33.610038 996 log.go:172] (0xc0009c2420) Data frame received for 3\nI0424 13:28:33.610043 996 log.go:172] (0xc0006c8000) (3) Data frame handling\nI0424 13:28:33.610049 996 log.go:172] (0xc0006c8000) (3) Data frame sent\nI0424 13:28:33.610055 996 log.go:172] (0xc0009c2420) Data frame received for 3\nI0424 13:28:33.610062 996 log.go:172] (0xc0006c8000) (3) Data frame handling\nI0424 13:28:33.610100 996 log.go:172] (0xc0009c2420) Data frame received for 5\nI0424 13:28:33.610124 996 log.go:172] (0xc0006c80a0) (5) Data frame handling\nI0424 13:28:33.611658 996 log.go:172] (0xc0009c2420) Data frame received for 1\nI0424 13:28:33.611685 996 log.go:172] (0xc0006c86e0) (1) Data frame handling\nI0424 13:28:33.611699 996 log.go:172] (0xc0006c86e0) (1) Data frame sent\nI0424 13:28:33.611711 996 log.go:172] (0xc0009c2420) (0xc0006c86e0) Stream removed, broadcasting: 1\nI0424 13:28:33.611721 996 log.go:172] (0xc0009c2420) Go away received\nI0424 13:28:33.612132 996 log.go:172] (0xc0009c2420) (0xc0006c86e0) Stream removed, broadcasting: 1\nI0424 13:28:33.612154 996 log.go:172] (0xc0009c2420) (0xc0006c8000) Stream removed, broadcasting: 3\nI0424 13:28:33.612169 996 log.go:172] (0xc0009c2420) (0xc0006c80a0) Stream removed, broadcasting: 5\n" Apr 24 13:28:33.618: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 24 13:28:33.618: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 24 13:28:33.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8388 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 24 13:28:33.826: INFO: stderr: 
"I0424 13:28:33.740670 1018 log.go:172] (0xc000116840) (0xc0003b06e0) Create stream\nI0424 13:28:33.740722 1018 log.go:172] (0xc000116840) (0xc0003b06e0) Stream added, broadcasting: 1\nI0424 13:28:33.745082 1018 log.go:172] (0xc000116840) Reply frame received for 1\nI0424 13:28:33.745207 1018 log.go:172] (0xc000116840) (0xc0003b0000) Create stream\nI0424 13:28:33.745219 1018 log.go:172] (0xc000116840) (0xc0003b0000) Stream added, broadcasting: 3\nI0424 13:28:33.746310 1018 log.go:172] (0xc000116840) Reply frame received for 3\nI0424 13:28:33.746369 1018 log.go:172] (0xc000116840) (0xc0006a0320) Create stream\nI0424 13:28:33.746386 1018 log.go:172] (0xc000116840) (0xc0006a0320) Stream added, broadcasting: 5\nI0424 13:28:33.747701 1018 log.go:172] (0xc000116840) Reply frame received for 5\nI0424 13:28:33.819228 1018 log.go:172] (0xc000116840) Data frame received for 3\nI0424 13:28:33.819279 1018 log.go:172] (0xc0003b0000) (3) Data frame handling\nI0424 13:28:33.819298 1018 log.go:172] (0xc0003b0000) (3) Data frame sent\nI0424 13:28:33.819336 1018 log.go:172] (0xc000116840) Data frame received for 3\nI0424 13:28:33.819363 1018 log.go:172] (0xc0003b0000) (3) Data frame handling\nI0424 13:28:33.819384 1018 log.go:172] (0xc000116840) Data frame received for 5\nI0424 13:28:33.819406 1018 log.go:172] (0xc0006a0320) (5) Data frame handling\nI0424 13:28:33.819460 1018 log.go:172] (0xc0006a0320) (5) Data frame sent\nI0424 13:28:33.819477 1018 log.go:172] (0xc000116840) Data frame received for 5\nI0424 13:28:33.819514 1018 log.go:172] (0xc0006a0320) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0424 13:28:33.821023 1018 log.go:172] (0xc000116840) Data frame received for 1\nI0424 13:28:33.821061 1018 log.go:172] (0xc0003b06e0) (1) Data frame handling\nI0424 13:28:33.821077 1018 log.go:172] (0xc0003b06e0) (1) Data frame sent\nI0424 13:28:33.821091 1018 log.go:172] (0xc000116840) 
(0xc0003b06e0) Stream removed, broadcasting: 1\nI0424 13:28:33.821107 1018 log.go:172] (0xc000116840) Go away received\nI0424 13:28:33.821630 1018 log.go:172] (0xc000116840) (0xc0003b06e0) Stream removed, broadcasting: 1\nI0424 13:28:33.821660 1018 log.go:172] (0xc000116840) (0xc0003b0000) Stream removed, broadcasting: 3\nI0424 13:28:33.821677 1018 log.go:172] (0xc000116840) (0xc0006a0320) Stream removed, broadcasting: 5\n" Apr 24 13:28:33.827: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 24 13:28:33.827: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 24 13:28:33.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8388 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 24 13:28:34.042: INFO: stderr: "I0424 13:28:33.964166 1038 log.go:172] (0xc00056a0b0) (0xc0007f08c0) Create stream\nI0424 13:28:33.964383 1038 log.go:172] (0xc00056a0b0) (0xc0007f08c0) Stream added, broadcasting: 1\nI0424 13:28:33.967720 1038 log.go:172] (0xc00056a0b0) Reply frame received for 1\nI0424 13:28:33.967768 1038 log.go:172] (0xc00056a0b0) (0xc0007f0960) Create stream\nI0424 13:28:33.967786 1038 log.go:172] (0xc00056a0b0) (0xc0007f0960) Stream added, broadcasting: 3\nI0424 13:28:33.968802 1038 log.go:172] (0xc00056a0b0) Reply frame received for 3\nI0424 13:28:33.968851 1038 log.go:172] (0xc00056a0b0) (0xc000322000) Create stream\nI0424 13:28:33.968877 1038 log.go:172] (0xc00056a0b0) (0xc000322000) Stream added, broadcasting: 5\nI0424 13:28:33.970081 1038 log.go:172] (0xc00056a0b0) Reply frame received for 5\nI0424 13:28:34.034658 1038 log.go:172] (0xc00056a0b0) Data frame received for 3\nI0424 13:28:34.034707 1038 log.go:172] (0xc0007f0960) (3) Data frame handling\nI0424 13:28:34.034722 1038 log.go:172] (0xc0007f0960) (3) Data frame sent\nI0424 13:28:34.034733 1038 log.go:172] 
(0xc00056a0b0) Data frame received for 3\nI0424 13:28:34.034743 1038 log.go:172] (0xc0007f0960) (3) Data frame handling\nI0424 13:28:34.034782 1038 log.go:172] (0xc00056a0b0) Data frame received for 5\nI0424 13:28:34.034801 1038 log.go:172] (0xc000322000) (5) Data frame handling\nI0424 13:28:34.034828 1038 log.go:172] (0xc000322000) (5) Data frame sent\nI0424 13:28:34.034852 1038 log.go:172] (0xc00056a0b0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0424 13:28:34.034875 1038 log.go:172] (0xc000322000) (5) Data frame handling\nI0424 13:28:34.036582 1038 log.go:172] (0xc00056a0b0) Data frame received for 1\nI0424 13:28:34.036613 1038 log.go:172] (0xc0007f08c0) (1) Data frame handling\nI0424 13:28:34.036635 1038 log.go:172] (0xc0007f08c0) (1) Data frame sent\nI0424 13:28:34.036650 1038 log.go:172] (0xc00056a0b0) (0xc0007f08c0) Stream removed, broadcasting: 1\nI0424 13:28:34.036842 1038 log.go:172] (0xc00056a0b0) Go away received\nI0424 13:28:34.036941 1038 log.go:172] (0xc00056a0b0) (0xc0007f08c0) Stream removed, broadcasting: 1\nI0424 13:28:34.036960 1038 log.go:172] (0xc00056a0b0) (0xc0007f0960) Stream removed, broadcasting: 3\nI0424 13:28:34.036971 1038 log.go:172] (0xc00056a0b0) (0xc000322000) Stream removed, broadcasting: 5\n" Apr 24 13:28:34.042: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 24 13:28:34.042: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 24 13:28:34.046: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Apr 24 13:28:44.050: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 24 13:28:44.050: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 24 13:28:44.050: INFO: Waiting for pod ss-2 to enter Running - 
Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 24 13:28:44.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8388 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 24 13:28:44.278: INFO: stderr: "I0424 13:28:44.173595 1059 log.go:172] (0xc0006a4420) (0xc00057c640) Create stream\nI0424 13:28:44.173650 1059 log.go:172] (0xc0006a4420) (0xc00057c640) Stream added, broadcasting: 1\nI0424 13:28:44.175856 1059 log.go:172] (0xc0006a4420) Reply frame received for 1\nI0424 13:28:44.175918 1059 log.go:172] (0xc0006a4420) (0xc0005581e0) Create stream\nI0424 13:28:44.175935 1059 log.go:172] (0xc0006a4420) (0xc0005581e0) Stream added, broadcasting: 3\nI0424 13:28:44.177072 1059 log.go:172] (0xc0006a4420) Reply frame received for 3\nI0424 13:28:44.177104 1059 log.go:172] (0xc0006a4420) (0xc00057c6e0) Create stream\nI0424 13:28:44.177234 1059 log.go:172] (0xc0006a4420) (0xc00057c6e0) Stream added, broadcasting: 5\nI0424 13:28:44.178359 1059 log.go:172] (0xc0006a4420) Reply frame received for 5\nI0424 13:28:44.270712 1059 log.go:172] (0xc0006a4420) Data frame received for 3\nI0424 13:28:44.270751 1059 log.go:172] (0xc0005581e0) (3) Data frame handling\nI0424 13:28:44.270765 1059 log.go:172] (0xc0005581e0) (3) Data frame sent\nI0424 13:28:44.270794 1059 log.go:172] (0xc0006a4420) Data frame received for 5\nI0424 13:28:44.270805 1059 log.go:172] (0xc00057c6e0) (5) Data frame handling\nI0424 13:28:44.270815 1059 log.go:172] (0xc00057c6e0) (5) Data frame sent\nI0424 13:28:44.270826 1059 log.go:172] (0xc0006a4420) Data frame received for 5\nI0424 13:28:44.270835 1059 log.go:172] (0xc00057c6e0) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0424 13:28:44.270948 1059 log.go:172] (0xc0006a4420) Data frame received for 3\nI0424 13:28:44.270988 1059 log.go:172] (0xc0005581e0) (3) Data frame handling\nI0424 13:28:44.272459 
1059 log.go:172] (0xc0006a4420) Data frame received for 1\nI0424 13:28:44.272486 1059 log.go:172] (0xc00057c640) (1) Data frame handling\nI0424 13:28:44.272513 1059 log.go:172] (0xc00057c640) (1) Data frame sent\nI0424 13:28:44.272677 1059 log.go:172] (0xc0006a4420) (0xc00057c640) Stream removed, broadcasting: 1\nI0424 13:28:44.272793 1059 log.go:172] (0xc0006a4420) Go away received\nI0424 13:28:44.273099 1059 log.go:172] (0xc0006a4420) (0xc00057c640) Stream removed, broadcasting: 1\nI0424 13:28:44.273274 1059 log.go:172] (0xc0006a4420) (0xc0005581e0) Stream removed, broadcasting: 3\nI0424 13:28:44.273293 1059 log.go:172] (0xc0006a4420) (0xc00057c6e0) Stream removed, broadcasting: 5\n" Apr 24 13:28:44.278: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 24 13:28:44.278: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 24 13:28:44.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8388 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 24 13:28:44.506: INFO: stderr: "I0424 13:28:44.398470 1078 log.go:172] (0xc00059e0b0) (0xc0007e0640) Create stream\nI0424 13:28:44.398517 1078 log.go:172] (0xc00059e0b0) (0xc0007e0640) Stream added, broadcasting: 1\nI0424 13:28:44.400815 1078 log.go:172] (0xc00059e0b0) Reply frame received for 1\nI0424 13:28:44.400853 1078 log.go:172] (0xc00059e0b0) (0xc000654320) Create stream\nI0424 13:28:44.400868 1078 log.go:172] (0xc00059e0b0) (0xc000654320) Stream added, broadcasting: 3\nI0424 13:28:44.401914 1078 log.go:172] (0xc00059e0b0) Reply frame received for 3\nI0424 13:28:44.401960 1078 log.go:172] (0xc00059e0b0) (0xc0007e06e0) Create stream\nI0424 13:28:44.401992 1078 log.go:172] (0xc00059e0b0) (0xc0007e06e0) Stream added, broadcasting: 5\nI0424 13:28:44.403079 1078 log.go:172] (0xc00059e0b0) Reply frame received for 5\nI0424 
13:28:44.467025 1078 log.go:172] (0xc00059e0b0) Data frame received for 5\nI0424 13:28:44.467058 1078 log.go:172] (0xc0007e06e0) (5) Data frame handling\nI0424 13:28:44.467078 1078 log.go:172] (0xc0007e06e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0424 13:28:44.498079 1078 log.go:172] (0xc00059e0b0) Data frame received for 3\nI0424 13:28:44.498120 1078 log.go:172] (0xc000654320) (3) Data frame handling\nI0424 13:28:44.498150 1078 log.go:172] (0xc000654320) (3) Data frame sent\nI0424 13:28:44.498165 1078 log.go:172] (0xc00059e0b0) Data frame received for 3\nI0424 13:28:44.498186 1078 log.go:172] (0xc000654320) (3) Data frame handling\nI0424 13:28:44.498353 1078 log.go:172] (0xc00059e0b0) Data frame received for 5\nI0424 13:28:44.498374 1078 log.go:172] (0xc0007e06e0) (5) Data frame handling\nI0424 13:28:44.499965 1078 log.go:172] (0xc00059e0b0) Data frame received for 1\nI0424 13:28:44.499991 1078 log.go:172] (0xc0007e0640) (1) Data frame handling\nI0424 13:28:44.500001 1078 log.go:172] (0xc0007e0640) (1) Data frame sent\nI0424 13:28:44.500013 1078 log.go:172] (0xc00059e0b0) (0xc0007e0640) Stream removed, broadcasting: 1\nI0424 13:28:44.500045 1078 log.go:172] (0xc00059e0b0) Go away received\nI0424 13:28:44.500401 1078 log.go:172] (0xc00059e0b0) (0xc0007e0640) Stream removed, broadcasting: 1\nI0424 13:28:44.500416 1078 log.go:172] (0xc00059e0b0) (0xc000654320) Stream removed, broadcasting: 3\nI0424 13:28:44.500425 1078 log.go:172] (0xc00059e0b0) (0xc0007e06e0) Stream removed, broadcasting: 5\n" Apr 24 13:28:44.506: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 24 13:28:44.506: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 24 13:28:44.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8388 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' 
Apr 24 13:28:44.758: INFO: stderr: "I0424 13:28:44.637326 1099 log.go:172] (0xc0009b8420) (0xc0003706e0) Create stream\nI0424 13:28:44.637394 1099 log.go:172] (0xc0009b8420) (0xc0003706e0) Stream added, broadcasting: 1\nI0424 13:28:44.639749 1099 log.go:172] (0xc0009b8420) Reply frame received for 1\nI0424 13:28:44.639806 1099 log.go:172] (0xc0009b8420) (0xc000a08000) Create stream\nI0424 13:28:44.640367 1099 log.go:172] (0xc0009b8420) (0xc000a08000) Stream added, broadcasting: 3\nI0424 13:28:44.642695 1099 log.go:172] (0xc0009b8420) Reply frame received for 3\nI0424 13:28:44.642763 1099 log.go:172] (0xc0009b8420) (0xc000370000) Create stream\nI0424 13:28:44.642784 1099 log.go:172] (0xc0009b8420) (0xc000370000) Stream added, broadcasting: 5\nI0424 13:28:44.643942 1099 log.go:172] (0xc0009b8420) Reply frame received for 5\nI0424 13:28:44.717305 1099 log.go:172] (0xc0009b8420) Data frame received for 5\nI0424 13:28:44.717354 1099 log.go:172] (0xc000370000) (5) Data frame handling\nI0424 13:28:44.717379 1099 log.go:172] (0xc000370000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0424 13:28:44.749500 1099 log.go:172] (0xc0009b8420) Data frame received for 3\nI0424 13:28:44.749537 1099 log.go:172] (0xc000a08000) (3) Data frame handling\nI0424 13:28:44.749551 1099 log.go:172] (0xc000a08000) (3) Data frame sent\nI0424 13:28:44.749563 1099 log.go:172] (0xc0009b8420) Data frame received for 3\nI0424 13:28:44.749577 1099 log.go:172] (0xc000a08000) (3) Data frame handling\nI0424 13:28:44.749999 1099 log.go:172] (0xc0009b8420) Data frame received for 5\nI0424 13:28:44.750026 1099 log.go:172] (0xc000370000) (5) Data frame handling\nI0424 13:28:44.751673 1099 log.go:172] (0xc0009b8420) Data frame received for 1\nI0424 13:28:44.751704 1099 log.go:172] (0xc0003706e0) (1) Data frame handling\nI0424 13:28:44.751727 1099 log.go:172] (0xc0003706e0) (1) Data frame sent\nI0424 13:28:44.751742 1099 log.go:172] (0xc0009b8420) (0xc0003706e0) Stream removed, 
broadcasting: 1\nI0424 13:28:44.751821 1099 log.go:172] (0xc0009b8420) Go away received\nI0424 13:28:44.752215 1099 log.go:172] (0xc0009b8420) (0xc0003706e0) Stream removed, broadcasting: 1\nI0424 13:28:44.752236 1099 log.go:172] (0xc0009b8420) (0xc000a08000) Stream removed, broadcasting: 3\nI0424 13:28:44.752248 1099 log.go:172] (0xc0009b8420) (0xc000370000) Stream removed, broadcasting: 5\n" Apr 24 13:28:44.758: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 24 13:28:44.758: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 24 13:28:44.758: INFO: Waiting for statefulset status.replicas updated to 0 Apr 24 13:28:44.762: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Apr 24 13:28:54.807: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 24 13:28:54.807: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 24 13:28:54.807: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 24 13:28:54.844: INFO: POD NODE PHASE GRACE CONDITIONS Apr 24 13:28:54.844: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:03 +0000 UTC }] Apr 24 13:28:54.844: INFO: ss-1 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady 
False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:23 +0000 UTC }] Apr 24 13:28:54.844: INFO: ss-2 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:23 +0000 UTC }] Apr 24 13:28:54.844: INFO: Apr 24 13:28:54.844: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 24 13:28:55.849: INFO: POD NODE PHASE GRACE CONDITIONS Apr 24 13:28:55.849: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:03 +0000 UTC }] Apr 24 13:28:55.849: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:23 +0000 UTC }] Apr 24 13:28:55.849: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:23 +0000 UTC } {Ready 
False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:23 +0000 UTC }] Apr 24 13:28:55.849: INFO: Apr 24 13:28:55.849: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 24 13:28:56.855: INFO: POD NODE PHASE GRACE CONDITIONS Apr 24 13:28:56.855: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:03 +0000 UTC }] Apr 24 13:28:56.855: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:23 +0000 UTC }] Apr 24 13:28:56.855: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:23 +0000 UTC }] 
Apr 24 13:28:56.855: INFO: Apr 24 13:28:56.855: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 24 13:28:57.880: INFO: POD NODE PHASE GRACE CONDITIONS Apr 24 13:28:57.880: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:03 +0000 UTC }] Apr 24 13:28:57.880: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:23 +0000 UTC }] Apr 24 13:28:57.880: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:23 +0000 UTC }] Apr 24 13:28:57.880: INFO: Apr 24 13:28:57.880: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 24 13:28:58.885: INFO: POD NODE PHASE GRACE CONDITIONS Apr 24 13:28:58.885: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 
13:28:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:03 +0000 UTC }] Apr 24 13:28:58.885: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:23 +0000 UTC }] Apr 24 13:28:58.885: INFO: Apr 24 13:28:58.885: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 24 13:28:59.890: INFO: POD NODE PHASE GRACE CONDITIONS Apr 24 13:28:59.890: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:03 +0000 UTC }] Apr 24 13:28:59.890: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:23 +0000 UTC }] Apr 24 13:28:59.890: INFO: Apr 24 13:28:59.890: 
INFO: StatefulSet ss has not reached scale 0, at 2 Apr 24 13:29:00.894: INFO: POD NODE PHASE GRACE CONDITIONS Apr 24 13:29:00.894: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:03 +0000 UTC }] Apr 24 13:29:00.894: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:23 +0000 UTC }] Apr 24 13:29:00.894: INFO: Apr 24 13:29:00.894: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 24 13:29:01.898: INFO: POD NODE PHASE GRACE CONDITIONS Apr 24 13:29:01.898: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:28:23 +0000 UTC }] Apr 24 13:29:01.898: INFO: Apr 24 13:29:01.898: INFO: StatefulSet ss has not reached scale 0, at 1 Apr 24 13:29:02.903: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.916737337s Apr 24 13:29:03.906: INFO: Verifying 
statefulset ss doesn't scale past 0 for another 911.8646ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-8388 Apr 24 13:29:04.910: INFO: Scaling statefulset ss to 0 Apr 24 13:29:04.917: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 24 13:29:04.920: INFO: Deleting all statefulset in ns statefulset-8388 Apr 24 13:29:04.922: INFO: Scaling statefulset ss to 0 Apr 24 13:29:04.929: INFO: Waiting for statefulset status.replicas updated to 0 Apr 24 13:29:04.931: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:29:04.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8388" for this suite. 
Apr 24 13:29:10.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:29:11.071: INFO: namespace statefulset-8388 deletion completed in 6.122257103s • [SLOW TEST:68.154 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:29:11.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-01902697-658e-4ad1-94d0-9b787f72cb82 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:29:11.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7587" for this suite. 
Apr 24 13:29:17.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:29:17.231: INFO: namespace secrets-7587 deletion completed in 6.084879477s • [SLOW TEST:6.159 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:29:17.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-aa93156a-f741-4616-ae91-4b0cb623c8be STEP: Creating a pod to test consume configMaps Apr 24 13:29:17.329: INFO: Waiting up to 5m0s for pod "pod-configmaps-dfdde9d2-140f-498b-a4cd-afa05e7ef732" in namespace "configmap-1658" to be "success or failure" Apr 24 13:29:17.332: INFO: Pod "pod-configmaps-dfdde9d2-140f-498b-a4cd-afa05e7ef732": Phase="Pending", Reason="", readiness=false. Elapsed: 2.299903ms Apr 24 13:29:19.335: INFO: Pod "pod-configmaps-dfdde9d2-140f-498b-a4cd-afa05e7ef732": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006055692s Apr 24 13:29:21.340: INFO: Pod "pod-configmaps-dfdde9d2-140f-498b-a4cd-afa05e7ef732": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010717529s STEP: Saw pod success Apr 24 13:29:21.340: INFO: Pod "pod-configmaps-dfdde9d2-140f-498b-a4cd-afa05e7ef732" satisfied condition "success or failure" Apr 24 13:29:21.343: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-dfdde9d2-140f-498b-a4cd-afa05e7ef732 container configmap-volume-test: STEP: delete the pod Apr 24 13:29:21.415: INFO: Waiting for pod pod-configmaps-dfdde9d2-140f-498b-a4cd-afa05e7ef732 to disappear Apr 24 13:29:21.429: INFO: Pod pod-configmaps-dfdde9d2-140f-498b-a4cd-afa05e7ef732 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:29:21.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1658" for this suite. Apr 24 13:29:27.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:29:27.526: INFO: namespace configmap-1658 deletion completed in 6.092670331s • [SLOW TEST:10.294 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container 
Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:29:27.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 24 13:29:30.624: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:29:30.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6073" for this suite. 
Apr 24 13:29:36.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:29:37.034: INFO: namespace container-runtime-6073 deletion completed in 6.286121935s • [SLOW TEST:9.507 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:29:37.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-3767/configmap-test-8c151405-f6c2-42d6-88e3-b691bb80e632 STEP: Creating a pod to test consume configMaps Apr 24 13:29:37.138: INFO: Waiting up to 5m0s for pod "pod-configmaps-7fb63d47-1bc5-481b-888a-b16e968c8cb0" in namespace "configmap-3767" to be "success or failure" Apr 24 13:29:37.142: INFO: Pod 
"pod-configmaps-7fb63d47-1bc5-481b-888a-b16e968c8cb0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.358759ms Apr 24 13:29:39.174: INFO: Pod "pod-configmaps-7fb63d47-1bc5-481b-888a-b16e968c8cb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036213676s Apr 24 13:29:41.178: INFO: Pod "pod-configmaps-7fb63d47-1bc5-481b-888a-b16e968c8cb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040576068s STEP: Saw pod success Apr 24 13:29:41.178: INFO: Pod "pod-configmaps-7fb63d47-1bc5-481b-888a-b16e968c8cb0" satisfied condition "success or failure" Apr 24 13:29:41.181: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-7fb63d47-1bc5-481b-888a-b16e968c8cb0 container env-test: STEP: delete the pod Apr 24 13:29:41.201: INFO: Waiting for pod pod-configmaps-7fb63d47-1bc5-481b-888a-b16e968c8cb0 to disappear Apr 24 13:29:41.205: INFO: Pod pod-configmaps-7fb63d47-1bc5-481b-888a-b16e968c8cb0 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:29:41.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3767" for this suite. 
Apr 24 13:29:47.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:29:47.345: INFO: namespace configmap-3767 deletion completed in 6.136906614s • [SLOW TEST:10.311 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:29:47.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-34a2a3b1-a5ac-4c06-a07a-6f4e7a0840dc Apr 24 13:29:47.415: INFO: Pod name my-hostname-basic-34a2a3b1-a5ac-4c06-a07a-6f4e7a0840dc: Found 0 pods out of 1 Apr 24 13:29:52.419: INFO: Pod name my-hostname-basic-34a2a3b1-a5ac-4c06-a07a-6f4e7a0840dc: Found 1 pods out of 1 Apr 24 13:29:52.419: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-34a2a3b1-a5ac-4c06-a07a-6f4e7a0840dc" are running Apr 24 13:29:52.423: INFO: Pod "my-hostname-basic-34a2a3b1-a5ac-4c06-a07a-6f4e7a0840dc-qnf52" is running (conditions: [{Type:Initialized Status:True 
LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-24 13:29:47 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-24 13:29:50 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-24 13:29:50 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-24 13:29:47 +0000 UTC Reason: Message:}]) Apr 24 13:29:52.423: INFO: Trying to dial the pod Apr 24 13:29:57.435: INFO: Controller my-hostname-basic-34a2a3b1-a5ac-4c06-a07a-6f4e7a0840dc: Got expected result from replica 1 [my-hostname-basic-34a2a3b1-a5ac-4c06-a07a-6f4e7a0840dc-qnf52]: "my-hostname-basic-34a2a3b1-a5ac-4c06-a07a-6f4e7a0840dc-qnf52", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:29:57.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4" for this suite. 
Apr 24 13:30:04.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:30:05.051: INFO: namespace replication-controller-4 deletion completed in 7.612957401s • [SLOW TEST:17.705 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:30:05.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 24 13:30:05.272: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Apr 24 13:30:07.349: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:30:08.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5841" for this suite. Apr 24 13:30:14.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:30:14.508: INFO: namespace replication-controller-5841 deletion completed in 6.12324241s • [SLOW TEST:9.456 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:30:14.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be 
terminated STEP: the termination message should be set Apr 24 13:30:18.598: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:30:18.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5227" for this suite. Apr 24 13:30:24.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:30:24.724: INFO: namespace container-runtime-5227 deletion completed in 6.089677228s • [SLOW TEST:10.216 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:30:24.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default 
service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0424 13:30:36.103092 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 24 13:30:36.103: INFO: For apiserver_request_total:
	For apiserver_request_latencies_summary:
	For apiserver_init_events_total:
	For garbage_collector_attempt_to_delete_queue_latency:
	For garbage_collector_attempt_to_delete_work_duration:
	For garbage_collector_attempt_to_orphan_queue_latency:
	For garbage_collector_attempt_to_orphan_work_duration:
	For garbage_collector_dirty_processing_latency_microseconds:
	For garbage_collector_event_processing_latency_microseconds:
	For garbage_collector_graph_changes_queue_latency:
	For garbage_collector_graph_changes_work_duration:
	For garbage_collector_orphan_processing_latency_microseconds:
	For namespace_queue_latency:
	For namespace_queue_latency_sum:
	For namespace_queue_latency_count:
	For namespace_retries:
	For namespace_work_duration:
	For namespace_work_duration_sum:
	For namespace_work_duration_count:
	For function_duration_seconds:
	For errors_total:
	For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:30:36.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1741" for this suite.
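The gc spec above gives half of the pods created by simpletest-rc-to-be-deleted a second owner reference to simpletest-rc-to-stay, then deletes the first RC and expects the dually-owned pods to survive. A minimal stdlib-only model of that rule, an illustration only (not the actual kube-controller-manager garbage collector code; only the RC names are taken from the log):

```go
package main

import "fmt"

// collectible models the GC invariant the spec exercises: a dependent
// object is garbage-collected only once every one of its owners has
// been deleted. An object with no owners is never collected this way.
func collectible(owners []string, deleted map[string]bool) bool {
	if len(owners) == 0 {
		return false
	}
	for _, owner := range owners {
		if !deleted[owner] {
			return false // at least one owner still exists
		}
	}
	return true
}

func main() {
	deleted := map[string]bool{"simpletest-rc-to-be-deleted": true}

	// Pod owned only by the deleted RC: eligible for collection.
	fmt.Println(collectible([]string{"simpletest-rc-to-be-deleted"}, deleted))

	// Pod owned by both RCs: survives, because simpletest-rc-to-stay remains.
	fmt.Println(collectible([]string{"simpletest-rc-to-be-deleted", "simpletest-rc-to-stay"}, deleted))
}
```

With foreground deletion, the owner additionally waits for its exclusively-owned dependents before disappearing, which is why the spec name stresses "owner that's waiting for dependents to be deleted".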
Apr 24 13:30:42.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:30:42.287: INFO: namespace gc-1741 deletion completed in 6.181605725s
• [SLOW TEST:17.562 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:30:42.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-5492
I0424 13:30:42.439806 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5492, replica count: 1
I0424 13:30:43.490286 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0424 13:30:44.490508 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0424 13:30:45.490723 6 runners.go:180] svc-latency-rc Pods: 1 out
of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0424 13:30:46.490916 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 24 13:30:46.626: INFO: Created: latency-svc-mbgff Apr 24 13:30:46.647: INFO: Got endpoints: latency-svc-mbgff [56.819545ms] Apr 24 13:30:46.677: INFO: Created: latency-svc-db2lj Apr 24 13:30:46.743: INFO: Got endpoints: latency-svc-db2lj [94.803083ms] Apr 24 13:30:46.760: INFO: Created: latency-svc-927t4 Apr 24 13:30:46.790: INFO: Got endpoints: latency-svc-927t4 [142.645662ms] Apr 24 13:30:46.820: INFO: Created: latency-svc-54v6j Apr 24 13:30:46.834: INFO: Got endpoints: latency-svc-54v6j [185.471129ms] Apr 24 13:30:46.894: INFO: Created: latency-svc-wd97g Apr 24 13:30:46.900: INFO: Got endpoints: latency-svc-wd97g [251.041047ms] Apr 24 13:30:46.921: INFO: Created: latency-svc-79x4v Apr 24 13:30:46.936: INFO: Got endpoints: latency-svc-79x4v [286.967929ms] Apr 24 13:30:46.970: INFO: Created: latency-svc-n87n7 Apr 24 13:30:46.984: INFO: Got endpoints: latency-svc-n87n7 [335.028391ms] Apr 24 13:30:47.026: INFO: Created: latency-svc-n9wgb Apr 24 13:30:47.028: INFO: Got endpoints: latency-svc-n9wgb [378.575218ms] Apr 24 13:30:47.055: INFO: Created: latency-svc-5jsp5 Apr 24 13:30:47.066: INFO: Got endpoints: latency-svc-5jsp5 [416.314651ms] Apr 24 13:30:47.088: INFO: Created: latency-svc-8xfd6 Apr 24 13:30:47.102: INFO: Got endpoints: latency-svc-8xfd6 [451.979179ms] Apr 24 13:30:47.125: INFO: Created: latency-svc-t4lvf Apr 24 13:30:47.156: INFO: Got endpoints: latency-svc-t4lvf [505.811755ms] Apr 24 13:30:47.179: INFO: Created: latency-svc-jv9jz Apr 24 13:30:47.206: INFO: Got endpoints: latency-svc-jv9jz [554.498881ms] Apr 24 13:30:47.228: INFO: Created: latency-svc-kbhqt Apr 24 13:30:47.244: INFO: Got endpoints: latency-svc-kbhqt [593.054899ms] Apr 24 13:30:47.289: INFO: Created: 
latency-svc-kvqzd Apr 24 13:30:47.292: INFO: Got endpoints: latency-svc-kvqzd [640.165243ms] Apr 24 13:30:47.316: INFO: Created: latency-svc-ds9jw Apr 24 13:30:47.328: INFO: Got endpoints: latency-svc-ds9jw [677.656044ms] Apr 24 13:30:47.353: INFO: Created: latency-svc-nrdlb Apr 24 13:30:47.364: INFO: Got endpoints: latency-svc-nrdlb [712.433488ms] Apr 24 13:30:47.423: INFO: Created: latency-svc-9pxjm Apr 24 13:30:47.443: INFO: Got endpoints: latency-svc-9pxjm [700.430049ms] Apr 24 13:30:47.445: INFO: Created: latency-svc-kk46n Apr 24 13:30:47.468: INFO: Got endpoints: latency-svc-kk46n [677.310057ms] Apr 24 13:30:47.498: INFO: Created: latency-svc-4f7jn Apr 24 13:30:47.507: INFO: Got endpoints: latency-svc-4f7jn [672.871293ms] Apr 24 13:30:47.554: INFO: Created: latency-svc-tfsl5 Apr 24 13:30:47.556: INFO: Got endpoints: latency-svc-tfsl5 [655.642605ms] Apr 24 13:30:47.623: INFO: Created: latency-svc-646kp Apr 24 13:30:47.639: INFO: Got endpoints: latency-svc-646kp [702.492265ms] Apr 24 13:30:47.678: INFO: Created: latency-svc-8d8pb Apr 24 13:30:47.680: INFO: Got endpoints: latency-svc-8d8pb [695.774479ms] Apr 24 13:30:47.702: INFO: Created: latency-svc-mfrbl Apr 24 13:30:47.711: INFO: Got endpoints: latency-svc-mfrbl [682.758727ms] Apr 24 13:30:47.737: INFO: Created: latency-svc-hhvtz Apr 24 13:30:47.754: INFO: Got endpoints: latency-svc-hhvtz [687.742462ms] Apr 24 13:30:47.804: INFO: Created: latency-svc-vdfh2 Apr 24 13:30:47.827: INFO: Got endpoints: latency-svc-vdfh2 [724.763559ms] Apr 24 13:30:47.829: INFO: Created: latency-svc-rwpwq Apr 24 13:30:47.835: INFO: Got endpoints: latency-svc-rwpwq [678.751976ms] Apr 24 13:30:47.858: INFO: Created: latency-svc-znxw9 Apr 24 13:30:47.872: INFO: Got endpoints: latency-svc-znxw9 [665.923478ms] Apr 24 13:30:47.894: INFO: Created: latency-svc-tzrqr Apr 24 13:30:47.929: INFO: Got endpoints: latency-svc-tzrqr [684.945541ms] Apr 24 13:30:47.959: INFO: Created: latency-svc-jv84p Apr 24 13:30:47.974: INFO: Got endpoints: 
latency-svc-jv84p [682.034842ms] Apr 24 13:30:47.994: INFO: Created: latency-svc-s4p5l Apr 24 13:30:48.024: INFO: Got endpoints: latency-svc-s4p5l [695.481948ms] Apr 24 13:30:48.025: INFO: Created: latency-svc-8m7l6 Apr 24 13:30:48.060: INFO: Got endpoints: latency-svc-8m7l6 [695.827348ms] Apr 24 13:30:48.075: INFO: Created: latency-svc-7xwn4 Apr 24 13:30:48.101: INFO: Got endpoints: latency-svc-7xwn4 [657.457149ms] Apr 24 13:30:48.134: INFO: Created: latency-svc-8nzbv Apr 24 13:30:48.150: INFO: Got endpoints: latency-svc-8nzbv [681.993936ms] Apr 24 13:30:48.211: INFO: Created: latency-svc-jgn2k Apr 24 13:30:48.241: INFO: Created: latency-svc-j5r2z Apr 24 13:30:48.241: INFO: Got endpoints: latency-svc-jgn2k [734.134577ms] Apr 24 13:30:48.296: INFO: Got endpoints: latency-svc-j5r2z [740.053201ms] Apr 24 13:30:48.365: INFO: Created: latency-svc-wr9tp Apr 24 13:30:48.377: INFO: Got endpoints: latency-svc-wr9tp [738.447824ms] Apr 24 13:30:48.402: INFO: Created: latency-svc-7k8bl Apr 24 13:30:48.420: INFO: Got endpoints: latency-svc-7k8bl [739.770138ms] Apr 24 13:30:48.493: INFO: Created: latency-svc-h6chm Apr 24 13:30:48.498: INFO: Got endpoints: latency-svc-h6chm [787.20579ms] Apr 24 13:30:48.530: INFO: Created: latency-svc-9bdm6 Apr 24 13:30:48.547: INFO: Got endpoints: latency-svc-9bdm6 [792.375293ms] Apr 24 13:30:48.566: INFO: Created: latency-svc-kg5mt Apr 24 13:30:48.583: INFO: Got endpoints: latency-svc-kg5mt [755.551675ms] Apr 24 13:30:48.618: INFO: Created: latency-svc-qb2k9 Apr 24 13:30:48.622: INFO: Got endpoints: latency-svc-qb2k9 [786.739254ms] Apr 24 13:30:48.649: INFO: Created: latency-svc-zr425 Apr 24 13:30:48.664: INFO: Got endpoints: latency-svc-zr425 [792.678437ms] Apr 24 13:30:48.685: INFO: Created: latency-svc-blwg5 Apr 24 13:30:48.700: INFO: Got endpoints: latency-svc-blwg5 [770.970841ms] Apr 24 13:30:48.785: INFO: Created: latency-svc-56xfz Apr 24 13:30:48.789: INFO: Got endpoints: latency-svc-56xfz [815.030613ms] Apr 24 13:30:48.830: INFO: 
Created: latency-svc-5ss6b Apr 24 13:30:48.839: INFO: Got endpoints: latency-svc-5ss6b [814.527301ms] Apr 24 13:30:48.864: INFO: Created: latency-svc-7v7bv Apr 24 13:30:48.923: INFO: Got endpoints: latency-svc-7v7bv [862.309121ms] Apr 24 13:30:48.932: INFO: Created: latency-svc-vrn2h Apr 24 13:30:48.950: INFO: Got endpoints: latency-svc-vrn2h [848.709025ms] Apr 24 13:30:48.975: INFO: Created: latency-svc-hgqch Apr 24 13:30:48.983: INFO: Got endpoints: latency-svc-hgqch [833.399483ms] Apr 24 13:30:49.005: INFO: Created: latency-svc-dx46f Apr 24 13:30:49.015: INFO: Got endpoints: latency-svc-dx46f [773.483708ms] Apr 24 13:30:49.086: INFO: Created: latency-svc-q87dd Apr 24 13:30:49.092: INFO: Got endpoints: latency-svc-q87dd [796.673571ms] Apr 24 13:30:49.110: INFO: Created: latency-svc-ph9hl Apr 24 13:30:49.122: INFO: Got endpoints: latency-svc-ph9hl [744.583715ms] Apr 24 13:30:49.142: INFO: Created: latency-svc-rs5z9 Apr 24 13:30:49.154: INFO: Got endpoints: latency-svc-rs5z9 [733.973967ms] Apr 24 13:30:49.178: INFO: Created: latency-svc-vlx8z Apr 24 13:30:49.228: INFO: Got endpoints: latency-svc-vlx8z [730.084472ms] Apr 24 13:30:49.248: INFO: Created: latency-svc-gzz6w Apr 24 13:30:49.313: INFO: Got endpoints: latency-svc-gzz6w [766.557959ms] Apr 24 13:30:49.380: INFO: Created: latency-svc-qhwqs Apr 24 13:30:49.404: INFO: Got endpoints: latency-svc-qhwqs [821.582334ms] Apr 24 13:30:49.436: INFO: Created: latency-svc-xpf5l Apr 24 13:30:49.450: INFO: Got endpoints: latency-svc-xpf5l [827.567887ms] Apr 24 13:30:49.470: INFO: Created: latency-svc-9hqrn Apr 24 13:30:49.527: INFO: Got endpoints: latency-svc-9hqrn [862.867349ms] Apr 24 13:30:49.533: INFO: Created: latency-svc-dnf4q Apr 24 13:30:49.540: INFO: Got endpoints: latency-svc-dnf4q [839.244944ms] Apr 24 13:30:49.561: INFO: Created: latency-svc-qn2fg Apr 24 13:30:49.582: INFO: Got endpoints: latency-svc-qn2fg [793.366177ms] Apr 24 13:30:49.605: INFO: Created: latency-svc-dzn6w Apr 24 13:30:49.621: INFO: Got 
endpoints: latency-svc-dzn6w [782.436854ms] Apr 24 13:30:49.672: INFO: Created: latency-svc-wzqds Apr 24 13:30:49.674: INFO: Got endpoints: latency-svc-wzqds [751.720304ms] Apr 24 13:30:49.725: INFO: Created: latency-svc-jfhmt Apr 24 13:30:49.738: INFO: Got endpoints: latency-svc-jfhmt [788.578167ms] Apr 24 13:30:49.761: INFO: Created: latency-svc-fh7sw Apr 24 13:30:49.769: INFO: Got endpoints: latency-svc-fh7sw [785.183352ms] Apr 24 13:30:49.815: INFO: Created: latency-svc-wjpjr Apr 24 13:30:49.817: INFO: Got endpoints: latency-svc-wjpjr [802.869844ms] Apr 24 13:30:49.838: INFO: Created: latency-svc-zxwbr Apr 24 13:30:49.853: INFO: Got endpoints: latency-svc-zxwbr [760.874336ms] Apr 24 13:30:49.873: INFO: Created: latency-svc-drkzs Apr 24 13:30:49.889: INFO: Got endpoints: latency-svc-drkzs [767.117119ms] Apr 24 13:30:49.909: INFO: Created: latency-svc-mj5jg Apr 24 13:30:49.946: INFO: Got endpoints: latency-svc-mj5jg [792.445964ms] Apr 24 13:30:49.964: INFO: Created: latency-svc-7qx68 Apr 24 13:30:49.980: INFO: Got endpoints: latency-svc-7qx68 [751.412919ms] Apr 24 13:30:50.007: INFO: Created: latency-svc-cvwd9 Apr 24 13:30:50.016: INFO: Got endpoints: latency-svc-cvwd9 [703.031793ms] Apr 24 13:30:50.041: INFO: Created: latency-svc-bqg29 Apr 24 13:30:50.115: INFO: Got endpoints: latency-svc-bqg29 [710.452459ms] Apr 24 13:30:50.116: INFO: Created: latency-svc-hh5md Apr 24 13:30:50.124: INFO: Got endpoints: latency-svc-hh5md [108.045631ms] Apr 24 13:30:50.151: INFO: Created: latency-svc-c9lmq Apr 24 13:30:50.167: INFO: Got endpoints: latency-svc-c9lmq [717.227115ms] Apr 24 13:30:50.186: INFO: Created: latency-svc-q2gvg Apr 24 13:30:50.252: INFO: Got endpoints: latency-svc-q2gvg [724.750008ms] Apr 24 13:30:50.268: INFO: Created: latency-svc-sjbpx Apr 24 13:30:50.328: INFO: Got endpoints: latency-svc-sjbpx [788.608516ms] Apr 24 13:30:50.403: INFO: Created: latency-svc-62xmz Apr 24 13:30:50.419: INFO: Got endpoints: latency-svc-62xmz [836.842528ms] Apr 24 13:30:50.444: 
INFO: Created: latency-svc-n6bw6 Apr 24 13:30:50.474: INFO: Got endpoints: latency-svc-n6bw6 [852.546501ms] Apr 24 13:30:50.540: INFO: Created: latency-svc-h4vjk Apr 24 13:30:50.569: INFO: Got endpoints: latency-svc-h4vjk [894.325339ms] Apr 24 13:30:50.569: INFO: Created: latency-svc-mb252 Apr 24 13:30:50.583: INFO: Got endpoints: latency-svc-mb252 [844.457055ms] Apr 24 13:30:50.606: INFO: Created: latency-svc-w8wx7 Apr 24 13:30:50.624: INFO: Got endpoints: latency-svc-w8wx7 [855.535313ms] Apr 24 13:30:50.678: INFO: Created: latency-svc-sgc26 Apr 24 13:30:50.684: INFO: Got endpoints: latency-svc-sgc26 [866.440241ms] Apr 24 13:30:50.713: INFO: Created: latency-svc-62mlh Apr 24 13:30:50.754: INFO: Got endpoints: latency-svc-62mlh [900.728603ms] Apr 24 13:30:50.821: INFO: Created: latency-svc-gxwqh Apr 24 13:30:50.862: INFO: Got endpoints: latency-svc-gxwqh [972.48138ms] Apr 24 13:30:50.901: INFO: Created: latency-svc-rdxzw Apr 24 13:30:50.913: INFO: Got endpoints: latency-svc-rdxzw [966.456804ms] Apr 24 13:30:50.965: INFO: Created: latency-svc-dhnrx Apr 24 13:30:50.979: INFO: Got endpoints: latency-svc-dhnrx [999.499874ms] Apr 24 13:30:51.001: INFO: Created: latency-svc-tn5dz Apr 24 13:30:51.016: INFO: Got endpoints: latency-svc-tn5dz [900.736015ms] Apr 24 13:30:51.039: INFO: Created: latency-svc-6lppg Apr 24 13:30:51.052: INFO: Got endpoints: latency-svc-6lppg [927.898187ms] Apr 24 13:30:51.091: INFO: Created: latency-svc-dzzdt Apr 24 13:30:51.116: INFO: Got endpoints: latency-svc-dzzdt [949.190004ms] Apr 24 13:30:51.117: INFO: Created: latency-svc-5xjt4 Apr 24 13:30:51.131: INFO: Got endpoints: latency-svc-5xjt4 [879.133668ms] Apr 24 13:30:51.150: INFO: Created: latency-svc-slxmt Apr 24 13:30:51.167: INFO: Got endpoints: latency-svc-slxmt [838.836408ms] Apr 24 13:30:51.229: INFO: Created: latency-svc-q66d2 Apr 24 13:30:51.232: INFO: Got endpoints: latency-svc-q66d2 [812.995165ms] Apr 24 13:30:51.279: INFO: Created: latency-svc-cr5bt Apr 24 13:30:51.288: INFO: Got 
endpoints: latency-svc-cr5bt [813.81373ms] Apr 24 13:30:51.314: INFO: Created: latency-svc-k8bbv Apr 24 13:30:51.325: INFO: Got endpoints: latency-svc-k8bbv [756.102666ms] Apr 24 13:30:51.378: INFO: Created: latency-svc-w6rx9 Apr 24 13:30:51.396: INFO: Got endpoints: latency-svc-w6rx9 [813.305171ms] Apr 24 13:30:51.415: INFO: Created: latency-svc-xdpbc Apr 24 13:30:51.447: INFO: Got endpoints: latency-svc-xdpbc [822.303842ms] Apr 24 13:30:51.504: INFO: Created: latency-svc-n7qg6 Apr 24 13:30:51.528: INFO: Created: latency-svc-gp786 Apr 24 13:30:51.529: INFO: Got endpoints: latency-svc-n7qg6 [844.614853ms] Apr 24 13:30:51.558: INFO: Got endpoints: latency-svc-gp786 [804.238803ms] Apr 24 13:30:51.598: INFO: Created: latency-svc-hrg4z Apr 24 13:30:51.653: INFO: Got endpoints: latency-svc-hrg4z [791.36157ms] Apr 24 13:30:51.657: INFO: Created: latency-svc-8bx7q Apr 24 13:30:51.661: INFO: Got endpoints: latency-svc-8bx7q [748.036249ms] Apr 24 13:30:51.698: INFO: Created: latency-svc-6dh64 Apr 24 13:30:51.710: INFO: Got endpoints: latency-svc-6dh64 [730.369174ms] Apr 24 13:30:51.733: INFO: Created: latency-svc-2kq4b Apr 24 13:30:51.745: INFO: Got endpoints: latency-svc-2kq4b [729.76685ms] Apr 24 13:30:51.785: INFO: Created: latency-svc-4x2j6 Apr 24 13:30:51.788: INFO: Got endpoints: latency-svc-4x2j6 [735.240257ms] Apr 24 13:30:51.818: INFO: Created: latency-svc-bnlfk Apr 24 13:30:51.830: INFO: Got endpoints: latency-svc-bnlfk [713.657915ms] Apr 24 13:30:51.848: INFO: Created: latency-svc-d9x4v Apr 24 13:30:51.866: INFO: Got endpoints: latency-svc-d9x4v [735.106015ms] Apr 24 13:30:51.884: INFO: Created: latency-svc-whb7w Apr 24 13:30:51.941: INFO: Got endpoints: latency-svc-whb7w [773.294802ms] Apr 24 13:30:51.961: INFO: Created: latency-svc-84wcq Apr 24 13:30:51.985: INFO: Got endpoints: latency-svc-84wcq [752.260297ms] Apr 24 13:30:52.015: INFO: Created: latency-svc-6pmwd Apr 24 13:30:52.085: INFO: Got endpoints: latency-svc-6pmwd [796.91644ms] Apr 24 13:30:52.100: 
INFO: Created: latency-svc-8nhzl Apr 24 13:30:52.113: INFO: Got endpoints: latency-svc-8nhzl [787.552366ms] Apr 24 13:30:52.136: INFO: Created: latency-svc-sckrb Apr 24 13:30:52.149: INFO: Got endpoints: latency-svc-sckrb [753.055923ms] Apr 24 13:30:52.171: INFO: Created: latency-svc-dmr4j Apr 24 13:30:52.241: INFO: Got endpoints: latency-svc-dmr4j [794.135575ms] Apr 24 13:30:52.243: INFO: Created: latency-svc-72dhk Apr 24 13:30:52.270: INFO: Got endpoints: latency-svc-72dhk [740.800444ms] Apr 24 13:30:52.292: INFO: Created: latency-svc-dzf6w Apr 24 13:30:52.318: INFO: Got endpoints: latency-svc-dzf6w [759.089578ms] Apr 24 13:30:52.390: INFO: Created: latency-svc-rdlkg Apr 24 13:30:52.392: INFO: Got endpoints: latency-svc-rdlkg [739.065878ms] Apr 24 13:30:52.458: INFO: Created: latency-svc-5d68f Apr 24 13:30:52.474: INFO: Got endpoints: latency-svc-5d68f [812.787884ms] Apr 24 13:30:52.534: INFO: Created: latency-svc-jjmnm Apr 24 13:30:52.537: INFO: Got endpoints: latency-svc-jjmnm [827.09382ms] Apr 24 13:30:52.568: INFO: Created: latency-svc-bl826 Apr 24 13:30:52.582: INFO: Got endpoints: latency-svc-bl826 [836.468159ms] Apr 24 13:30:52.604: INFO: Created: latency-svc-rgp9v Apr 24 13:30:52.619: INFO: Got endpoints: latency-svc-rgp9v [830.894625ms] Apr 24 13:30:52.684: INFO: Created: latency-svc-6s2b9 Apr 24 13:30:52.689: INFO: Got endpoints: latency-svc-6s2b9 [859.282806ms] Apr 24 13:30:52.753: INFO: Created: latency-svc-mpw2f Apr 24 13:30:52.778: INFO: Got endpoints: latency-svc-mpw2f [911.170962ms] Apr 24 13:30:52.860: INFO: Created: latency-svc-t48qh Apr 24 13:30:52.897: INFO: Got endpoints: latency-svc-t48qh [956.378773ms] Apr 24 13:30:52.927: INFO: Created: latency-svc-6m6sv Apr 24 13:30:52.938: INFO: Got endpoints: latency-svc-6m6sv [952.693415ms] Apr 24 13:30:52.995: INFO: Created: latency-svc-wr6j5 Apr 24 13:30:52.998: INFO: Got endpoints: latency-svc-wr6j5 [913.789939ms] Apr 24 13:30:53.024: INFO: Created: latency-svc-c7zns Apr 24 13:30:53.041: INFO: Got 
endpoints: latency-svc-c7zns [928.14966ms] Apr 24 13:30:53.060: INFO: Created: latency-svc-bpzpc Apr 24 13:30:53.071: INFO: Got endpoints: latency-svc-bpzpc [921.223603ms] Apr 24 13:30:53.090: INFO: Created: latency-svc-n9xsr Apr 24 13:30:53.126: INFO: Got endpoints: latency-svc-n9xsr [885.225486ms] Apr 24 13:30:53.137: INFO: Created: latency-svc-5wglp Apr 24 13:30:53.150: INFO: Got endpoints: latency-svc-5wglp [880.075861ms] Apr 24 13:30:53.185: INFO: Created: latency-svc-hh98g Apr 24 13:30:53.198: INFO: Got endpoints: latency-svc-hh98g [880.795642ms] Apr 24 13:30:53.222: INFO: Created: latency-svc-hdbgs Apr 24 13:30:53.270: INFO: Got endpoints: latency-svc-hdbgs [877.627491ms] Apr 24 13:30:53.288: INFO: Created: latency-svc-bdl6f Apr 24 13:30:53.300: INFO: Got endpoints: latency-svc-bdl6f [826.465947ms] Apr 24 13:30:53.325: INFO: Created: latency-svc-49h5t Apr 24 13:30:53.336: INFO: Got endpoints: latency-svc-49h5t [799.366773ms] Apr 24 13:30:53.364: INFO: Created: latency-svc-zng8v Apr 24 13:30:53.408: INFO: Got endpoints: latency-svc-zng8v [825.823655ms] Apr 24 13:30:53.418: INFO: Created: latency-svc-btwj5 Apr 24 13:30:53.433: INFO: Got endpoints: latency-svc-btwj5 [814.390267ms] Apr 24 13:30:53.460: INFO: Created: latency-svc-hgxpz Apr 24 13:30:53.476: INFO: Got endpoints: latency-svc-hgxpz [786.668794ms] Apr 24 13:30:53.540: INFO: Created: latency-svc-dtm4w Apr 24 13:30:53.543: INFO: Got endpoints: latency-svc-dtm4w [764.989356ms] Apr 24 13:30:53.604: INFO: Created: latency-svc-gxzpv Apr 24 13:30:53.701: INFO: Got endpoints: latency-svc-gxzpv [804.012539ms] Apr 24 13:30:53.703: INFO: Created: latency-svc-lqc7c Apr 24 13:30:53.710: INFO: Got endpoints: latency-svc-lqc7c [772.029704ms] Apr 24 13:30:53.738: INFO: Created: latency-svc-5dmjp Apr 24 13:30:53.746: INFO: Got endpoints: latency-svc-5dmjp [747.504747ms] Apr 24 13:30:53.774: INFO: Created: latency-svc-djf6z Apr 24 13:30:53.782: INFO: Got endpoints: latency-svc-djf6z [741.101179ms] Apr 24 13:30:53.834: 
INFO: Created: latency-svc-842dz Apr 24 13:30:53.850: INFO: Got endpoints: latency-svc-842dz [778.915969ms] Apr 24 13:30:53.880: INFO: Created: latency-svc-vcfmd Apr 24 13:30:53.897: INFO: Got endpoints: latency-svc-vcfmd [770.706699ms] Apr 24 13:30:53.918: INFO: Created: latency-svc-pb5v9 Apr 24 13:30:53.932: INFO: Got endpoints: latency-svc-pb5v9 [782.531816ms] Apr 24 13:30:53.983: INFO: Created: latency-svc-2bvf6 Apr 24 13:30:53.986: INFO: Got endpoints: latency-svc-2bvf6 [787.849814ms] Apr 24 13:30:54.018: INFO: Created: latency-svc-dmqct Apr 24 13:30:54.041: INFO: Got endpoints: latency-svc-dmqct [770.619319ms] Apr 24 13:30:54.060: INFO: Created: latency-svc-5xd7w Apr 24 13:30:54.078: INFO: Got endpoints: latency-svc-5xd7w [777.70994ms] Apr 24 13:30:54.127: INFO: Created: latency-svc-lfjz7 Apr 24 13:30:54.131: INFO: Got endpoints: latency-svc-lfjz7 [794.563871ms] Apr 24 13:30:54.158: INFO: Created: latency-svc-5295s Apr 24 13:30:54.174: INFO: Got endpoints: latency-svc-5295s [765.755211ms] Apr 24 13:30:54.203: INFO: Created: latency-svc-lxdfj Apr 24 13:30:54.276: INFO: Got endpoints: latency-svc-lxdfj [843.296697ms] Apr 24 13:30:54.278: INFO: Created: latency-svc-8cczq Apr 24 13:30:54.289: INFO: Got endpoints: latency-svc-8cczq [813.50661ms] Apr 24 13:30:54.350: INFO: Created: latency-svc-fnd92 Apr 24 13:30:54.426: INFO: Got endpoints: latency-svc-fnd92 [883.266312ms] Apr 24 13:30:54.456: INFO: Created: latency-svc-btzjz Apr 24 13:30:54.469: INFO: Got endpoints: latency-svc-btzjz [767.594504ms] Apr 24 13:30:54.492: INFO: Created: latency-svc-sw5n6 Apr 24 13:30:54.504: INFO: Got endpoints: latency-svc-sw5n6 [794.598546ms] Apr 24 13:30:54.570: INFO: Created: latency-svc-5nqqx Apr 24 13:30:54.576: INFO: Got endpoints: latency-svc-5nqqx [830.330423ms] Apr 24 13:30:54.596: INFO: Created: latency-svc-bxnf7 Apr 24 13:30:54.624: INFO: Got endpoints: latency-svc-bxnf7 [842.108824ms] Apr 24 13:30:54.654: INFO: Created: latency-svc-ststf Apr 24 13:30:54.667: INFO: Got 
endpoints: latency-svc-ststf [817.064473ms] Apr 24 13:30:54.715: INFO: Created: latency-svc-nvmhp Apr 24 13:30:54.721: INFO: Got endpoints: latency-svc-nvmhp [824.058948ms] Apr 24 13:30:54.746: INFO: Created: latency-svc-hdmqc Apr 24 13:30:54.770: INFO: Got endpoints: latency-svc-hdmqc [837.490368ms] Apr 24 13:30:54.794: INFO: Created: latency-svc-gj2zb Apr 24 13:30:54.812: INFO: Got endpoints: latency-svc-gj2zb [825.651769ms] Apr 24 13:30:54.863: INFO: Created: latency-svc-6k56v Apr 24 13:30:54.872: INFO: Got endpoints: latency-svc-6k56v [831.114438ms] Apr 24 13:30:54.944: INFO: Created: latency-svc-hwkxf Apr 24 13:30:54.962: INFO: Got endpoints: latency-svc-hwkxf [884.084468ms] Apr 24 13:30:55.043: INFO: Created: latency-svc-4vvjd Apr 24 13:30:55.045: INFO: Got endpoints: latency-svc-4vvjd [914.195315ms] Apr 24 13:30:55.092: INFO: Created: latency-svc-jzwr9 Apr 24 13:30:55.108: INFO: Got endpoints: latency-svc-jzwr9 [934.114543ms] Apr 24 13:30:55.134: INFO: Created: latency-svc-2vd7g Apr 24 13:30:55.162: INFO: Got endpoints: latency-svc-2vd7g [886.10396ms] Apr 24 13:30:55.177: INFO: Created: latency-svc-jdpqp Apr 24 13:30:55.192: INFO: Got endpoints: latency-svc-jdpqp [902.913425ms] Apr 24 13:30:55.244: INFO: Created: latency-svc-h57ks Apr 24 13:30:55.312: INFO: Got endpoints: latency-svc-h57ks [885.658097ms] Apr 24 13:30:55.320: INFO: Created: latency-svc-6zxhr Apr 24 13:30:55.337: INFO: Got endpoints: latency-svc-6zxhr [868.198905ms] Apr 24 13:30:55.369: INFO: Created: latency-svc-kwdzc Apr 24 13:30:55.385: INFO: Got endpoints: latency-svc-kwdzc [881.053334ms] Apr 24 13:30:55.406: INFO: Created: latency-svc-dxc7b Apr 24 13:30:55.462: INFO: Got endpoints: latency-svc-dxc7b [885.453448ms] Apr 24 13:30:55.463: INFO: Created: latency-svc-nmdbb Apr 24 13:30:55.469: INFO: Got endpoints: latency-svc-nmdbb [845.162573ms] Apr 24 13:30:55.488: INFO: Created: latency-svc-tv4sj Apr 24 13:30:55.506: INFO: Got endpoints: latency-svc-tv4sj [839.189317ms] Apr 24 13:30:55.544: 
INFO: Created: latency-svc-zfb4g Apr 24 13:30:55.561: INFO: Got endpoints: latency-svc-zfb4g [839.648712ms] Apr 24 13:30:55.606: INFO: Created: latency-svc-lhr5b Apr 24 13:30:55.609: INFO: Got endpoints: latency-svc-lhr5b [838.655045ms] Apr 24 13:30:55.658: INFO: Created: latency-svc-rc6dz Apr 24 13:30:55.675: INFO: Got endpoints: latency-svc-rc6dz [862.738655ms] Apr 24 13:30:55.699: INFO: Created: latency-svc-zjx4f Apr 24 13:30:55.731: INFO: Got endpoints: latency-svc-zjx4f [859.033452ms] Apr 24 13:30:55.746: INFO: Created: latency-svc-7vl4f Apr 24 13:30:55.759: INFO: Got endpoints: latency-svc-7vl4f [796.638049ms] Apr 24 13:30:55.788: INFO: Created: latency-svc-8h2kq Apr 24 13:30:55.801: INFO: Got endpoints: latency-svc-8h2kq [755.771257ms] Apr 24 13:30:55.819: INFO: Created: latency-svc-57qhv Apr 24 13:30:55.857: INFO: Got endpoints: latency-svc-57qhv [749.300143ms] Apr 24 13:30:55.868: INFO: Created: latency-svc-dknhw Apr 24 13:30:55.880: INFO: Got endpoints: latency-svc-dknhw [716.96043ms] Apr 24 13:30:55.898: INFO: Created: latency-svc-cs9cw Apr 24 13:30:55.926: INFO: Got endpoints: latency-svc-cs9cw [733.612622ms] Apr 24 13:30:55.956: INFO: Created: latency-svc-42xgm Apr 24 13:30:56.024: INFO: Got endpoints: latency-svc-42xgm [712.632098ms] Apr 24 13:30:56.027: INFO: Created: latency-svc-nwsjs Apr 24 13:30:56.036: INFO: Got endpoints: latency-svc-nwsjs [699.126466ms] Apr 24 13:30:56.059: INFO: Created: latency-svc-b8lpj Apr 24 13:30:56.072: INFO: Got endpoints: latency-svc-b8lpj [686.291239ms] Apr 24 13:30:56.096: INFO: Created: latency-svc-d7xnz Apr 24 13:30:56.108: INFO: Got endpoints: latency-svc-d7xnz [646.237555ms] Apr 24 13:30:56.173: INFO: Created: latency-svc-zdm4g Apr 24 13:30:56.176: INFO: Got endpoints: latency-svc-zdm4g [706.417583ms] Apr 24 13:30:56.202: INFO: Created: latency-svc-8k4zz Apr 24 13:30:56.210: INFO: Got endpoints: latency-svc-8k4zz [703.999ms] Apr 24 13:30:56.239: INFO: Created: latency-svc-cl9gj Apr 24 13:30:56.258: INFO: Got 
endpoints: latency-svc-cl9gj [697.454951ms] Apr 24 13:30:56.337: INFO: Created: latency-svc-kwlzx Apr 24 13:30:56.342: INFO: Got endpoints: latency-svc-kwlzx [733.886093ms] Apr 24 13:30:56.400: INFO: Created: latency-svc-6l5pk Apr 24 13:30:56.424: INFO: Got endpoints: latency-svc-6l5pk [749.248704ms] Apr 24 13:30:56.504: INFO: Created: latency-svc-snr9g Apr 24 13:30:56.517: INFO: Got endpoints: latency-svc-snr9g [786.184928ms] Apr 24 13:30:56.545: INFO: Created: latency-svc-55tjg Apr 24 13:30:56.559: INFO: Got endpoints: latency-svc-55tjg [800.259187ms] Apr 24 13:30:56.580: INFO: Created: latency-svc-fvrzb Apr 24 13:30:56.596: INFO: Got endpoints: latency-svc-fvrzb [794.349709ms] Apr 24 13:30:56.642: INFO: Created: latency-svc-tjd9b Apr 24 13:30:56.658: INFO: Got endpoints: latency-svc-tjd9b [800.369393ms] Apr 24 13:30:56.690: INFO: Created: latency-svc-w9fgr Apr 24 13:30:56.704: INFO: Got endpoints: latency-svc-w9fgr [824.292883ms] Apr 24 13:30:56.725: INFO: Created: latency-svc-dqqrl Apr 24 13:30:56.740: INFO: Got endpoints: latency-svc-dqqrl [814.291355ms] Apr 24 13:30:56.803: INFO: Created: latency-svc-bfmrn Apr 24 13:30:56.806: INFO: Got endpoints: latency-svc-bfmrn [781.759178ms] Apr 24 13:30:56.833: INFO: Created: latency-svc-lhgmh Apr 24 13:30:56.862: INFO: Got endpoints: latency-svc-lhgmh [825.68392ms] Apr 24 13:30:56.886: INFO: Created: latency-svc-psd7z Apr 24 13:30:56.898: INFO: Got endpoints: latency-svc-psd7z [825.826016ms] Apr 24 13:30:56.983: INFO: Created: latency-svc-5xxfc Apr 24 13:30:56.993: INFO: Got endpoints: latency-svc-5xxfc [885.084911ms] Apr 24 13:30:57.018: INFO: Created: latency-svc-q9qcr Apr 24 13:30:57.036: INFO: Got endpoints: latency-svc-q9qcr [859.795479ms] Apr 24 13:30:57.061: INFO: Created: latency-svc-q5xt8 Apr 24 13:30:57.072: INFO: Got endpoints: latency-svc-q5xt8 [861.625226ms] Apr 24 13:30:57.133: INFO: Created: latency-svc-6v782 Apr 24 13:30:57.135: INFO: Got endpoints: latency-svc-6v782 [876.722861ms] Apr 24 13:30:57.163: 
INFO: Created: latency-svc-n82p4 Apr 24 13:30:57.181: INFO: Got endpoints: latency-svc-n82p4 [838.006622ms] Apr 24 13:30:57.206: INFO: Created: latency-svc-gfl9n Apr 24 13:30:57.258: INFO: Got endpoints: latency-svc-gfl9n [833.793681ms] Apr 24 13:30:57.258: INFO: Latencies: [94.803083ms 108.045631ms 142.645662ms 185.471129ms 251.041047ms 286.967929ms 335.028391ms 378.575218ms 416.314651ms 451.979179ms 505.811755ms 554.498881ms 593.054899ms 640.165243ms 646.237555ms 655.642605ms 657.457149ms 665.923478ms 672.871293ms 677.310057ms 677.656044ms 678.751976ms 681.993936ms 682.034842ms 682.758727ms 684.945541ms 686.291239ms 687.742462ms 695.481948ms 695.774479ms 695.827348ms 697.454951ms 699.126466ms 700.430049ms 702.492265ms 703.031793ms 703.999ms 706.417583ms 710.452459ms 712.433488ms 712.632098ms 713.657915ms 716.96043ms 717.227115ms 724.750008ms 724.763559ms 729.76685ms 730.084472ms 730.369174ms 733.612622ms 733.886093ms 733.973967ms 734.134577ms 735.106015ms 735.240257ms 738.447824ms 739.065878ms 739.770138ms 740.053201ms 740.800444ms 741.101179ms 744.583715ms 747.504747ms 748.036249ms 749.248704ms 749.300143ms 751.412919ms 751.720304ms 752.260297ms 753.055923ms 755.551675ms 755.771257ms 756.102666ms 759.089578ms 760.874336ms 764.989356ms 765.755211ms 766.557959ms 767.117119ms 767.594504ms 770.619319ms 770.706699ms 770.970841ms 772.029704ms 773.294802ms 773.483708ms 777.70994ms 778.915969ms 781.759178ms 782.436854ms 782.531816ms 785.183352ms 786.184928ms 786.668794ms 786.739254ms 787.20579ms 787.552366ms 787.849814ms 788.578167ms 788.608516ms 791.36157ms 792.375293ms 792.445964ms 792.678437ms 793.366177ms 794.135575ms 794.349709ms 794.563871ms 794.598546ms 796.638049ms 796.673571ms 796.91644ms 799.366773ms 800.259187ms 800.369393ms 802.869844ms 804.012539ms 804.238803ms 812.787884ms 812.995165ms 813.305171ms 813.50661ms 813.81373ms 814.291355ms 814.390267ms 814.527301ms 815.030613ms 817.064473ms 821.582334ms 822.303842ms 824.058948ms 824.292883ms 825.651769ms 
825.68392ms 825.823655ms 825.826016ms 826.465947ms 827.09382ms 827.567887ms 830.330423ms 830.894625ms 831.114438ms 833.399483ms 833.793681ms 836.468159ms 836.842528ms 837.490368ms 838.006622ms 838.655045ms 838.836408ms 839.189317ms 839.244944ms 839.648712ms 842.108824ms 843.296697ms 844.457055ms 844.614853ms 845.162573ms 848.709025ms 852.546501ms 855.535313ms 859.033452ms 859.282806ms 859.795479ms 861.625226ms 862.309121ms 862.738655ms 862.867349ms 866.440241ms 868.198905ms 876.722861ms 877.627491ms 879.133668ms 880.075861ms 880.795642ms 881.053334ms 883.266312ms 884.084468ms 885.084911ms 885.225486ms 885.453448ms 885.658097ms 886.10396ms 894.325339ms 900.728603ms 900.736015ms 902.913425ms 911.170962ms 913.789939ms 914.195315ms 921.223603ms 927.898187ms 928.14966ms 934.114543ms 949.190004ms 952.693415ms 956.378773ms 966.456804ms 972.48138ms 999.499874ms] Apr 24 13:30:57.258: INFO: 50 %ile: 791.36157ms Apr 24 13:30:57.258: INFO: 90 %ile: 885.453448ms Apr 24 13:30:57.258: INFO: 99 %ile: 972.48138ms Apr 24 13:30:57.258: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:30:57.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-5492" for this suite. 
Apr 24 13:31:21.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:31:21.365: INFO: namespace svc-latency-5492 deletion completed in 24.089176308s
• [SLOW TEST:39.078 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:31:21.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Apr 24 13:31:21.422: INFO: Waiting up to 5m0s for pod "client-containers-bd83839e-aa0f-4e3c-8d3c-5fc799ada927" in namespace "containers-2694" to be "success or failure"
Apr 24 13:31:21.425: INFO: Pod "client-containers-bd83839e-aa0f-4e3c-8d3c-5fc799ada927": Phase="Pending", Reason="", readiness=false. Elapsed: 3.753066ms
Apr 24 13:31:23.429: INFO: Pod "client-containers-bd83839e-aa0f-4e3c-8d3c-5fc799ada927": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007706335s
Apr 24 13:31:25.434: INFO: Pod "client-containers-bd83839e-aa0f-4e3c-8d3c-5fc799ada927": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012304839s
STEP: Saw pod success
Apr 24 13:31:25.434: INFO: Pod "client-containers-bd83839e-aa0f-4e3c-8d3c-5fc799ada927" satisfied condition "success or failure"
Apr 24 13:31:25.437: INFO: Trying to get logs from node iruya-worker2 pod client-containers-bd83839e-aa0f-4e3c-8d3c-5fc799ada927 container test-container:
STEP: delete the pod
Apr 24 13:31:25.458: INFO: Waiting for pod client-containers-bd83839e-aa0f-4e3c-8d3c-5fc799ada927 to disappear
Apr 24 13:31:25.467: INFO: Pod client-containers-bd83839e-aa0f-4e3c-8d3c-5fc799ada927 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:31:25.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2694" for this suite.
Apr 24 13:31:31.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:31:31.610: INFO: namespace containers-2694 deletion completed in 6.139099729s
• [SLOW TEST:10.243 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:31:31.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 24 13:31:31.649: INFO: Waiting up to 5m0s for pod "downwardapi-volume-429d6cf4-4cae-4897-958c-ce3fac25efc5" in namespace "projected-3154" to be "success or failure"
Apr 24 13:31:31.670: INFO: Pod "downwardapi-volume-429d6cf4-4cae-4897-958c-ce3fac25efc5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.623351ms
Apr 24 13:31:33.674: INFO: Pod "downwardapi-volume-429d6cf4-4cae-4897-958c-ce3fac25efc5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025056271s
Apr 24 13:31:35.678: INFO: Pod "downwardapi-volume-429d6cf4-4cae-4897-958c-ce3fac25efc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029091165s
STEP: Saw pod success
Apr 24 13:31:35.678: INFO: Pod "downwardapi-volume-429d6cf4-4cae-4897-958c-ce3fac25efc5" satisfied condition "success or failure"
Apr 24 13:31:35.681: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-429d6cf4-4cae-4897-958c-ce3fac25efc5 container client-container:
STEP: delete the pod
Apr 24 13:31:35.726: INFO: Waiting for pod downwardapi-volume-429d6cf4-4cae-4897-958c-ce3fac25efc5 to disappear
Apr 24 13:31:35.817: INFO: Pod downwardapi-volume-429d6cf4-4cae-4897-958c-ce3fac25efc5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:31:35.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3154" for this suite.
Apr 24 13:31:41.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:31:41.956: INFO: namespace projected-3154 deletion completed in 6.135741771s
• [SLOW TEST:10.346 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:31:41.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-l8km
STEP: Creating a pod to test atomic-volume-subpath
Apr 24 13:31:42.094: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-l8km" in namespace "subpath-35" to be "success or failure"
Apr 24 13:31:42.110: INFO: Pod "pod-subpath-test-secret-l8km": Phase="Pending", Reason="", readiness=false. Elapsed: 16.055179ms
Apr 24 13:31:44.114: INFO: Pod "pod-subpath-test-secret-l8km": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020163334s
Apr 24 13:31:46.118: INFO: Pod "pod-subpath-test-secret-l8km": Phase="Running", Reason="", readiness=true. Elapsed: 4.024208119s
Apr 24 13:31:48.123: INFO: Pod "pod-subpath-test-secret-l8km": Phase="Running", Reason="", readiness=true. Elapsed: 6.028651657s
Apr 24 13:31:50.129: INFO: Pod "pod-subpath-test-secret-l8km": Phase="Running", Reason="", readiness=true. Elapsed: 8.034743408s
Apr 24 13:31:52.133: INFO: Pod "pod-subpath-test-secret-l8km": Phase="Running", Reason="", readiness=true. Elapsed: 10.039543929s
Apr 24 13:31:54.138: INFO: Pod "pod-subpath-test-secret-l8km": Phase="Running", Reason="", readiness=true. Elapsed: 12.044356506s
Apr 24 13:31:56.143: INFO: Pod "pod-subpath-test-secret-l8km": Phase="Running", Reason="", readiness=true. Elapsed: 14.048925128s
Apr 24 13:31:58.148: INFO: Pod "pod-subpath-test-secret-l8km": Phase="Running", Reason="", readiness=true. Elapsed: 16.053878984s
Apr 24 13:32:00.152: INFO: Pod "pod-subpath-test-secret-l8km": Phase="Running", Reason="", readiness=true. Elapsed: 18.057888039s
Apr 24 13:32:02.156: INFO: Pod "pod-subpath-test-secret-l8km": Phase="Running", Reason="", readiness=true. Elapsed: 20.062365776s
Apr 24 13:32:04.160: INFO: Pod "pod-subpath-test-secret-l8km": Phase="Running", Reason="", readiness=true. Elapsed: 22.066631521s
Apr 24 13:32:06.165: INFO: Pod "pod-subpath-test-secret-l8km": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.071307818s
STEP: Saw pod success
Apr 24 13:32:06.165: INFO: Pod "pod-subpath-test-secret-l8km" satisfied condition "success or failure"
Apr 24 13:32:06.168: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-l8km container test-container-subpath-secret-l8km:
STEP: delete the pod
Apr 24 13:32:06.191: INFO: Waiting for pod pod-subpath-test-secret-l8km to disappear
Apr 24 13:32:06.194: INFO: Pod pod-subpath-test-secret-l8km no longer exists
STEP: Deleting pod pod-subpath-test-secret-l8km
Apr 24 13:32:06.194: INFO: Deleting pod "pod-subpath-test-secret-l8km" in namespace "subpath-35"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:32:06.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-35" for this suite.
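Every pod run above follows the same pattern: poll the pod's phase roughly every 2 seconds until it reaches a terminal state or the 5-minute timeout expires. A generic sketch of that loop in Python (illustrative only; the real framework does this in Go with the client's polling helpers):

```python
import time

def wait_for_phase(get_phase, want, timeout_s=300.0, interval_s=2.0,
                   clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a phase in `want` or the timeout expires.

    Generic polling sketch; get_phase would read pod.status.phase from the
    API server in a real client.
    """
    deadline = clock() + timeout_s
    while True:
        phase = get_phase()
        if phase in want:
            return phase
        if clock() >= deadline:
            raise TimeoutError(f"timed out waiting for {want}; last phase: {phase}")
        sleep(interval_s)

# Simulated phase sequence mirroring the log: Pending, Pending, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_phase(lambda: next(phases), {"Succeeded", "Failed"},
                        interval_s=0, sleep=lambda s: None)
print(result)  # Succeeded
```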
Apr 24 13:32:12.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:32:12.340: INFO: namespace subpath-35 deletion completed in 6.140553074s
• [SLOW TEST:30.384 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:32:12.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-fa383d91-d6a3-4b4d-a155-840a9e9e6175
STEP: Creating a pod to test consume secrets
Apr 24 13:32:12.464: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-701060c4-844f-4262-9a0a-cfac87714337" in namespace "projected-5490" to be "success or failure"
Apr 24 13:32:12.490: INFO: Pod "pod-projected-secrets-701060c4-844f-4262-9a0a-cfac87714337": Phase="Pending", Reason="", readiness=false. Elapsed: 25.459239ms
Apr 24 13:32:14.494: INFO: Pod "pod-projected-secrets-701060c4-844f-4262-9a0a-cfac87714337": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029672768s
Apr 24 13:32:16.499: INFO: Pod "pod-projected-secrets-701060c4-844f-4262-9a0a-cfac87714337": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034426107s
STEP: Saw pod success
Apr 24 13:32:16.499: INFO: Pod "pod-projected-secrets-701060c4-844f-4262-9a0a-cfac87714337" satisfied condition "success or failure"
Apr 24 13:32:16.502: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-701060c4-844f-4262-9a0a-cfac87714337 container projected-secret-volume-test:
STEP: delete the pod
Apr 24 13:32:16.523: INFO: Waiting for pod pod-projected-secrets-701060c4-844f-4262-9a0a-cfac87714337 to disappear
Apr 24 13:32:16.534: INFO: Pod pod-projected-secrets-701060c4-844f-4262-9a0a-cfac87714337 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:32:16.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5490" for this suite.
Apr 24 13:32:22.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:32:22.673: INFO: namespace projected-5490 deletion completed in 6.136501273s
• [SLOW TEST:10.332 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:32:22.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-0be016b5-ecc1-4e66-af6d-56b173797ef9
STEP: Creating configMap with name cm-test-opt-upd-39e17fbc-76bc-406e-b169-bbb29e208512
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-0be016b5-ecc1-4e66-af6d-56b173797ef9
STEP: Updating configmap cm-test-opt-upd-39e17fbc-76bc-406e-b169-bbb29e208512
STEP: Creating configMap with name cm-test-opt-create-96e19592-014c-4e8d-ab8c-547e26f29728
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:32:30.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-848" for this suite.
Apr 24 13:32:54.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:32:54.948: INFO: namespace configmap-848 deletion completed in 24.106062197s
• [SLOW TEST:32.275 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:32:54.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:32:59.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2610" for this suite.
Apr 24 13:33:37.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:33:37.126: INFO: namespace kubelet-test-2610 deletion completed in 38.087451683s
• [SLOW TEST:42.178 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox command in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:33:37.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Apr 24 13:33:41.200: INFO: Pod pod-hostip-b656dce6-1ac0-417e-879b-7c4a43dcb9fb has hostIP: 172.17.0.5
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:33:41.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6284" for this suite.
Apr 24 13:34:03.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:34:03.310: INFO: namespace pods-6284 deletion completed in 22.105836528s
• [SLOW TEST:26.184 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:34:03.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-b8cf4e8d-b83f-4c1e-9d50-59133d991895
STEP: Creating a pod to test consume secrets
Apr 24 13:34:03.388: INFO: Waiting up to 5m0s for pod "pod-secrets-acc814d4-4d9a-4364-b88b-a45f6b287c23" in namespace "secrets-1538" to be "success or failure"
Apr 24 13:34:03.392: INFO: Pod "pod-secrets-acc814d4-4d9a-4364-b88b-a45f6b287c23": Phase="Pending", Reason="", readiness=false. Elapsed: 3.72608ms
Apr 24 13:34:05.422: INFO: Pod "pod-secrets-acc814d4-4d9a-4364-b88b-a45f6b287c23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034289199s
Apr 24 13:34:07.427: INFO: Pod "pod-secrets-acc814d4-4d9a-4364-b88b-a45f6b287c23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038733163s
STEP: Saw pod success
Apr 24 13:34:07.427: INFO: Pod "pod-secrets-acc814d4-4d9a-4364-b88b-a45f6b287c23" satisfied condition "success or failure"
Apr 24 13:34:07.429: INFO: Trying to get logs from node iruya-worker pod pod-secrets-acc814d4-4d9a-4364-b88b-a45f6b287c23 container secret-volume-test:
STEP: delete the pod
Apr 24 13:34:07.469: INFO: Waiting for pod pod-secrets-acc814d4-4d9a-4364-b88b-a45f6b287c23 to disappear
Apr 24 13:34:07.475: INFO: Pod pod-secrets-acc814d4-4d9a-4364-b88b-a45f6b287c23 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:34:07.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1538" for this suite.
Apr 24 13:34:13.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:34:13.570: INFO: namespace secrets-1538 deletion completed in 6.09229807s
• [SLOW TEST:10.260 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:34:13.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 24 13:34:13.649: INFO: PodSpec: initContainers in spec.initContainers Apr 24 13:35:03.111: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-2467adc3-8a74-4483-a1b0-1b03f354622c", GenerateName:"", Namespace:"init-container-784", SelfLink:"/api/v1/namespaces/init-container-784/pods/pod-init-2467adc3-8a74-4483-a1b0-1b03f354622c", UID:"490d4011-bcfe-4dea-9432-acde1c08c58e", ResourceVersion:"7184110", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63723332053, loc:(*time.Location)(0x7ead8c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"649425758"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-v5x5k", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), 
GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002478ec0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-v5x5k", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", 
Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-v5x5k", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-v5x5k", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002aa6d28), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002952900), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002aa6db0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002aa6dd0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002aa6dd8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002aa6ddc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723332053, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", 
LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723332053, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723332053, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723332053, loc:(*time.Location)(0x7ead8c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.6", PodIP:"10.244.2.43", StartTime:(*v1.Time)(0xc0014a8320), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00258a690)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00258a700)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://c0dc3148ac89afff0d445f14bb3314c4f1f1754b5be7305425b9ce3aba676b0d"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0014a8360), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0014a8340), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:35:03.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-784" for this suite. Apr 24 13:35:25.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:35:25.268: INFO: namespace init-container-784 deletion completed in 22.144928242s • [SLOW TEST:71.698 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:35:25.269: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-9083 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-9083 STEP: Deleting pre-stop pod Apr 24 13:35:38.400: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:35:38.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-9083" for this suite. 
Apr 24 13:36:16.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:36:16.570: INFO: namespace prestop-9083 deletion completed in 38.114454181s • [SLOW TEST:51.302 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:36:16.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Apr 24 13:36:21.209: INFO: Successfully updated pod "annotationupdate2fdf8c12-e6ed-4687-91b8-87de0a4b4bce" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:36:23.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8084" for this suite. 
Apr 24 13:36:45.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:36:45.420: INFO: namespace downward-api-8084 deletion completed in 22.096620471s • [SLOW TEST:28.850 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:36:45.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 24 13:36:45.485: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5ec5d7b8-a3f0-4dd7-bb34-ef742d3242b0" in namespace "projected-5845" to be "success or failure" Apr 24 13:36:45.532: INFO: Pod "downwardapi-volume-5ec5d7b8-a3f0-4dd7-bb34-ef742d3242b0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 46.674736ms Apr 24 13:36:47.539: INFO: Pod "downwardapi-volume-5ec5d7b8-a3f0-4dd7-bb34-ef742d3242b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053740572s Apr 24 13:36:49.544: INFO: Pod "downwardapi-volume-5ec5d7b8-a3f0-4dd7-bb34-ef742d3242b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058450449s STEP: Saw pod success Apr 24 13:36:49.544: INFO: Pod "downwardapi-volume-5ec5d7b8-a3f0-4dd7-bb34-ef742d3242b0" satisfied condition "success or failure" Apr 24 13:36:49.547: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-5ec5d7b8-a3f0-4dd7-bb34-ef742d3242b0 container client-container: STEP: delete the pod Apr 24 13:36:49.569: INFO: Waiting for pod downwardapi-volume-5ec5d7b8-a3f0-4dd7-bb34-ef742d3242b0 to disappear Apr 24 13:36:49.590: INFO: Pod downwardapi-volume-5ec5d7b8-a3f0-4dd7-bb34-ef742d3242b0 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:36:49.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5845" for this suite. 
Apr 24 13:36:55.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:36:55.686: INFO: namespace projected-5845 deletion completed in 6.092468166s • [SLOW TEST:10.264 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:36:55.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-pc42 STEP: Creating a pod to test atomic-volume-subpath Apr 24 13:36:55.779: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-pc42" in namespace "subpath-8963" to be "success or failure" Apr 24 13:36:55.809: INFO: Pod "pod-subpath-test-configmap-pc42": Phase="Pending", Reason="", readiness=false. 
Elapsed: 30.431963ms Apr 24 13:36:57.814: INFO: Pod "pod-subpath-test-configmap-pc42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035027265s Apr 24 13:36:59.819: INFO: Pod "pod-subpath-test-configmap-pc42": Phase="Running", Reason="", readiness=true. Elapsed: 4.039724806s Apr 24 13:37:01.822: INFO: Pod "pod-subpath-test-configmap-pc42": Phase="Running", Reason="", readiness=true. Elapsed: 6.043544609s Apr 24 13:37:03.826: INFO: Pod "pod-subpath-test-configmap-pc42": Phase="Running", Reason="", readiness=true. Elapsed: 8.047515086s Apr 24 13:37:05.831: INFO: Pod "pod-subpath-test-configmap-pc42": Phase="Running", Reason="", readiness=true. Elapsed: 10.052320736s Apr 24 13:37:07.835: INFO: Pod "pod-subpath-test-configmap-pc42": Phase="Running", Reason="", readiness=true. Elapsed: 12.056394128s Apr 24 13:37:09.841: INFO: Pod "pod-subpath-test-configmap-pc42": Phase="Running", Reason="", readiness=true. Elapsed: 14.062177076s Apr 24 13:37:11.846: INFO: Pod "pod-subpath-test-configmap-pc42": Phase="Running", Reason="", readiness=true. Elapsed: 16.066909573s Apr 24 13:37:13.850: INFO: Pod "pod-subpath-test-configmap-pc42": Phase="Running", Reason="", readiness=true. Elapsed: 18.071135388s Apr 24 13:37:15.855: INFO: Pod "pod-subpath-test-configmap-pc42": Phase="Running", Reason="", readiness=true. Elapsed: 20.075765139s Apr 24 13:37:17.859: INFO: Pod "pod-subpath-test-configmap-pc42": Phase="Running", Reason="", readiness=true. Elapsed: 22.079583289s Apr 24 13:37:19.862: INFO: Pod "pod-subpath-test-configmap-pc42": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.083479885s STEP: Saw pod success Apr 24 13:37:19.862: INFO: Pod "pod-subpath-test-configmap-pc42" satisfied condition "success or failure" Apr 24 13:37:19.865: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-pc42 container test-container-subpath-configmap-pc42: STEP: delete the pod Apr 24 13:37:19.905: INFO: Waiting for pod pod-subpath-test-configmap-pc42 to disappear Apr 24 13:37:19.926: INFO: Pod pod-subpath-test-configmap-pc42 no longer exists STEP: Deleting pod pod-subpath-test-configmap-pc42 Apr 24 13:37:19.926: INFO: Deleting pod "pod-subpath-test-configmap-pc42" in namespace "subpath-8963" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:37:19.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8963" for this suite. Apr 24 13:37:25.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:37:26.041: INFO: namespace subpath-8963 deletion completed in 6.109669869s • [SLOW TEST:30.353 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:37:26.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 24 13:37:26.138: INFO: Waiting up to 5m0s for pod "downwardapi-volume-566998c9-ff8c-4955-a85d-2b39e7745200" in namespace "projected-8159" to be "success or failure" Apr 24 13:37:26.142: INFO: Pod "downwardapi-volume-566998c9-ff8c-4955-a85d-2b39e7745200": Phase="Pending", Reason="", readiness=false. Elapsed: 3.28884ms Apr 24 13:37:28.146: INFO: Pod "downwardapi-volume-566998c9-ff8c-4955-a85d-2b39e7745200": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007413169s Apr 24 13:37:30.150: INFO: Pod "downwardapi-volume-566998c9-ff8c-4955-a85d-2b39e7745200": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011963052s STEP: Saw pod success Apr 24 13:37:30.150: INFO: Pod "downwardapi-volume-566998c9-ff8c-4955-a85d-2b39e7745200" satisfied condition "success or failure" Apr 24 13:37:30.154: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-566998c9-ff8c-4955-a85d-2b39e7745200 container client-container: STEP: delete the pod Apr 24 13:37:30.188: INFO: Waiting for pod downwardapi-volume-566998c9-ff8c-4955-a85d-2b39e7745200 to disappear Apr 24 13:37:30.203: INFO: Pod downwardapi-volume-566998c9-ff8c-4955-a85d-2b39e7745200 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:37:30.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8159" for this suite. Apr 24 13:37:36.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:37:36.294: INFO: namespace projected-8159 deletion completed in 6.088649446s • [SLOW TEST:10.253 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:37:36.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in 
namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-1478, will wait for the garbage collector to delete the pods Apr 24 13:37:42.431: INFO: Deleting Job.batch foo took: 6.530193ms Apr 24 13:37:42.731: INFO: Terminating Job.batch foo pods took: 300.255591ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:38:22.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1478" for this suite. Apr 24 13:38:28.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:38:28.340: INFO: namespace job-1478 deletion completed in 6.103218781s • [SLOW TEST:52.046 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:38:28.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 24 13:38:28.402: INFO: Waiting up to 5m0s for pod "downwardapi-volume-94386f9b-aafa-4348-8553-5e5c940120c7" in namespace "downward-api-2638" to be "success or failure" Apr 24 13:38:28.437: INFO: Pod "downwardapi-volume-94386f9b-aafa-4348-8553-5e5c940120c7": Phase="Pending", Reason="", readiness=false. Elapsed: 35.150991ms Apr 24 13:38:30.441: INFO: Pod "downwardapi-volume-94386f9b-aafa-4348-8553-5e5c940120c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03937141s Apr 24 13:38:32.446: INFO: Pod "downwardapi-volume-94386f9b-aafa-4348-8553-5e5c940120c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043952416s STEP: Saw pod success Apr 24 13:38:32.446: INFO: Pod "downwardapi-volume-94386f9b-aafa-4348-8553-5e5c940120c7" satisfied condition "success or failure" Apr 24 13:38:32.449: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-94386f9b-aafa-4348-8553-5e5c940120c7 container client-container: STEP: delete the pod Apr 24 13:38:32.487: INFO: Waiting for pod downwardapi-volume-94386f9b-aafa-4348-8553-5e5c940120c7 to disappear Apr 24 13:38:32.497: INFO: Pod downwardapi-volume-94386f9b-aafa-4348-8553-5e5c940120c7 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:38:32.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2638" for this suite. 
Apr 24 13:38:38.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:38:38.623: INFO: namespace downward-api-2638 deletion completed in 6.122437349s • [SLOW TEST:10.283 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:38:38.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 24 13:38:38.709: INFO: Waiting up to 5m0s for pod "downwardapi-volume-622b636d-281e-45b8-9f62-145c9cc77d32" in namespace "projected-2356" to be "success or failure" Apr 24 13:38:38.713: INFO: Pod "downwardapi-volume-622b636d-281e-45b8-9f62-145c9cc77d32": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.081467ms Apr 24 13:38:40.717: INFO: Pod "downwardapi-volume-622b636d-281e-45b8-9f62-145c9cc77d32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008301888s Apr 24 13:38:42.722: INFO: Pod "downwardapi-volume-622b636d-281e-45b8-9f62-145c9cc77d32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012944124s STEP: Saw pod success Apr 24 13:38:42.722: INFO: Pod "downwardapi-volume-622b636d-281e-45b8-9f62-145c9cc77d32" satisfied condition "success or failure" Apr 24 13:38:42.725: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-622b636d-281e-45b8-9f62-145c9cc77d32 container client-container: STEP: delete the pod Apr 24 13:38:42.853: INFO: Waiting for pod downwardapi-volume-622b636d-281e-45b8-9f62-145c9cc77d32 to disappear Apr 24 13:38:42.862: INFO: Pod downwardapi-volume-622b636d-281e-45b8-9f62-145c9cc77d32 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:38:42.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2356" for this suite. 
Apr 24 13:38:48.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:38:48.975: INFO: namespace projected-2356 deletion completed in 6.10752221s • [SLOW TEST:10.351 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:38:48.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 24 13:38:49.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-23' Apr 24 13:38:51.380: INFO: stderr: "" Apr 24 13:38:51.380: INFO: stdout: 
"pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Apr 24 13:38:56.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-23 -o json' Apr 24 13:38:56.533: INFO: stderr: "" Apr 24 13:38:56.533: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-24T13:38:51Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-23\",\n \"resourceVersion\": \"7184827\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-23/pods/e2e-test-nginx-pod\",\n \"uid\": \"a36e6798-b970-4c23-a154-c3f2670b1827\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-wpnzq\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-wpnzq\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": 
\"default-token-wpnzq\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-24T13:38:51Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-24T13:38:54Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-24T13:38:54Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-24T13:38:51Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://3b6c422df303e2e7736748b91e57b3433df014ee7734a4e1d9e7d2e7c8050bd8\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-24T13:38:53Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.6\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.50\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-24T13:38:51Z\"\n }\n}\n" STEP: replace the image in the pod Apr 24 13:38:56.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-23' Apr 24 13:38:56.798: INFO: stderr: "" Apr 24 13:38:56.798: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Apr 24 13:38:56.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-23' Apr 24 13:39:00.540: INFO: stderr: "" Apr 24 
13:39:00.540: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:39:00.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-23" for this suite.
Apr 24 13:39:06.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:39:06.634: INFO: namespace kubectl-23 deletion completed in 6.091351933s

• [SLOW TEST:17.660 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:39:06.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-6lhs
STEP: Creating a pod to test atomic-volume-subpath
Apr 24 13:39:06.709: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-6lhs" in namespace "subpath-7937" to be "success or failure"
Apr 24 13:39:06.713: INFO: Pod "pod-subpath-test-projected-6lhs": Phase="Pending", Reason="", readiness=false. Elapsed: 3.879604ms
Apr 24 13:39:08.716: INFO: Pod "pod-subpath-test-projected-6lhs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007353873s
Apr 24 13:39:10.720: INFO: Pod "pod-subpath-test-projected-6lhs": Phase="Running", Reason="", readiness=true. Elapsed: 4.011292634s
Apr 24 13:39:12.724: INFO: Pod "pod-subpath-test-projected-6lhs": Phase="Running", Reason="", readiness=true. Elapsed: 6.01493913s
Apr 24 13:39:14.728: INFO: Pod "pod-subpath-test-projected-6lhs": Phase="Running", Reason="", readiness=true. Elapsed: 8.019131599s
Apr 24 13:39:16.732: INFO: Pod "pod-subpath-test-projected-6lhs": Phase="Running", Reason="", readiness=true. Elapsed: 10.023200158s
Apr 24 13:39:18.737: INFO: Pod "pod-subpath-test-projected-6lhs": Phase="Running", Reason="", readiness=true. Elapsed: 12.028016321s
Apr 24 13:39:20.741: INFO: Pod "pod-subpath-test-projected-6lhs": Phase="Running", Reason="", readiness=true. Elapsed: 14.032700899s
Apr 24 13:39:22.746: INFO: Pod "pod-subpath-test-projected-6lhs": Phase="Running", Reason="", readiness=true. Elapsed: 16.037284634s
Apr 24 13:39:24.751: INFO: Pod "pod-subpath-test-projected-6lhs": Phase="Running", Reason="", readiness=true. Elapsed: 18.041840503s
Apr 24 13:39:26.755: INFO: Pod "pod-subpath-test-projected-6lhs": Phase="Running", Reason="", readiness=true. Elapsed: 20.046383574s
Apr 24 13:39:28.759: INFO: Pod "pod-subpath-test-projected-6lhs": Phase="Running", Reason="", readiness=true. Elapsed: 22.050719283s
Apr 24 13:39:30.763: INFO: Pod "pod-subpath-test-projected-6lhs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.054227385s
STEP: Saw pod success
Apr 24 13:39:30.763: INFO: Pod "pod-subpath-test-projected-6lhs" satisfied condition "success or failure"
Apr 24 13:39:30.765: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-projected-6lhs container test-container-subpath-projected-6lhs:
STEP: delete the pod
Apr 24 13:39:30.782: INFO: Waiting for pod pod-subpath-test-projected-6lhs to disappear
Apr 24 13:39:30.785: INFO: Pod pod-subpath-test-projected-6lhs no longer exists
STEP: Deleting pod pod-subpath-test-projected-6lhs
Apr 24 13:39:30.785: INFO: Deleting pod "pod-subpath-test-projected-6lhs" in namespace "subpath-7937"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:39:30.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7937" for this suite.
Apr 24 13:39:36.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:39:36.880: INFO: namespace subpath-7937 deletion completed in 6.089751971s

• [SLOW TEST:30.245 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:39:36.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-04a57720-4676-4d6f-8292-a039c2958f5c
STEP: Creating secret with name secret-projected-all-test-volume-52d90a41-d262-47d5-bb21-d7469d197b4c
STEP: Creating a pod to test Check all projections for projected volume plugin
Apr 24 13:39:36.956: INFO: Waiting up to 5m0s for pod "projected-volume-f83c21ba-5956-4645-aaa4-274b7300c702" in namespace "projected-5555" to be "success or failure"
Apr 24 13:39:36.959: INFO: Pod "projected-volume-f83c21ba-5956-4645-aaa4-274b7300c702": Phase="Pending", Reason="", readiness=false. Elapsed: 3.248376ms
Apr 24 13:39:38.964: INFO: Pod "projected-volume-f83c21ba-5956-4645-aaa4-274b7300c702": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008153512s
Apr 24 13:39:40.969: INFO: Pod "projected-volume-f83c21ba-5956-4645-aaa4-274b7300c702": Phase="Running", Reason="", readiness=true. Elapsed: 4.012535729s
Apr 24 13:39:42.973: INFO: Pod "projected-volume-f83c21ba-5956-4645-aaa4-274b7300c702": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016642483s
STEP: Saw pod success
Apr 24 13:39:42.973: INFO: Pod "projected-volume-f83c21ba-5956-4645-aaa4-274b7300c702" satisfied condition "success or failure"
Apr 24 13:39:42.976: INFO: Trying to get logs from node iruya-worker pod projected-volume-f83c21ba-5956-4645-aaa4-274b7300c702 container projected-all-volume-test:
STEP: delete the pod
Apr 24 13:39:42.996: INFO: Waiting for pod projected-volume-f83c21ba-5956-4645-aaa4-274b7300c702 to disappear
Apr 24 13:39:43.014: INFO: Pod projected-volume-f83c21ba-5956-4645-aaa4-274b7300c702 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:39:43.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5555" for this suite.
Apr 24 13:39:49.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:39:49.114: INFO: namespace projected-5555 deletion completed in 6.096242192s

• [SLOW TEST:12.234 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:39:49.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2513.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2513.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2513.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2513.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 24 13:39:55.219: INFO: DNS probes using dns-test-164d0194-6502-4812-ba7b-07d61e5fdeb0 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2513.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2513.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2513.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2513.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 24 13:40:01.343: INFO: File wheezy_udp@dns-test-service-3.dns-2513.svc.cluster.local from pod dns-2513/dns-test-9718005c-9922-45ad-b970-2021edd5d472 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 24 13:40:01.347: INFO: File jessie_udp@dns-test-service-3.dns-2513.svc.cluster.local from pod dns-2513/dns-test-9718005c-9922-45ad-b970-2021edd5d472 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 24 13:40:01.347: INFO: Lookups using dns-2513/dns-test-9718005c-9922-45ad-b970-2021edd5d472 failed for: [wheezy_udp@dns-test-service-3.dns-2513.svc.cluster.local jessie_udp@dns-test-service-3.dns-2513.svc.cluster.local]
Apr 24 13:40:06.351: INFO: File wheezy_udp@dns-test-service-3.dns-2513.svc.cluster.local from pod dns-2513/dns-test-9718005c-9922-45ad-b970-2021edd5d472 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 24 13:40:06.355: INFO: File jessie_udp@dns-test-service-3.dns-2513.svc.cluster.local from pod dns-2513/dns-test-9718005c-9922-45ad-b970-2021edd5d472 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 24 13:40:06.355: INFO: Lookups using dns-2513/dns-test-9718005c-9922-45ad-b970-2021edd5d472 failed for: [wheezy_udp@dns-test-service-3.dns-2513.svc.cluster.local jessie_udp@dns-test-service-3.dns-2513.svc.cluster.local]
Apr 24 13:40:11.352: INFO: File wheezy_udp@dns-test-service-3.dns-2513.svc.cluster.local from pod dns-2513/dns-test-9718005c-9922-45ad-b970-2021edd5d472 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 24 13:40:11.355: INFO: File jessie_udp@dns-test-service-3.dns-2513.svc.cluster.local from pod dns-2513/dns-test-9718005c-9922-45ad-b970-2021edd5d472 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 24 13:40:11.356: INFO: Lookups using dns-2513/dns-test-9718005c-9922-45ad-b970-2021edd5d472 failed for: [wheezy_udp@dns-test-service-3.dns-2513.svc.cluster.local jessie_udp@dns-test-service-3.dns-2513.svc.cluster.local]
Apr 24 13:40:16.353: INFO: File wheezy_udp@dns-test-service-3.dns-2513.svc.cluster.local from pod dns-2513/dns-test-9718005c-9922-45ad-b970-2021edd5d472 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 24 13:40:16.357: INFO: File jessie_udp@dns-test-service-3.dns-2513.svc.cluster.local from pod dns-2513/dns-test-9718005c-9922-45ad-b970-2021edd5d472 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 24 13:40:16.357: INFO: Lookups using dns-2513/dns-test-9718005c-9922-45ad-b970-2021edd5d472 failed for: [wheezy_udp@dns-test-service-3.dns-2513.svc.cluster.local jessie_udp@dns-test-service-3.dns-2513.svc.cluster.local]
Apr 24 13:40:21.352: INFO: File wheezy_udp@dns-test-service-3.dns-2513.svc.cluster.local from pod dns-2513/dns-test-9718005c-9922-45ad-b970-2021edd5d472 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 24 13:40:21.356: INFO: File jessie_udp@dns-test-service-3.dns-2513.svc.cluster.local from pod dns-2513/dns-test-9718005c-9922-45ad-b970-2021edd5d472 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 24 13:40:21.356: INFO: Lookups using dns-2513/dns-test-9718005c-9922-45ad-b970-2021edd5d472 failed for: [wheezy_udp@dns-test-service-3.dns-2513.svc.cluster.local jessie_udp@dns-test-service-3.dns-2513.svc.cluster.local]
Apr 24 13:40:26.355: INFO: DNS probes using dns-test-9718005c-9922-45ad-b970-2021edd5d472 succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2513.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2513.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2513.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-2513.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 24 13:40:32.732: INFO: DNS probes using dns-test-4e626184-de0c-4aa1-8a52-c1e6abcf2423 succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:40:32.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2513" for this suite.
Apr 24 13:40:38.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:40:38.954: INFO: namespace dns-2513 deletion completed in 6.134578399s

• [SLOW TEST:49.839 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:40:38.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:41:39.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7025" for this suite.
Apr 24 13:42:01.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:42:01.195: INFO: namespace container-probe-7025 deletion completed in 22.092040735s

• [SLOW TEST:82.241 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:42:01.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-ec648cfc-5400-44c0-a098-2c753843aa17
STEP: Creating a pod to test consume secrets
Apr 24 13:42:01.339: INFO: Waiting up to 5m0s for pod "pod-secrets-4569858f-ce71-4dd7-81ba-913fe03a22ea" in namespace "secrets-25" to be "success or failure"
Apr 24 13:42:01.357: INFO: Pod "pod-secrets-4569858f-ce71-4dd7-81ba-913fe03a22ea": Phase="Pending", Reason="", readiness=false. Elapsed: 18.717201ms
Apr 24 13:42:03.361: INFO: Pod "pod-secrets-4569858f-ce71-4dd7-81ba-913fe03a22ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022834036s
Apr 24 13:42:05.365: INFO: Pod "pod-secrets-4569858f-ce71-4dd7-81ba-913fe03a22ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026281455s
STEP: Saw pod success
Apr 24 13:42:05.365: INFO: Pod "pod-secrets-4569858f-ce71-4dd7-81ba-913fe03a22ea" satisfied condition "success or failure"
Apr 24 13:42:05.367: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-4569858f-ce71-4dd7-81ba-913fe03a22ea container secret-volume-test:
STEP: delete the pod
Apr 24 13:42:05.382: INFO: Waiting for pod pod-secrets-4569858f-ce71-4dd7-81ba-913fe03a22ea to disappear
Apr 24 13:42:05.400: INFO: Pod pod-secrets-4569858f-ce71-4dd7-81ba-913fe03a22ea no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:42:05.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-25" for this suite.
Apr 24 13:42:11.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:42:11.519: INFO: namespace secrets-25 deletion completed in 6.115856024s
STEP: Destroying namespace "secret-namespace-3236" for this suite.
Apr 24 13:42:17.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:42:17.636: INFO: namespace secret-namespace-3236 deletion completed in 6.116892447s

• [SLOW TEST:16.441 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:42:17.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 24 13:42:17.692: INFO: Waiting up to 5m0s for pod "pod-cf17ded1-a2d5-43db-aebe-fc533564163d" in namespace "emptydir-2803" to be "success or failure"
Apr 24 13:42:17.729: INFO: Pod "pod-cf17ded1-a2d5-43db-aebe-fc533564163d": Phase="Pending", Reason="", readiness=false. Elapsed: 36.754553ms
Apr 24 13:42:19.733: INFO: Pod "pod-cf17ded1-a2d5-43db-aebe-fc533564163d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040808045s
Apr 24 13:42:21.736: INFO: Pod "pod-cf17ded1-a2d5-43db-aebe-fc533564163d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043902402s
STEP: Saw pod success
Apr 24 13:42:21.736: INFO: Pod "pod-cf17ded1-a2d5-43db-aebe-fc533564163d" satisfied condition "success or failure"
Apr 24 13:42:21.739: INFO: Trying to get logs from node iruya-worker pod pod-cf17ded1-a2d5-43db-aebe-fc533564163d container test-container:
STEP: delete the pod
Apr 24 13:42:21.765: INFO: Waiting for pod pod-cf17ded1-a2d5-43db-aebe-fc533564163d to disappear
Apr 24 13:42:21.776: INFO: Pod pod-cf17ded1-a2d5-43db-aebe-fc533564163d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:42:21.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2803" for this suite.
Apr 24 13:42:27.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:42:27.915: INFO: namespace emptydir-2803 deletion completed in 6.136363689s

• [SLOW TEST:10.279 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:42:27.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Apr 24 13:42:28.501: INFO: created pod pod-service-account-defaultsa
Apr 24 13:42:28.501: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Apr 24 13:42:28.508: INFO: created pod pod-service-account-mountsa
Apr 24 13:42:28.508: INFO: pod pod-service-account-mountsa service account token volume mount: true
Apr 24 13:42:28.531: INFO: created pod pod-service-account-nomountsa
Apr 24 13:42:28.531: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Apr 24 13:42:28.599: INFO: created pod pod-service-account-defaultsa-mountspec
Apr 24 13:42:28.599: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Apr 24 13:42:28.616: INFO: created pod pod-service-account-mountsa-mountspec
Apr 24 13:42:28.617: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Apr 24 13:42:28.659: INFO: created pod pod-service-account-nomountsa-mountspec
Apr 24 13:42:28.659: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Apr 24 13:42:28.724: INFO: created pod pod-service-account-defaultsa-nomountspec
Apr 24 13:42:28.724: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Apr 24 13:42:28.729: INFO: created pod pod-service-account-mountsa-nomountspec
Apr 24 13:42:28.729: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Apr 24 13:42:28.756: INFO: created pod pod-service-account-nomountsa-nomountspec
Apr 24 13:42:28.756: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:42:28.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-94" for this suite.
Apr 24 13:42:54.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:42:54.946: INFO: namespace svcaccounts-94 deletion completed in 26.131944156s

• [SLOW TEST:27.030 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:42:54.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Apr 24 13:42:55.007: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:43:12.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2121" for this suite.
Apr 24 13:43:18.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:43:18.830: INFO: namespace pods-2121 deletion completed in 6.629714146s

• [SLOW TEST:23.884 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:43:18.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: Gathering metrics
W0424 13:43:19.916177       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 24 13:43:19.916: INFO: For apiserver_request_total:
	For apiserver_request_latencies_summary:
	For apiserver_init_events_total:
	For garbage_collector_attempt_to_delete_queue_latency:
	For garbage_collector_attempt_to_delete_work_duration:
	For garbage_collector_attempt_to_orphan_queue_latency:
	For garbage_collector_attempt_to_orphan_work_duration:
	For garbage_collector_dirty_processing_latency_microseconds:
	For garbage_collector_event_processing_latency_microseconds:
	For garbage_collector_graph_changes_queue_latency:
	For garbage_collector_graph_changes_work_duration:
	For garbage_collector_orphan_processing_latency_microseconds:
	For namespace_queue_latency:
	For namespace_queue_latency_sum:
	For namespace_queue_latency_count:
	For namespace_retries:
	For namespace_work_duration:
	For namespace_work_duration_sum:
	For namespace_work_duration_count:
	For function_duration_seconds:
	For errors_total:
	For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:43:19.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9334" for this suite.
Apr 24 13:43:25.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:43:26.065: INFO: namespace gc-9334 deletion completed in 6.14600036s

• [SLOW TEST:7.235 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:43:26.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 24 13:43:26.158: INFO: Waiting up to 5m0s for pod "pod-ec343425-0fd8-4607-a576-a69c59007e9b" in namespace "emptydir-88" to be "success or failure"
Apr 24 13:43:26.161: INFO: Pod "pod-ec343425-0fd8-4607-a576-a69c59007e9b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.215623ms
Apr 24 13:43:28.165: INFO: Pod "pod-ec343425-0fd8-4607-a576-a69c59007e9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007134484s
Apr 24 13:43:30.169: INFO: Pod "pod-ec343425-0fd8-4607-a576-a69c59007e9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011097033s
STEP: Saw pod success
Apr 24 13:43:30.169: INFO: Pod "pod-ec343425-0fd8-4607-a576-a69c59007e9b" satisfied condition "success or failure"
Apr 24 13:43:30.172: INFO: Trying to get logs from node iruya-worker2 pod pod-ec343425-0fd8-4607-a576-a69c59007e9b container test-container:
STEP: delete the pod
Apr 24 13:43:30.192: INFO: Waiting for pod pod-ec343425-0fd8-4607-a576-a69c59007e9b to disappear
Apr 24 13:43:30.197: INFO: Pod pod-ec343425-0fd8-4607-a576-a69c59007e9b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:43:30.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-88" for this suite.
Apr 24 13:43:36.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 13:43:36.327: INFO: namespace emptydir-88 deletion completed in 6.127708049s

• [SLOW TEST:10.261 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 13:43:36.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-a4421122-f69d-4983-be06-3ab28318b079
STEP: Creating a pod to test consume secrets
Apr 24 13:43:36.397: INFO: Waiting up to 5m0s for pod "pod-secrets-7893c918-35d1-4274-bde8-4f71ed4e5a19" in namespace "secrets-4508" to be "success or failure"
Apr 24 13:43:36.402: INFO: Pod "pod-secrets-7893c918-35d1-4274-bde8-4f71ed4e5a19": Phase="Pending", Reason="", readiness=false. Elapsed: 4.701341ms
Apr 24 13:43:38.406: INFO: Pod "pod-secrets-7893c918-35d1-4274-bde8-4f71ed4e5a19": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009384539s
Apr 24 13:43:40.628: INFO: Pod "pod-secrets-7893c918-35d1-4274-bde8-4f71ed4e5a19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.231408676s
STEP: Saw pod success
Apr 24 13:43:40.628: INFO: Pod "pod-secrets-7893c918-35d1-4274-bde8-4f71ed4e5a19" satisfied condition "success or failure"
Apr 24 13:43:40.632: INFO: Trying to get logs from node iruya-worker pod pod-secrets-7893c918-35d1-4274-bde8-4f71ed4e5a19 container secret-env-test:
STEP: delete the pod
Apr 24 13:43:40.655: INFO: Waiting for pod pod-secrets-7893c918-35d1-4274-bde8-4f71ed4e5a19 to disappear
Apr 24 13:43:40.658: INFO: Pod pod-secrets-7893c918-35d1-4274-bde8-4f71ed4e5a19 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 13:43:40.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4508" for this suite.
Apr 24 13:43:46.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:43:46.755: INFO: namespace secrets-4508 deletion completed in 6.093260531s • [SLOW TEST:10.427 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:43:46.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Apr 24 13:43:46.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Apr 24 13:43:46.978: INFO: stderr: "" Apr 24 13:43:46.978: INFO: stdout: 
"admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:43:46.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1856" for this suite. 
Apr 24 13:43:52.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:43:53.100: INFO: namespace kubectl-1856 deletion completed in 6.116807547s • [SLOW TEST:6.345 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:43:53.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-756d1743-32ac-4ae4-9693-936648a3e64c STEP: Creating a pod to test consume secrets Apr 24 13:43:53.211: INFO: Waiting up to 5m0s for pod "pod-secrets-28cb0f9a-3386-4c61-9963-ea66d225a98c" in namespace "secrets-5724" to be "success or failure" Apr 24 13:43:53.221: INFO: Pod "pod-secrets-28cb0f9a-3386-4c61-9963-ea66d225a98c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.671436ms Apr 24 13:43:55.257: INFO: Pod "pod-secrets-28cb0f9a-3386-4c61-9963-ea66d225a98c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045898508s Apr 24 13:43:57.261: INFO: Pod "pod-secrets-28cb0f9a-3386-4c61-9963-ea66d225a98c": Phase="Running", Reason="", readiness=true. Elapsed: 4.050012512s Apr 24 13:43:59.266: INFO: Pod "pod-secrets-28cb0f9a-3386-4c61-9963-ea66d225a98c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.054489248s STEP: Saw pod success Apr 24 13:43:59.266: INFO: Pod "pod-secrets-28cb0f9a-3386-4c61-9963-ea66d225a98c" satisfied condition "success or failure" Apr 24 13:43:59.269: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-28cb0f9a-3386-4c61-9963-ea66d225a98c container secret-volume-test: STEP: delete the pod Apr 24 13:43:59.288: INFO: Waiting for pod pod-secrets-28cb0f9a-3386-4c61-9963-ea66d225a98c to disappear Apr 24 13:43:59.306: INFO: Pod pod-secrets-28cb0f9a-3386-4c61-9963-ea66d225a98c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:43:59.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5724" for this suite. 
Apr 24 13:44:05.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:44:05.415: INFO: namespace secrets-5724 deletion completed in 6.106174044s • [SLOW TEST:12.315 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:44:05.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Apr 24 13:44:05.481: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. 
Apr 24 13:44:05.892: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Apr 24 13:44:08.042: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723332645, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723332645, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723332645, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723332645, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 24 13:44:10.783: INFO: Waited 728.454658ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:44:11.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-6141" for this suite. 
Apr 24 13:44:17.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:44:17.448: INFO: namespace aggregator-6141 deletion completed in 6.229912183s • [SLOW TEST:12.032 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:44:17.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:44:17.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3531" for this suite. 
Apr 24 13:44:23.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:44:23.705: INFO: namespace kubelet-test-3531 deletion completed in 6.099743016s • [SLOW TEST:6.257 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:44:23.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-716a225e-d05c-46a4-ab66-352e0771df42 in namespace container-probe-2024 Apr 24 13:44:27.825: INFO: Started pod test-webserver-716a225e-d05c-46a4-ab66-352e0771df42 in namespace container-probe-2024 STEP: checking the pod's current state and verifying that restartCount is present Apr 24 
13:44:27.828: INFO: Initial restart count of pod test-webserver-716a225e-d05c-46a4-ab66-352e0771df42 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:48:28.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2024" for this suite. Apr 24 13:48:34.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:48:34.946: INFO: namespace container-probe-2024 deletion completed in 6.139615331s • [SLOW TEST:251.240 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:48:34.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 24 13:48:35.015: INFO: Pod name rollover-pod: Found 0 pods out of 1 Apr 24 13:48:40.020: INFO: Pod name rollover-pod: Found 1 pods 
out of 1 STEP: ensuring each pod is running Apr 24 13:48:40.020: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Apr 24 13:48:42.024: INFO: Creating deployment "test-rollover-deployment" Apr 24 13:48:42.052: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Apr 24 13:48:44.059: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Apr 24 13:48:44.066: INFO: Ensure that both replica sets have 1 created replica Apr 24 13:48:44.071: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 24 13:48:44.078: INFO: Updating deployment test-rollover-deployment Apr 24 13:48:44.078: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 24 13:48:46.094: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 24 13:48:46.099: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 24 13:48:46.105: INFO: all replica sets need to contain the pod-template-hash label Apr 24 13:48:46.105: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723332922, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723332922, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723332924, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723332922, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" 
is progressing."}}, CollisionCount:(*int32)(nil)} Apr 24 13:48:48.114: INFO: all replica sets need to contain the pod-template-hash label Apr 24 13:48:48.114: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723332922, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723332922, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723332927, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723332922, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 24 13:48:50.113: INFO: all replica sets need to contain the pod-template-hash label Apr 24 13:48:50.113: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723332922, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723332922, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723332927, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723332922, 
loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 24 13:48:52.114: INFO: all replica sets need to contain the pod-template-hash label Apr 24 13:48:52.114: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723332922, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723332922, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723332927, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723332922, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 24 13:48:54.113: INFO: all replica sets need to contain the pod-template-hash label Apr 24 13:48:54.113: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723332922, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723332922, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723332927, 
loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723332922, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 24 13:48:56.113: INFO: all replica sets need to contain the pod-template-hash label Apr 24 13:48:56.114: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723332922, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723332922, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723332927, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723332922, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 24 13:48:58.112: INFO: Apr 24 13:48:58.113: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 24 13:48:58.121: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-8095,SelfLink:/apis/apps/v1/namespaces/deployment-8095/deployments/test-rollover-deployment,UID:2f89dde6-39ef-432d-8efe-c30d8247162e,ResourceVersion:7186714,Generation:2,CreationTimestamp:2020-04-24 13:48:42 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-04-24 13:48:42 +0000 UTC 2020-04-24 13:48:42 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-04-24 13:48:57 +0000 UTC 2020-04-24 13:48:42 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Apr 24 13:48:58.124: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-8095,SelfLink:/apis/apps/v1/namespaces/deployment-8095/replicasets/test-rollover-deployment-854595fc44,UID:5f001ede-099b-4b0b-9668-107a51701f01,ResourceVersion:7186703,Generation:2,CreationTimestamp:2020-04-24 13:48:44 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2f89dde6-39ef-432d-8efe-c30d8247162e 0xc001b555a7 0xc001b555a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 24 13:48:58.124: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 24 13:48:58.125: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-8095,SelfLink:/apis/apps/v1/namespaces/deployment-8095/replicasets/test-rollover-controller,UID:b651967b-2fb8-481f-997e-7f24d656213a,ResourceVersion:7186713,Generation:2,CreationTimestamp:2020-04-24 13:48:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2f89dde6-39ef-432d-8efe-c30d8247162e 0xc001b553ff 0xc001b55420}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 24 13:48:58.125: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-8095,SelfLink:/apis/apps/v1/namespaces/deployment-8095/replicasets/test-rollover-deployment-9b8b997cf,UID:10a85c24-3d5d-44ee-8ee1-090fb795dbc1,ResourceVersion:7186667,Generation:2,CreationTimestamp:2020-04-24 13:48:42 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2f89dde6-39ef-432d-8efe-c30d8247162e 0xc001b55780 0xc001b55781}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 24 13:48:58.128: INFO: Pod "test-rollover-deployment-854595fc44-hm9rw" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-hm9rw,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-8095,SelfLink:/api/v1/namespaces/deployment-8095/pods/test-rollover-deployment-854595fc44-hm9rw,UID:1e045b3c-f09b-4e48-9239-09e6f94326d5,ResourceVersion:7186681,Generation:0,CreationTimestamp:2020-04-24 13:48:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 5f001ede-099b-4b0b-9668-107a51701f01 0xc003194697 0xc003194698}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-x4zk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x4zk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-x4zk2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003194710} {node.kubernetes.io/unreachable Exists NoExecute 0xc003194730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:48:44 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:48:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:48:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 13:48:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.107,StartTime:2020-04-24 13:48:44 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-04-24 13:48:46 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://27b18f5407e262033b92b08e4d103fa3fcf54136659b33ae059c4b83055fcc52}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:48:58.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8095" for this suite. Apr 24 13:49:04.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:49:04.208: INFO: namespace deployment-8095 deletion completed in 6.076842205s • [SLOW TEST:29.262 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:49:04.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 24 13:49:04.268: INFO: Waiting up to 5m0s for pod "pod-57a966aa-089c-4281-9a84-198be3fefb1e" in namespace "emptydir-558" to be 
"success or failure" Apr 24 13:49:04.273: INFO: Pod "pod-57a966aa-089c-4281-9a84-198be3fefb1e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.447986ms Apr 24 13:49:06.277: INFO: Pod "pod-57a966aa-089c-4281-9a84-198be3fefb1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009043224s Apr 24 13:49:08.282: INFO: Pod "pod-57a966aa-089c-4281-9a84-198be3fefb1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013409564s STEP: Saw pod success Apr 24 13:49:08.282: INFO: Pod "pod-57a966aa-089c-4281-9a84-198be3fefb1e" satisfied condition "success or failure" Apr 24 13:49:08.285: INFO: Trying to get logs from node iruya-worker pod pod-57a966aa-089c-4281-9a84-198be3fefb1e container test-container: STEP: delete the pod Apr 24 13:49:08.304: INFO: Waiting for pod pod-57a966aa-089c-4281-9a84-198be3fefb1e to disappear Apr 24 13:49:08.309: INFO: Pod pod-57a966aa-089c-4281-9a84-198be3fefb1e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:49:08.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-558" for this suite. 
Apr 24 13:49:14.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:49:14.419: INFO: namespace emptydir-558 deletion completed in 6.10712598s • [SLOW TEST:10.210 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:49:14.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 24 13:49:14.568: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:49:14.570: INFO: Number of nodes with available pods: 0 Apr 24 13:49:14.570: INFO: Node iruya-worker is running more than one daemon pod Apr 24 13:49:15.575: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:49:15.578: INFO: Number of nodes with available pods: 0 Apr 24 13:49:15.578: INFO: Node iruya-worker is running more than one daemon pod Apr 24 13:49:16.575: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:49:16.579: INFO: Number of nodes with available pods: 0 Apr 24 13:49:16.579: INFO: Node iruya-worker is running more than one daemon pod Apr 24 13:49:17.575: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:49:17.588: INFO: Number of nodes with available pods: 1 Apr 24 13:49:17.588: INFO: Node iruya-worker is running more than one daemon pod Apr 24 13:49:18.575: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:49:18.579: INFO: Number of nodes with available pods: 2 Apr 24 13:49:18.579: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Apr 24 13:49:18.612: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:49:18.622: INFO: Number of nodes with available pods: 1 Apr 24 13:49:18.622: INFO: Node iruya-worker is running more than one daemon pod Apr 24 13:49:19.628: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:49:19.631: INFO: Number of nodes with available pods: 1 Apr 24 13:49:19.631: INFO: Node iruya-worker is running more than one daemon pod Apr 24 13:49:20.627: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:49:20.630: INFO: Number of nodes with available pods: 1 Apr 24 13:49:20.630: INFO: Node iruya-worker is running more than one daemon pod Apr 24 13:49:21.628: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:49:21.632: INFO: Number of nodes with available pods: 1 Apr 24 13:49:21.632: INFO: Node iruya-worker is running more than one daemon pod Apr 24 13:49:22.626: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:49:22.629: INFO: Number of nodes with available pods: 1 Apr 24 13:49:22.629: INFO: Node iruya-worker is running more than one daemon pod Apr 24 13:49:23.628: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:49:23.631: INFO: Number of nodes with available pods: 1 Apr 24 13:49:23.631: INFO: Node 
iruya-worker is running more than one daemon pod Apr 24 13:49:24.628: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:49:24.632: INFO: Number of nodes with available pods: 1 Apr 24 13:49:24.632: INFO: Node iruya-worker is running more than one daemon pod Apr 24 13:49:25.626: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:49:25.630: INFO: Number of nodes with available pods: 1 Apr 24 13:49:25.630: INFO: Node iruya-worker is running more than one daemon pod Apr 24 13:49:26.628: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:49:26.631: INFO: Number of nodes with available pods: 1 Apr 24 13:49:26.631: INFO: Node iruya-worker is running more than one daemon pod Apr 24 13:49:27.627: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:49:27.631: INFO: Number of nodes with available pods: 1 Apr 24 13:49:27.631: INFO: Node iruya-worker is running more than one daemon pod Apr 24 13:49:28.627: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:49:28.631: INFO: Number of nodes with available pods: 1 Apr 24 13:49:28.631: INFO: Node iruya-worker is running more than one daemon pod Apr 24 13:49:29.628: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:49:29.631: INFO: Number of nodes with 
available pods: 1 Apr 24 13:49:29.631: INFO: Node iruya-worker is running more than one daemon pod Apr 24 13:49:30.627: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:49:30.631: INFO: Number of nodes with available pods: 1 Apr 24 13:49:30.631: INFO: Node iruya-worker is running more than one daemon pod Apr 24 13:49:31.626: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:49:31.629: INFO: Number of nodes with available pods: 1 Apr 24 13:49:31.629: INFO: Node iruya-worker is running more than one daemon pod Apr 24 13:49:32.628: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:49:32.631: INFO: Number of nodes with available pods: 1 Apr 24 13:49:32.631: INFO: Node iruya-worker is running more than one daemon pod Apr 24 13:49:33.651: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:49:33.654: INFO: Number of nodes with available pods: 1 Apr 24 13:49:33.654: INFO: Node iruya-worker is running more than one daemon pod Apr 24 13:49:34.627: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 13:49:34.631: INFO: Number of nodes with available pods: 1 Apr 24 13:49:34.631: INFO: Node iruya-worker is running more than one daemon pod Apr 24 13:49:35.628: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node 
Apr 24 13:49:35.631: INFO: Number of nodes with available pods: 2 Apr 24 13:49:35.631: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1135, will wait for the garbage collector to delete the pods Apr 24 13:49:35.694: INFO: Deleting DaemonSet.extensions daemon-set took: 6.852315ms Apr 24 13:49:35.995: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.257056ms Apr 24 13:49:38.898: INFO: Number of nodes with available pods: 0 Apr 24 13:49:38.898: INFO: Number of running nodes: 0, number of available pods: 0 Apr 24 13:49:38.901: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1135/daemonsets","resourceVersion":"7186910"},"items":null} Apr 24 13:49:38.904: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1135/pods","resourceVersion":"7186910"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:49:38.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1135" for this suite. 
Apr 24 13:49:44.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:49:45.000: INFO: namespace daemonsets-1135 deletion completed in 6.084671729s • [SLOW TEST:30.580 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:49:45.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-e47d93f7-7ea3-4cdc-a68a-42fbb3a49128 in namespace container-probe-4322 Apr 24 13:49:49.146: INFO: Started pod liveness-e47d93f7-7ea3-4cdc-a68a-42fbb3a49128 in namespace container-probe-4322 STEP: checking the pod's current state and verifying that restartCount is present Apr 24 13:49:49.150: INFO: Initial restart count of pod liveness-e47d93f7-7ea3-4cdc-a68a-42fbb3a49128 is 0 Apr 24 13:50:03.189: INFO: Restart count of pod 
container-probe-4322/liveness-e47d93f7-7ea3-4cdc-a68a-42fbb3a49128 is now 1 (14.039007526s elapsed) Apr 24 13:50:23.274: INFO: Restart count of pod container-probe-4322/liveness-e47d93f7-7ea3-4cdc-a68a-42fbb3a49128 is now 2 (34.124123559s elapsed) Apr 24 13:50:43.345: INFO: Restart count of pod container-probe-4322/liveness-e47d93f7-7ea3-4cdc-a68a-42fbb3a49128 is now 3 (54.194740066s elapsed) Apr 24 13:51:03.430: INFO: Restart count of pod container-probe-4322/liveness-e47d93f7-7ea3-4cdc-a68a-42fbb3a49128 is now 4 (1m14.279795001s elapsed) Apr 24 13:52:03.575: INFO: Restart count of pod container-probe-4322/liveness-e47d93f7-7ea3-4cdc-a68a-42fbb3a49128 is now 5 (2m14.425313542s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:52:03.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4322" for this suite. 
Apr 24 13:52:09.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:52:09.684: INFO: namespace container-probe-4322 deletion completed in 6.084372965s • [SLOW TEST:144.683 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:52:09.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Apr 24 13:52:19.846: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3732 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 24 13:52:19.846: INFO: >>> kubeConfig: /root/.kube/config I0424 13:52:19.883066 6 log.go:172] (0xc001136370) (0xc0019fe960) Create stream I0424 13:52:19.883101 6 log.go:172] (0xc001136370) 
(0xc0019fe960) Stream added, broadcasting: 1 I0424 13:52:19.885803 6 log.go:172] (0xc001136370) Reply frame received for 1 I0424 13:52:19.885857 6 log.go:172] (0xc001136370) (0xc0029cc1e0) Create stream I0424 13:52:19.885873 6 log.go:172] (0xc001136370) (0xc0029cc1e0) Stream added, broadcasting: 3 I0424 13:52:19.886672 6 log.go:172] (0xc001136370) Reply frame received for 3 I0424 13:52:19.886728 6 log.go:172] (0xc001136370) (0xc0029cc280) Create stream I0424 13:52:19.886764 6 log.go:172] (0xc001136370) (0xc0029cc280) Stream added, broadcasting: 5 I0424 13:52:19.887645 6 log.go:172] (0xc001136370) Reply frame received for 5 I0424 13:52:19.985921 6 log.go:172] (0xc001136370) Data frame received for 5 I0424 13:52:19.985946 6 log.go:172] (0xc0029cc280) (5) Data frame handling I0424 13:52:19.985963 6 log.go:172] (0xc001136370) Data frame received for 3 I0424 13:52:19.985973 6 log.go:172] (0xc0029cc1e0) (3) Data frame handling I0424 13:52:19.985985 6 log.go:172] (0xc0029cc1e0) (3) Data frame sent I0424 13:52:19.985995 6 log.go:172] (0xc001136370) Data frame received for 3 I0424 13:52:19.986004 6 log.go:172] (0xc0029cc1e0) (3) Data frame handling I0424 13:52:19.986891 6 log.go:172] (0xc001136370) Data frame received for 1 I0424 13:52:19.986904 6 log.go:172] (0xc0019fe960) (1) Data frame handling I0424 13:52:19.986912 6 log.go:172] (0xc0019fe960) (1) Data frame sent I0424 13:52:19.986920 6 log.go:172] (0xc001136370) (0xc0019fe960) Stream removed, broadcasting: 1 I0424 13:52:19.986942 6 log.go:172] (0xc001136370) Go away received I0424 13:52:19.986985 6 log.go:172] (0xc001136370) (0xc0019fe960) Stream removed, broadcasting: 1 I0424 13:52:19.987016 6 log.go:172] (0xc001136370) (0xc0029cc1e0) Stream removed, broadcasting: 3 I0424 13:52:19.987040 6 log.go:172] (0xc001136370) (0xc0029cc280) Stream removed, broadcasting: 5 Apr 24 13:52:19.987: INFO: Exec stderr: "" Apr 24 13:52:19.987: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3732 
PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 24 13:52:19.987: INFO: >>> kubeConfig: /root/.kube/config I0424 13:52:20.016266 6 log.go:172] (0xc001f24c60) (0xc0029cc5a0) Create stream I0424 13:52:20.016291 6 log.go:172] (0xc001f24c60) (0xc0029cc5a0) Stream added, broadcasting: 1 I0424 13:52:20.018687 6 log.go:172] (0xc001f24c60) Reply frame received for 1 I0424 13:52:20.018711 6 log.go:172] (0xc001f24c60) (0xc0010a6640) Create stream I0424 13:52:20.018717 6 log.go:172] (0xc001f24c60) (0xc0010a6640) Stream added, broadcasting: 3 I0424 13:52:20.019774 6 log.go:172] (0xc001f24c60) Reply frame received for 3 I0424 13:52:20.019793 6 log.go:172] (0xc001f24c60) (0xc0019fea00) Create stream I0424 13:52:20.019805 6 log.go:172] (0xc001f24c60) (0xc0019fea00) Stream added, broadcasting: 5 I0424 13:52:20.020785 6 log.go:172] (0xc001f24c60) Reply frame received for 5 I0424 13:52:20.084198 6 log.go:172] (0xc001f24c60) Data frame received for 5 I0424 13:52:20.084241 6 log.go:172] (0xc0019fea00) (5) Data frame handling I0424 13:52:20.084270 6 log.go:172] (0xc001f24c60) Data frame received for 3 I0424 13:52:20.084295 6 log.go:172] (0xc0010a6640) (3) Data frame handling I0424 13:52:20.084321 6 log.go:172] (0xc0010a6640) (3) Data frame sent I0424 13:52:20.084332 6 log.go:172] (0xc001f24c60) Data frame received for 3 I0424 13:52:20.084339 6 log.go:172] (0xc0010a6640) (3) Data frame handling I0424 13:52:20.085764 6 log.go:172] (0xc001f24c60) Data frame received for 1 I0424 13:52:20.085822 6 log.go:172] (0xc0029cc5a0) (1) Data frame handling I0424 13:52:20.085893 6 log.go:172] (0xc0029cc5a0) (1) Data frame sent I0424 13:52:20.085925 6 log.go:172] (0xc001f24c60) (0xc0029cc5a0) Stream removed, broadcasting: 1 I0424 13:52:20.085952 6 log.go:172] (0xc001f24c60) Go away received I0424 13:52:20.086054 6 log.go:172] (0xc001f24c60) (0xc0029cc5a0) Stream removed, broadcasting: 1 I0424 13:52:20.086072 6 log.go:172] 
(0xc001f24c60) (0xc0010a6640) Stream removed, broadcasting: 3 I0424 13:52:20.086081 6 log.go:172] (0xc001f24c60) (0xc0019fea00) Stream removed, broadcasting: 5 Apr 24 13:52:20.086: INFO: Exec stderr: "" Apr 24 13:52:20.086: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3732 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 24 13:52:20.086: INFO: >>> kubeConfig: /root/.kube/config I0424 13:52:20.114625 6 log.go:172] (0xc0013e4580) (0xc00122b680) Create stream I0424 13:52:20.114648 6 log.go:172] (0xc0013e4580) (0xc00122b680) Stream added, broadcasting: 1 I0424 13:52:20.116777 6 log.go:172] (0xc0013e4580) Reply frame received for 1 I0424 13:52:20.116806 6 log.go:172] (0xc0013e4580) (0xc0029cc640) Create stream I0424 13:52:20.116818 6 log.go:172] (0xc0013e4580) (0xc0029cc640) Stream added, broadcasting: 3 I0424 13:52:20.117908 6 log.go:172] (0xc0013e4580) Reply frame received for 3 I0424 13:52:20.117934 6 log.go:172] (0xc0013e4580) (0xc0019febe0) Create stream I0424 13:52:20.117944 6 log.go:172] (0xc0013e4580) (0xc0019febe0) Stream added, broadcasting: 5 I0424 13:52:20.118765 6 log.go:172] (0xc0013e4580) Reply frame received for 5 I0424 13:52:20.184565 6 log.go:172] (0xc0013e4580) Data frame received for 5 I0424 13:52:20.184608 6 log.go:172] (0xc0019febe0) (5) Data frame handling I0424 13:52:20.184657 6 log.go:172] (0xc0013e4580) Data frame received for 3 I0424 13:52:20.184680 6 log.go:172] (0xc0029cc640) (3) Data frame handling I0424 13:52:20.184708 6 log.go:172] (0xc0029cc640) (3) Data frame sent I0424 13:52:20.184747 6 log.go:172] (0xc0013e4580) Data frame received for 3 I0424 13:52:20.184763 6 log.go:172] (0xc0029cc640) (3) Data frame handling I0424 13:52:20.187097 6 log.go:172] (0xc0013e4580) Data frame received for 1 I0424 13:52:20.187139 6 log.go:172] (0xc00122b680) (1) Data frame handling I0424 13:52:20.187176 6 log.go:172] (0xc00122b680) (1) Data frame sent 
I0424 13:52:20.187209 6 log.go:172] (0xc0013e4580) (0xc00122b680) Stream removed, broadcasting: 1 I0424 13:52:20.187232 6 log.go:172] (0xc0013e4580) Go away received I0424 13:52:20.187375 6 log.go:172] (0xc0013e4580) (0xc00122b680) Stream removed, broadcasting: 1 I0424 13:52:20.187401 6 log.go:172] (0xc0013e4580) (0xc0029cc640) Stream removed, broadcasting: 3 I0424 13:52:20.187423 6 log.go:172] (0xc0013e4580) (0xc0019febe0) Stream removed, broadcasting: 5 Apr 24 13:52:20.187: INFO: Exec stderr: "" Apr 24 13:52:20.187: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3732 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 24 13:52:20.187: INFO: >>> kubeConfig: /root/.kube/config I0424 13:52:20.221036 6 log.go:172] (0xc001137550) (0xc0019ff0e0) Create stream I0424 13:52:20.221072 6 log.go:172] (0xc001137550) (0xc0019ff0e0) Stream added, broadcasting: 1 I0424 13:52:20.224061 6 log.go:172] (0xc001137550) Reply frame received for 1 I0424 13:52:20.224123 6 log.go:172] (0xc001137550) (0xc00122b720) Create stream I0424 13:52:20.224146 6 log.go:172] (0xc001137550) (0xc00122b720) Stream added, broadcasting: 3 I0424 13:52:20.225263 6 log.go:172] (0xc001137550) Reply frame received for 3 I0424 13:52:20.225302 6 log.go:172] (0xc001137550) (0xc00122b860) Create stream I0424 13:52:20.225317 6 log.go:172] (0xc001137550) (0xc00122b860) Stream added, broadcasting: 5 I0424 13:52:20.226413 6 log.go:172] (0xc001137550) Reply frame received for 5 I0424 13:52:20.296680 6 log.go:172] (0xc001137550) Data frame received for 3 I0424 13:52:20.296716 6 log.go:172] (0xc00122b720) (3) Data frame handling I0424 13:52:20.296731 6 log.go:172] (0xc00122b720) (3) Data frame sent I0424 13:52:20.296742 6 log.go:172] (0xc001137550) Data frame received for 3 I0424 13:52:20.296755 6 log.go:172] (0xc00122b720) (3) Data frame handling I0424 13:52:20.296840 6 log.go:172] (0xc001137550) Data frame received 
for 5 I0424 13:52:20.296900 6 log.go:172] (0xc00122b860) (5) Data frame handling I0424 13:52:20.298527 6 log.go:172] (0xc001137550) Data frame received for 1 I0424 13:52:20.298578 6 log.go:172] (0xc0019ff0e0) (1) Data frame handling I0424 13:52:20.298606 6 log.go:172] (0xc0019ff0e0) (1) Data frame sent I0424 13:52:20.298633 6 log.go:172] (0xc001137550) (0xc0019ff0e0) Stream removed, broadcasting: 1 I0424 13:52:20.298697 6 log.go:172] (0xc001137550) Go away received I0424 13:52:20.298754 6 log.go:172] (0xc001137550) (0xc0019ff0e0) Stream removed, broadcasting: 1 I0424 13:52:20.298785 6 log.go:172] (0xc001137550) (0xc00122b720) Stream removed, broadcasting: 3 I0424 13:52:20.298802 6 log.go:172] (0xc001137550) (0xc00122b860) Stream removed, broadcasting: 5 Apr 24 13:52:20.298: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 24 13:52:20.298: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3732 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 24 13:52:20.298: INFO: >>> kubeConfig: /root/.kube/config I0424 13:52:20.330392 6 log.go:172] (0xc00226a0b0) (0xc0029ccaa0) Create stream I0424 13:52:20.330424 6 log.go:172] (0xc00226a0b0) (0xc0029ccaa0) Stream added, broadcasting: 1 I0424 13:52:20.335303 6 log.go:172] (0xc00226a0b0) Reply frame received for 1 I0424 13:52:20.335417 6 log.go:172] (0xc00226a0b0) (0xc0010a6780) Create stream I0424 13:52:20.335485 6 log.go:172] (0xc00226a0b0) (0xc0010a6780) Stream added, broadcasting: 3 I0424 13:52:20.344969 6 log.go:172] (0xc00226a0b0) Reply frame received for 3 I0424 13:52:20.345004 6 log.go:172] (0xc00226a0b0) (0xc0019ff180) Create stream I0424 13:52:20.345017 6 log.go:172] (0xc00226a0b0) (0xc0019ff180) Stream added, broadcasting: 5 I0424 13:52:20.345820 6 log.go:172] (0xc00226a0b0) Reply frame received for 5 I0424 13:52:20.409726 6 log.go:172] 
(0xc00226a0b0) Data frame received for 5 I0424 13:52:20.409751 6 log.go:172] (0xc0019ff180) (5) Data frame handling I0424 13:52:20.409810 6 log.go:172] (0xc00226a0b0) Data frame received for 3 I0424 13:52:20.409849 6 log.go:172] (0xc0010a6780) (3) Data frame handling I0424 13:52:20.409883 6 log.go:172] (0xc0010a6780) (3) Data frame sent I0424 13:52:20.409915 6 log.go:172] (0xc00226a0b0) Data frame received for 3 I0424 13:52:20.409935 6 log.go:172] (0xc0010a6780) (3) Data frame handling I0424 13:52:20.411616 6 log.go:172] (0xc00226a0b0) Data frame received for 1 I0424 13:52:20.411649 6 log.go:172] (0xc0029ccaa0) (1) Data frame handling I0424 13:52:20.411667 6 log.go:172] (0xc0029ccaa0) (1) Data frame sent I0424 13:52:20.411769 6 log.go:172] (0xc00226a0b0) (0xc0029ccaa0) Stream removed, broadcasting: 1 I0424 13:52:20.411818 6 log.go:172] (0xc00226a0b0) Go away received I0424 13:52:20.412038 6 log.go:172] (0xc00226a0b0) (0xc0029ccaa0) Stream removed, broadcasting: 1 I0424 13:52:20.412060 6 log.go:172] (0xc00226a0b0) (0xc0010a6780) Stream removed, broadcasting: 3 I0424 13:52:20.412081 6 log.go:172] (0xc00226a0b0) (0xc0019ff180) Stream removed, broadcasting: 5 Apr 24 13:52:20.412: INFO: Exec stderr: "" Apr 24 13:52:20.412: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3732 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 24 13:52:20.412: INFO: >>> kubeConfig: /root/.kube/config I0424 13:52:20.442980 6 log.go:172] (0xc00226ad10) (0xc0029cce60) Create stream I0424 13:52:20.443009 6 log.go:172] (0xc00226ad10) (0xc0029cce60) Stream added, broadcasting: 1 I0424 13:52:20.445512 6 log.go:172] (0xc00226ad10) Reply frame received for 1 I0424 13:52:20.445549 6 log.go:172] (0xc00226ad10) (0xc0002f55e0) Create stream I0424 13:52:20.445565 6 log.go:172] (0xc00226ad10) (0xc0002f55e0) Stream added, broadcasting: 3 I0424 13:52:20.446525 6 log.go:172] (0xc00226ad10) Reply frame 
received for 3 I0424 13:52:20.446548 6 log.go:172] (0xc00226ad10) (0xc0029ccf00) Create stream I0424 13:52:20.446559 6 log.go:172] (0xc00226ad10) (0xc0029ccf00) Stream added, broadcasting: 5 I0424 13:52:20.447462 6 log.go:172] (0xc00226ad10) Reply frame received for 5 I0424 13:52:20.517648 6 log.go:172] (0xc00226ad10) Data frame received for 3 I0424 13:52:20.517683 6 log.go:172] (0xc0002f55e0) (3) Data frame handling I0424 13:52:20.517704 6 log.go:172] (0xc0002f55e0) (3) Data frame sent I0424 13:52:20.517715 6 log.go:172] (0xc00226ad10) Data frame received for 3 I0424 13:52:20.517725 6 log.go:172] (0xc0002f55e0) (3) Data frame handling I0424 13:52:20.517755 6 log.go:172] (0xc00226ad10) Data frame received for 5 I0424 13:52:20.517783 6 log.go:172] (0xc0029ccf00) (5) Data frame handling I0424 13:52:20.519552 6 log.go:172] (0xc00226ad10) Data frame received for 1 I0424 13:52:20.519578 6 log.go:172] (0xc0029cce60) (1) Data frame handling I0424 13:52:20.519606 6 log.go:172] (0xc0029cce60) (1) Data frame sent I0424 13:52:20.519620 6 log.go:172] (0xc00226ad10) (0xc0029cce60) Stream removed, broadcasting: 1 I0424 13:52:20.519638 6 log.go:172] (0xc00226ad10) Go away received I0424 13:52:20.519793 6 log.go:172] (0xc00226ad10) (0xc0029cce60) Stream removed, broadcasting: 1 I0424 13:52:20.519821 6 log.go:172] (0xc00226ad10) (0xc0002f55e0) Stream removed, broadcasting: 3 I0424 13:52:20.519857 6 log.go:172] (0xc00226ad10) (0xc0029ccf00) Stream removed, broadcasting: 5 Apr 24 13:52:20.519: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 24 13:52:20.519: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3732 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 24 13:52:20.519: INFO: >>> kubeConfig: /root/.kube/config I0424 13:52:20.556207 6 log.go:172] (0xc00243a2c0) (0xc0019ff720) Create stream 
I0424 13:52:20.556242 6 log.go:172] (0xc00243a2c0) (0xc0019ff720) Stream added, broadcasting: 1 I0424 13:52:20.559444 6 log.go:172] (0xc00243a2c0) Reply frame received for 1 I0424 13:52:20.559479 6 log.go:172] (0xc00243a2c0) (0xc0029ccfa0) Create stream I0424 13:52:20.559490 6 log.go:172] (0xc00243a2c0) (0xc0029ccfa0) Stream added, broadcasting: 3 I0424 13:52:20.560497 6 log.go:172] (0xc00243a2c0) Reply frame received for 3 I0424 13:52:20.560537 6 log.go:172] (0xc00243a2c0) (0xc0029cd040) Create stream I0424 13:52:20.560549 6 log.go:172] (0xc00243a2c0) (0xc0029cd040) Stream added, broadcasting: 5 I0424 13:52:20.561670 6 log.go:172] (0xc00243a2c0) Reply frame received for 5 I0424 13:52:20.630298 6 log.go:172] (0xc00243a2c0) Data frame received for 5 I0424 13:52:20.630349 6 log.go:172] (0xc00243a2c0) Data frame received for 3 I0424 13:52:20.630396 6 log.go:172] (0xc0029ccfa0) (3) Data frame handling I0424 13:52:20.630416 6 log.go:172] (0xc0029ccfa0) (3) Data frame sent I0424 13:52:20.630426 6 log.go:172] (0xc00243a2c0) Data frame received for 3 I0424 13:52:20.630437 6 log.go:172] (0xc0029ccfa0) (3) Data frame handling I0424 13:52:20.630464 6 log.go:172] (0xc0029cd040) (5) Data frame handling I0424 13:52:20.632008 6 log.go:172] (0xc00243a2c0) Data frame received for 1 I0424 13:52:20.632038 6 log.go:172] (0xc0019ff720) (1) Data frame handling I0424 13:52:20.632062 6 log.go:172] (0xc0019ff720) (1) Data frame sent I0424 13:52:20.632080 6 log.go:172] (0xc00243a2c0) (0xc0019ff720) Stream removed, broadcasting: 1 I0424 13:52:20.632103 6 log.go:172] (0xc00243a2c0) Go away received I0424 13:52:20.632218 6 log.go:172] (0xc00243a2c0) (0xc0019ff720) Stream removed, broadcasting: 1 I0424 13:52:20.632234 6 log.go:172] (0xc00243a2c0) (0xc0029ccfa0) Stream removed, broadcasting: 3 I0424 13:52:20.632241 6 log.go:172] (0xc00243a2c0) (0xc0029cd040) Stream removed, broadcasting: 5 Apr 24 13:52:20.632: INFO: Exec stderr: "" Apr 24 13:52:20.632: INFO: ExecWithOptions {Command:[cat 
/etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3732 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 24 13:52:20.632: INFO: >>> kubeConfig: /root/.kube/config I0424 13:52:20.657867 6 log.go:172] (0xc002426790) (0xc0010a6be0) Create stream I0424 13:52:20.657892 6 log.go:172] (0xc002426790) (0xc0010a6be0) Stream added, broadcasting: 1 I0424 13:52:20.660840 6 log.go:172] (0xc002426790) Reply frame received for 1 I0424 13:52:20.660879 6 log.go:172] (0xc002426790) (0xc0002f5900) Create stream I0424 13:52:20.660894 6 log.go:172] (0xc002426790) (0xc0002f5900) Stream added, broadcasting: 3 I0424 13:52:20.662133 6 log.go:172] (0xc002426790) Reply frame received for 3 I0424 13:52:20.662193 6 log.go:172] (0xc002426790) (0xc0002f59a0) Create stream I0424 13:52:20.662213 6 log.go:172] (0xc002426790) (0xc0002f59a0) Stream added, broadcasting: 5 I0424 13:52:20.663287 6 log.go:172] (0xc002426790) Reply frame received for 5 I0424 13:52:20.733979 6 log.go:172] (0xc002426790) Data frame received for 3 I0424 13:52:20.734012 6 log.go:172] (0xc0002f5900) (3) Data frame handling I0424 13:52:20.734022 6 log.go:172] (0xc0002f5900) (3) Data frame sent I0424 13:52:20.734030 6 log.go:172] (0xc002426790) Data frame received for 3 I0424 13:52:20.734035 6 log.go:172] (0xc0002f5900) (3) Data frame handling I0424 13:52:20.734056 6 log.go:172] (0xc002426790) Data frame received for 5 I0424 13:52:20.734064 6 log.go:172] (0xc0002f59a0) (5) Data frame handling I0424 13:52:20.735063 6 log.go:172] (0xc002426790) Data frame received for 1 I0424 13:52:20.735084 6 log.go:172] (0xc0010a6be0) (1) Data frame handling I0424 13:52:20.735099 6 log.go:172] (0xc0010a6be0) (1) Data frame sent I0424 13:52:20.735115 6 log.go:172] (0xc002426790) (0xc0010a6be0) Stream removed, broadcasting: 1 I0424 13:52:20.735164 6 log.go:172] (0xc002426790) Go away received I0424 13:52:20.735335 6 log.go:172] (0xc002426790) (0xc0010a6be0) Stream 
removed, broadcasting: 1 I0424 13:52:20.735374 6 log.go:172] (0xc002426790) (0xc0002f5900) Stream removed, broadcasting: 3 I0424 13:52:20.735399 6 log.go:172] (0xc002426790) (0xc0002f59a0) Stream removed, broadcasting: 5 Apr 24 13:52:20.735: INFO: Exec stderr: "" Apr 24 13:52:20.735: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3732 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 24 13:52:20.735: INFO: >>> kubeConfig: /root/.kube/config I0424 13:52:20.760779 6 log.go:172] (0xc002427080) (0xc0010a7680) Create stream I0424 13:52:20.760810 6 log.go:172] (0xc002427080) (0xc0010a7680) Stream added, broadcasting: 1 I0424 13:52:20.763412 6 log.go:172] (0xc002427080) Reply frame received for 1 I0424 13:52:20.763477 6 log.go:172] (0xc002427080) (0xc00122b900) Create stream I0424 13:52:20.763496 6 log.go:172] (0xc002427080) (0xc00122b900) Stream added, broadcasting: 3 I0424 13:52:20.764630 6 log.go:172] (0xc002427080) Reply frame received for 3 I0424 13:52:20.764669 6 log.go:172] (0xc002427080) (0xc00122ba40) Create stream I0424 13:52:20.764682 6 log.go:172] (0xc002427080) (0xc00122ba40) Stream added, broadcasting: 5 I0424 13:52:20.765907 6 log.go:172] (0xc002427080) Reply frame received for 5 I0424 13:52:20.819128 6 log.go:172] (0xc002427080) Data frame received for 3 I0424 13:52:20.819169 6 log.go:172] (0xc00122b900) (3) Data frame handling I0424 13:52:20.819199 6 log.go:172] (0xc00122b900) (3) Data frame sent I0424 13:52:20.819215 6 log.go:172] (0xc002427080) Data frame received for 3 I0424 13:52:20.819227 6 log.go:172] (0xc00122b900) (3) Data frame handling I0424 13:52:20.819272 6 log.go:172] (0xc002427080) Data frame received for 5 I0424 13:52:20.819287 6 log.go:172] (0xc00122ba40) (5) Data frame handling I0424 13:52:20.821352 6 log.go:172] (0xc002427080) Data frame received for 1 I0424 13:52:20.821388 6 log.go:172] (0xc0010a7680) (1) Data frame handling 
I0424 13:52:20.821429 6 log.go:172] (0xc0010a7680) (1) Data frame sent I0424 13:52:20.821470 6 log.go:172] (0xc002427080) (0xc0010a7680) Stream removed, broadcasting: 1 I0424 13:52:20.821498 6 log.go:172] (0xc002427080) Go away received I0424 13:52:20.821607 6 log.go:172] (0xc002427080) (0xc0010a7680) Stream removed, broadcasting: 1 I0424 13:52:20.821633 6 log.go:172] (0xc002427080) (0xc00122b900) Stream removed, broadcasting: 3 I0424 13:52:20.821654 6 log.go:172] (0xc002427080) (0xc00122ba40) Stream removed, broadcasting: 5 Apr 24 13:52:20.821: INFO: Exec stderr: "" Apr 24 13:52:20.821: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3732 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 24 13:52:20.821: INFO: >>> kubeConfig: /root/.kube/config I0424 13:52:20.855266 6 log.go:172] (0xc0013e5ce0) (0xc00122bd60) Create stream I0424 13:52:20.855301 6 log.go:172] (0xc0013e5ce0) (0xc00122bd60) Stream added, broadcasting: 1 I0424 13:52:20.857977 6 log.go:172] (0xc0013e5ce0) Reply frame received for 1 I0424 13:52:20.858048 6 log.go:172] (0xc0013e5ce0) (0xc0010a7b80) Create stream I0424 13:52:20.858066 6 log.go:172] (0xc0013e5ce0) (0xc0010a7b80) Stream added, broadcasting: 3 I0424 13:52:20.859073 6 log.go:172] (0xc0013e5ce0) Reply frame received for 3 I0424 13:52:20.859131 6 log.go:172] (0xc0013e5ce0) (0xc00117a0a0) Create stream I0424 13:52:20.859146 6 log.go:172] (0xc0013e5ce0) (0xc00117a0a0) Stream added, broadcasting: 5 I0424 13:52:20.860001 6 log.go:172] (0xc0013e5ce0) Reply frame received for 5 I0424 13:52:20.918715 6 log.go:172] (0xc0013e5ce0) Data frame received for 5 I0424 13:52:20.918752 6 log.go:172] (0xc00117a0a0) (5) Data frame handling I0424 13:52:20.918778 6 log.go:172] (0xc0013e5ce0) Data frame received for 3 I0424 13:52:20.918792 6 log.go:172] (0xc0010a7b80) (3) Data frame handling I0424 13:52:20.918807 6 log.go:172] (0xc0010a7b80) (3) 
Data frame sent I0424 13:52:20.918824 6 log.go:172] (0xc0013e5ce0) Data frame received for 3 I0424 13:52:20.918842 6 log.go:172] (0xc0010a7b80) (3) Data frame handling I0424 13:52:20.920667 6 log.go:172] (0xc0013e5ce0) Data frame received for 1 I0424 13:52:20.920697 6 log.go:172] (0xc00122bd60) (1) Data frame handling I0424 13:52:20.920712 6 log.go:172] (0xc00122bd60) (1) Data frame sent I0424 13:52:20.920730 6 log.go:172] (0xc0013e5ce0) (0xc00122bd60) Stream removed, broadcasting: 1 I0424 13:52:20.920748 6 log.go:172] (0xc0013e5ce0) Go away received I0424 13:52:20.920916 6 log.go:172] (0xc0013e5ce0) (0xc00122bd60) Stream removed, broadcasting: 1 I0424 13:52:20.920956 6 log.go:172] (0xc0013e5ce0) (0xc0010a7b80) Stream removed, broadcasting: 3 I0424 13:52:20.920983 6 log.go:172] (0xc0013e5ce0) (0xc00117a0a0) Stream removed, broadcasting: 5 Apr 24 13:52:20.921: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:52:20.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-3732" for this suite. 
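The exec probes above repeatedly `cat` `/etc/hosts` (and the `/etc/hosts-original` mount) to decide whether each container sees a kubelet-generated hosts file or the underlying one. A minimal sketch of that classification, assuming only the well-known header comment kubelet writes at the top of the hosts files it manages (`managedHostsHeader` in `pkg/kubelet`); the sample file contents are fabricated for illustration:

```python
# Marker comment kubelet prepends to every hosts file it manages.
KUBELET_MARKER = "# Kubernetes-managed hosts file."

def is_kubelet_managed(etc_hosts_content: str) -> bool:
    """Return True if the /etc/hosts snapshot looks kubelet-generated."""
    return etc_hosts_content.lstrip().startswith(KUBELET_MARKER)

# Hypothetical snapshots: one kubelet-managed, one plain node/image file.
managed_file = "# Kubernetes-managed hosts file.\n127.0.0.1\tlocalhost\n10.244.1.5\ttest-pod\n"
node_file = "127.0.0.1\tlocalhost\n::1\tlocalhost ip6-localhost\n"
```

This mirrors what the test asserts: ordinary containers get the managed file, while a container that mounts its own `/etc/hosts` or a pod with `hostNetwork=true` keeps the unmanaged one.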
Apr 24 13:53:02.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:53:03.021: INFO: namespace e2e-kubelet-etc-hosts-3732 deletion completed in 42.095986817s • [SLOW TEST:53.337 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:53:03.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 24 13:53:03.105: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 26.555733ms)
Apr 24 13:53:03.109: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.380377ms)
Apr 24 13:53:03.113: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 4.152298ms)
Apr 24 13:53:03.117: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.885033ms)
Apr 24 13:53:03.121: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.861002ms)
Apr 24 13:53:03.124: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.454822ms)
Apr 24 13:53:03.128: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.551079ms)
Apr 24 13:53:03.132: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.472904ms)
Apr 24 13:53:03.135: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.275395ms)
Apr 24 13:53:03.139: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.611601ms)
Apr 24 13:53:03.142: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.562386ms)
Apr 24 13:53:03.146: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.640103ms)
Apr 24 13:53:03.150: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 4.244942ms)
Apr 24 13:53:03.154: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.759076ms)
Apr 24 13:53:03.158: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.718372ms)
Apr 24 13:53:03.161: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.681815ms)
Apr 24 13:53:03.165: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.577371ms)
Apr 24 13:53:03.168: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.213297ms)
Apr 24 13:53:03.172: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.346496ms)
Apr 24 13:53:03.175: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/
(200; 3.320117ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:53:03.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7600" for this suite. Apr 24 13:53:09.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:53:09.267: INFO: namespace proxy-7600 deletion completed in 6.088575093s • [SLOW TEST:6.244 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:53:09.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image 
docker.io/library/nginx:1.14-alpine Apr 24 13:53:09.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-1058' Apr 24 13:53:11.770: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 24 13:53:11.770: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Apr 24 13:53:13.806: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-jgmqr] Apr 24 13:53:13.806: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-jgmqr" in namespace "kubectl-1058" to be "running and ready" Apr 24 13:53:13.809: INFO: Pod "e2e-test-nginx-rc-jgmqr": Phase="Pending", Reason="", readiness=false. Elapsed: 3.017129ms Apr 24 13:53:15.813: INFO: Pod "e2e-test-nginx-rc-jgmqr": Phase="Running", Reason="", readiness=true. Elapsed: 2.007303503s Apr 24 13:53:15.813: INFO: Pod "e2e-test-nginx-rc-jgmqr" satisfied condition "running and ready" Apr 24 13:53:15.813: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-jgmqr] Apr 24 13:53:15.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-1058' Apr 24 13:53:15.943: INFO: stderr: "" Apr 24 13:53:15.943: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 Apr 24 13:53:15.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-1058' Apr 24 13:53:16.035: INFO: stderr: "" Apr 24 13:53:16.035: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:53:16.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1058" for this suite. Apr 24 13:53:38.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:53:38.144: INFO: namespace kubectl-1058 deletion completed in 22.105862993s • [SLOW TEST:28.876 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:53:38.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-19 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Apr 24 13:53:38.252: INFO: Found 0 stateful pods, waiting for 3 Apr 24 13:53:48.257: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 24 13:53:48.257: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 24 13:53:48.257: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 24 13:53:48.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-19 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 24 13:53:48.632: INFO: stderr: "I0424 13:53:48.495037 1306 log.go:172] (0xc000a46370) (0xc0006288c0) Create stream\nI0424 13:53:48.495076 1306 log.go:172] (0xc000a46370) (0xc0006288c0) Stream added, broadcasting: 1\nI0424 13:53:48.497572 1306 log.go:172] (0xc000a46370) Reply frame received for 1\nI0424 13:53:48.497625 1306 log.go:172] (0xc000a46370) (0xc0008dc000) Create stream\nI0424 13:53:48.497639 1306 log.go:172] (0xc000a46370) (0xc0008dc000) Stream added, broadcasting: 3\nI0424 
13:53:48.498767 1306 log.go:172] (0xc000a46370) Reply frame received for 3\nI0424 13:53:48.498833 1306 log.go:172] (0xc000a46370) (0xc000926000) Create stream\nI0424 13:53:48.498861 1306 log.go:172] (0xc000a46370) (0xc000926000) Stream added, broadcasting: 5\nI0424 13:53:48.499851 1306 log.go:172] (0xc000a46370) Reply frame received for 5\nI0424 13:53:48.587755 1306 log.go:172] (0xc000a46370) Data frame received for 5\nI0424 13:53:48.587794 1306 log.go:172] (0xc000926000) (5) Data frame handling\nI0424 13:53:48.587821 1306 log.go:172] (0xc000926000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0424 13:53:48.623741 1306 log.go:172] (0xc000a46370) Data frame received for 3\nI0424 13:53:48.623770 1306 log.go:172] (0xc0008dc000) (3) Data frame handling\nI0424 13:53:48.623788 1306 log.go:172] (0xc0008dc000) (3) Data frame sent\nI0424 13:53:48.623795 1306 log.go:172] (0xc000a46370) Data frame received for 3\nI0424 13:53:48.623800 1306 log.go:172] (0xc0008dc000) (3) Data frame handling\nI0424 13:53:48.623826 1306 log.go:172] (0xc000a46370) Data frame received for 5\nI0424 13:53:48.623839 1306 log.go:172] (0xc000926000) (5) Data frame handling\nI0424 13:53:48.625594 1306 log.go:172] (0xc000a46370) Data frame received for 1\nI0424 13:53:48.625607 1306 log.go:172] (0xc0006288c0) (1) Data frame handling\nI0424 13:53:48.625617 1306 log.go:172] (0xc0006288c0) (1) Data frame sent\nI0424 13:53:48.625775 1306 log.go:172] (0xc000a46370) (0xc0006288c0) Stream removed, broadcasting: 1\nI0424 13:53:48.625808 1306 log.go:172] (0xc000a46370) Go away received\nI0424 13:53:48.626065 1306 log.go:172] (0xc000a46370) (0xc0006288c0) Stream removed, broadcasting: 1\nI0424 13:53:48.626078 1306 log.go:172] (0xc000a46370) (0xc0008dc000) Stream removed, broadcasting: 3\nI0424 13:53:48.626085 1306 log.go:172] (0xc000a46370) (0xc000926000) Stream removed, broadcasting: 5\n" Apr 24 13:53:48.632: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 24 
13:53:48.632: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Apr 24 13:53:58.660: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 24 13:54:08.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-19 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 24 13:54:08.997: INFO: stderr: "I0424 13:54:08.928817 1330 log.go:172] (0xc000116dc0) (0xc0009c2640) Create stream\nI0424 13:54:08.928901 1330 log.go:172] (0xc000116dc0) (0xc0009c2640) Stream added, broadcasting: 1\nI0424 13:54:08.931361 1330 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0424 13:54:08.931424 1330 log.go:172] (0xc000116dc0) (0xc0005801e0) Create stream\nI0424 13:54:08.931443 1330 log.go:172] (0xc000116dc0) (0xc0005801e0) Stream added, broadcasting: 3\nI0424 13:54:08.932365 1330 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0424 13:54:08.932403 1330 log.go:172] (0xc000116dc0) (0xc000580280) Create stream\nI0424 13:54:08.932420 1330 log.go:172] (0xc000116dc0) (0xc000580280) Stream added, broadcasting: 5\nI0424 13:54:08.933634 1330 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0424 13:54:08.990305 1330 log.go:172] (0xc000116dc0) Data frame received for 5\nI0424 13:54:08.990374 1330 log.go:172] (0xc000580280) (5) Data frame handling\nI0424 13:54:08.990398 1330 log.go:172] (0xc000580280) (5) Data frame sent\nI0424 13:54:08.990422 1330 log.go:172] (0xc000116dc0) Data frame received for 5\nI0424 13:54:08.990443 1330 log.go:172] (0xc000580280) (5) Data frame handling\nI0424 13:54:08.990468 1330 log.go:172] (0xc000116dc0) Data frame received for 3\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0424 13:54:08.990490 1330 
log.go:172] (0xc0005801e0) (3) Data frame handling\nI0424 13:54:08.990552 1330 log.go:172] (0xc0005801e0) (3) Data frame sent\nI0424 13:54:08.990575 1330 log.go:172] (0xc000116dc0) Data frame received for 3\nI0424 13:54:08.990614 1330 log.go:172] (0xc0005801e0) (3) Data frame handling\nI0424 13:54:08.991734 1330 log.go:172] (0xc000116dc0) Data frame received for 1\nI0424 13:54:08.991788 1330 log.go:172] (0xc0009c2640) (1) Data frame handling\nI0424 13:54:08.991828 1330 log.go:172] (0xc0009c2640) (1) Data frame sent\nI0424 13:54:08.991857 1330 log.go:172] (0xc000116dc0) (0xc0009c2640) Stream removed, broadcasting: 1\nI0424 13:54:08.991896 1330 log.go:172] (0xc000116dc0) Go away received\nI0424 13:54:08.992396 1330 log.go:172] (0xc000116dc0) (0xc0009c2640) Stream removed, broadcasting: 1\nI0424 13:54:08.992420 1330 log.go:172] (0xc000116dc0) (0xc0005801e0) Stream removed, broadcasting: 3\nI0424 13:54:08.992431 1330 log.go:172] (0xc000116dc0) (0xc000580280) Stream removed, broadcasting: 5\n" Apr 24 13:54:08.997: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 24 13:54:08.997: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 24 13:54:19.020: INFO: Waiting for StatefulSet statefulset-19/ss2 to complete update Apr 24 13:54:19.020: INFO: Waiting for Pod statefulset-19/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Apr 24 13:54:19.020: INFO: Waiting for Pod statefulset-19/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Apr 24 13:54:19.020: INFO: Waiting for Pod statefulset-19/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Apr 24 13:54:29.029: INFO: Waiting for StatefulSet statefulset-19/ss2 to complete update Apr 24 13:54:29.029: INFO: Waiting for Pod statefulset-19/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Apr 24 13:54:29.029: INFO: Waiting for Pod 
statefulset-19/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Apr 24 13:54:39.026: INFO: Waiting for StatefulSet statefulset-19/ss2 to complete update Apr 24 13:54:39.026: INFO: Waiting for Pod statefulset-19/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision Apr 24 13:54:49.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-19 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 24 13:54:49.296: INFO: stderr: "I0424 13:54:49.149477 1350 log.go:172] (0xc000140dc0) (0xc0008cc640) Create stream\nI0424 13:54:49.149569 1350 log.go:172] (0xc000140dc0) (0xc0008cc640) Stream added, broadcasting: 1\nI0424 13:54:49.151649 1350 log.go:172] (0xc000140dc0) Reply frame received for 1\nI0424 13:54:49.151689 1350 log.go:172] (0xc000140dc0) (0xc0009bc000) Create stream\nI0424 13:54:49.151701 1350 log.go:172] (0xc000140dc0) (0xc0009bc000) Stream added, broadcasting: 3\nI0424 13:54:49.152695 1350 log.go:172] (0xc000140dc0) Reply frame received for 3\nI0424 13:54:49.152722 1350 log.go:172] (0xc000140dc0) (0xc0008cc6e0) Create stream\nI0424 13:54:49.152731 1350 log.go:172] (0xc000140dc0) (0xc0008cc6e0) Stream added, broadcasting: 5\nI0424 13:54:49.158023 1350 log.go:172] (0xc000140dc0) Reply frame received for 5\nI0424 13:54:49.257717 1350 log.go:172] (0xc000140dc0) Data frame received for 5\nI0424 13:54:49.257743 1350 log.go:172] (0xc0008cc6e0) (5) Data frame handling\nI0424 13:54:49.257759 1350 log.go:172] (0xc0008cc6e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0424 13:54:49.287366 1350 log.go:172] (0xc000140dc0) Data frame received for 3\nI0424 13:54:49.287413 1350 log.go:172] (0xc0009bc000) (3) Data frame handling\nI0424 13:54:49.287460 1350 log.go:172] (0xc0009bc000) (3) Data frame sent\nI0424 13:54:49.287491 1350 log.go:172] (0xc000140dc0) Data frame received for 5\nI0424 13:54:49.287533 
1350 log.go:172] (0xc0008cc6e0) (5) Data frame handling\nI0424 13:54:49.287571 1350 log.go:172] (0xc000140dc0) Data frame received for 3\nI0424 13:54:49.287589 1350 log.go:172] (0xc0009bc000) (3) Data frame handling\nI0424 13:54:49.289555 1350 log.go:172] (0xc000140dc0) Data frame received for 1\nI0424 13:54:49.289585 1350 log.go:172] (0xc0008cc640) (1) Data frame handling\nI0424 13:54:49.289603 1350 log.go:172] (0xc0008cc640) (1) Data frame sent\nI0424 13:54:49.289624 1350 log.go:172] (0xc000140dc0) (0xc0008cc640) Stream removed, broadcasting: 1\nI0424 13:54:49.289642 1350 log.go:172] (0xc000140dc0) Go away received\nI0424 13:54:49.290256 1350 log.go:172] (0xc000140dc0) (0xc0008cc640) Stream removed, broadcasting: 1\nI0424 13:54:49.290279 1350 log.go:172] (0xc000140dc0) (0xc0009bc000) Stream removed, broadcasting: 3\nI0424 13:54:49.290295 1350 log.go:172] (0xc000140dc0) (0xc0008cc6e0) Stream removed, broadcasting: 5\n" Apr 24 13:54:49.296: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 24 13:54:49.296: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 24 13:54:59.361: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 24 13:55:09.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-19 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 24 13:55:09.624: INFO: stderr: "I0424 13:55:09.526850 1369 log.go:172] (0xc00096c420) (0xc0008fe5a0) Create stream\nI0424 13:55:09.526907 1369 log.go:172] (0xc00096c420) (0xc0008fe5a0) Stream added, broadcasting: 1\nI0424 13:55:09.529435 1369 log.go:172] (0xc00096c420) Reply frame received for 1\nI0424 13:55:09.529494 1369 log.go:172] (0xc00096c420) (0xc0008fe6e0) Create stream\nI0424 13:55:09.529510 1369 log.go:172] (0xc00096c420) (0xc0008fe6e0) Stream added, broadcasting: 3\nI0424 13:55:09.530525 
1369 log.go:172] (0xc00096c420) Reply frame received for 3\nI0424 13:55:09.530570 1369 log.go:172] (0xc00096c420) (0xc000750000) Create stream\nI0424 13:55:09.530586 1369 log.go:172] (0xc00096c420) (0xc000750000) Stream added, broadcasting: 5\nI0424 13:55:09.531701 1369 log.go:172] (0xc00096c420) Reply frame received for 5\nI0424 13:55:09.618343 1369 log.go:172] (0xc00096c420) Data frame received for 5\nI0424 13:55:09.618378 1369 log.go:172] (0xc000750000) (5) Data frame handling\nI0424 13:55:09.618389 1369 log.go:172] (0xc000750000) (5) Data frame sent\nI0424 13:55:09.618395 1369 log.go:172] (0xc00096c420) Data frame received for 5\nI0424 13:55:09.618400 1369 log.go:172] (0xc000750000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0424 13:55:09.618431 1369 log.go:172] (0xc00096c420) Data frame received for 3\nI0424 13:55:09.618449 1369 log.go:172] (0xc0008fe6e0) (3) Data frame handling\nI0424 13:55:09.618482 1369 log.go:172] (0xc0008fe6e0) (3) Data frame sent\nI0424 13:55:09.618500 1369 log.go:172] (0xc00096c420) Data frame received for 3\nI0424 13:55:09.618513 1369 log.go:172] (0xc0008fe6e0) (3) Data frame handling\nI0424 13:55:09.619945 1369 log.go:172] (0xc00096c420) Data frame received for 1\nI0424 13:55:09.619962 1369 log.go:172] (0xc0008fe5a0) (1) Data frame handling\nI0424 13:55:09.619988 1369 log.go:172] (0xc0008fe5a0) (1) Data frame sent\nI0424 13:55:09.620000 1369 log.go:172] (0xc00096c420) (0xc0008fe5a0) Stream removed, broadcasting: 1\nI0424 13:55:09.620013 1369 log.go:172] (0xc00096c420) Go away received\nI0424 13:55:09.620408 1369 log.go:172] (0xc00096c420) (0xc0008fe5a0) Stream removed, broadcasting: 1\nI0424 13:55:09.620440 1369 log.go:172] (0xc00096c420) (0xc0008fe6e0) Stream removed, broadcasting: 3\nI0424 13:55:09.620449 1369 log.go:172] (0xc00096c420) (0xc000750000) Stream removed, broadcasting: 5\n" Apr 24 13:55:09.624: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 24 13:55:09.624: 
INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 24 13:55:19.644: INFO: Waiting for StatefulSet statefulset-19/ss2 to complete update Apr 24 13:55:19.644: INFO: Waiting for Pod statefulset-19/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Apr 24 13:55:19.644: INFO: Waiting for Pod statefulset-19/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Apr 24 13:55:19.644: INFO: Waiting for Pod statefulset-19/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Apr 24 13:55:29.653: INFO: Waiting for StatefulSet statefulset-19/ss2 to complete update Apr 24 13:55:29.653: INFO: Waiting for Pod statefulset-19/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 24 13:55:39.653: INFO: Deleting all statefulset in ns statefulset-19 Apr 24 13:55:39.656: INFO: Scaling statefulset ss2 to 0 Apr 24 13:55:59.673: INFO: Waiting for statefulset status.replicas updated to 0 Apr 24 13:55:59.677: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:55:59.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-19" for this suite. 
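For context on the rolling update and rollback traced above: the revision bookkeeping (`ss2-6c5cd755cd` vs `ss2-7c9b54fd4c`) comes from the StatefulSet controller's `RollingUpdate` strategy, which replaces pods in reverse ordinal order and records each template as a ControllerRevision. A minimal sketch of a manifest comparable to `ss2` — the labels, service name, and image here are illustrative assumptions, not values read from the log:

```yaml
# Hypothetical StatefulSet comparable to the ss2 set under test.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test          # headless Service assumed to exist
  replicas: 3
  selector:
    matchLabels:
      app: ss2
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
  updateStrategy:
    type: RollingUpdate      # enables the revision-by-revision update/rollback seen above
```

Patching `.spec.template` (as the test does) creates a new update revision; rolling back is a second template change that makes the previous revision current again.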
Apr 24 13:56:07.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:56:07.784: INFO: namespace statefulset-19 deletion completed in 8.09206946s • [SLOW TEST:149.640 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:56:07.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-4525ca65-64d9-45b6-b8b6-2c60df546d08 STEP: Creating secret with name s-test-opt-upd-5bc2c9de-fb61-4564-9c85-afed47a97e10 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-4525ca65-64d9-45b6-b8b6-2c60df546d08 STEP: Updating secret s-test-opt-upd-5bc2c9de-fb61-4564-9c85-afed47a97e10 STEP: Creating secret with name s-test-opt-create-881cbf64-8be3-4a4d-8c00-08fdc98ea414 STEP: waiting to observe 
update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:57:24.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4247" for this suite. Apr 24 13:57:46.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:57:46.878: INFO: namespace projected-4247 deletion completed in 22.124790265s • [SLOW TEST:99.093 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:57:46.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image 
docker.io/library/nginx:1.14-alpine Apr 24 13:57:46.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3426' Apr 24 13:57:47.026: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 24 13:57:47.026: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Apr 24 13:57:47.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-3426' Apr 24 13:57:47.166: INFO: stderr: "" Apr 24 13:57:47.166: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:57:47.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3426" for this suite. 
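The stderr above shows `kubectl run --generator=job/v1` was already deprecated at this version in favor of `kubectl create`. A sketch of the equivalent Job manifest one could feed to `kubectl create -f -` — the container name is an assumption; the Job name, image, and restart policy are taken from the logged command:

```yaml
# Hypothetical equivalent of:
#   kubectl run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 \
#     --image=docker.io/library/nginx:1.14-alpine
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
spec:
  template:
    spec:
      containers:
      - name: e2e-test-nginx-job
        image: docker.io/library/nginx:1.14-alpine
      restartPolicy: OnFailure   # matches --restart=OnFailure
```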
Apr 24 13:57:53.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:57:53.273: INFO: namespace kubectl-3426 deletion completed in 6.103161021s • [SLOW TEST:6.394 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:57:53.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 24 13:57:53.354: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e9284da5-7889-44c1-827f-9076187170ea" in namespace "projected-7213" to be "success or failure" Apr 24 13:57:53.359: INFO: Pod "downwardapi-volume-e9284da5-7889-44c1-827f-9076187170ea": Phase="Pending", 
Reason="", readiness=false. Elapsed: 4.516968ms Apr 24 13:57:55.363: INFO: Pod "downwardapi-volume-e9284da5-7889-44c1-827f-9076187170ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00886416s Apr 24 13:57:57.367: INFO: Pod "downwardapi-volume-e9284da5-7889-44c1-827f-9076187170ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013252002s STEP: Saw pod success Apr 24 13:57:57.368: INFO: Pod "downwardapi-volume-e9284da5-7889-44c1-827f-9076187170ea" satisfied condition "success or failure" Apr 24 13:57:57.370: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-e9284da5-7889-44c1-827f-9076187170ea container client-container: STEP: delete the pod Apr 24 13:57:57.384: INFO: Waiting for pod downwardapi-volume-e9284da5-7889-44c1-827f-9076187170ea to disappear Apr 24 13:57:57.389: INFO: Pod downwardapi-volume-e9284da5-7889-44c1-827f-9076187170ea no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:57:57.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7213" for this suite. 
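The projected downward API test above injects the container's CPU limit into a file that the `client-container` then prints (which is why the framework fetches its logs to verify). A sketch of the kind of pod spec that exercises this, with assumed names and a stand-in image — the test's actual pod is generated by the framework:

```yaml
# Hypothetical pod projecting the container's cpu limit into a volume file.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "1"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```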
Apr 24 13:58:03.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:58:03.513: INFO: namespace projected-7213 deletion completed in 6.121475999s • [SLOW TEST:10.239 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:58:03.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Apr 24 13:58:03.601: INFO: Waiting up to 5m0s for pod "pod-febd780e-5e42-468c-ba1c-48c9e99d176d" in namespace "emptydir-8575" to be "success or failure" Apr 24 13:58:03.605: INFO: Pod "pod-febd780e-5e42-468c-ba1c-48c9e99d176d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.476699ms Apr 24 13:58:05.634: INFO: Pod "pod-febd780e-5e42-468c-ba1c-48c9e99d176d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.033124251s Apr 24 13:58:07.638: INFO: Pod "pod-febd780e-5e42-468c-ba1c-48c9e99d176d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03770689s STEP: Saw pod success Apr 24 13:58:07.638: INFO: Pod "pod-febd780e-5e42-468c-ba1c-48c9e99d176d" satisfied condition "success or failure" Apr 24 13:58:07.642: INFO: Trying to get logs from node iruya-worker2 pod pod-febd780e-5e42-468c-ba1c-48c9e99d176d container test-container: STEP: delete the pod Apr 24 13:58:07.714: INFO: Waiting for pod pod-febd780e-5e42-468c-ba1c-48c9e99d176d to disappear Apr 24 13:58:07.718: INFO: Pod pod-febd780e-5e42-468c-ba1c-48c9e99d176d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:58:07.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8575" for this suite. Apr 24 13:58:13.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:58:13.830: INFO: namespace emptydir-8575 deletion completed in 6.083651384s • [SLOW TEST:10.316 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:58:13.830: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Apr 24 13:58:13.864: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Apr 24 13:58:13.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9891'
Apr 24 13:58:14.256: INFO: stderr: ""
Apr 24 13:58:14.256: INFO: stdout: "service/redis-slave created\n"
Apr 24 13:58:14.256: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Apr 24 13:58:14.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9891'
Apr 24 13:58:14.532: INFO: stderr: ""
Apr 24 13:58:14.532: INFO: stdout: "service/redis-master created\n"
Apr 24 13:58:14.532: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Apr 24 13:58:14.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9891'
Apr 24 13:58:14.820: INFO: stderr: ""
Apr 24 13:58:14.820: INFO: stdout: "service/frontend created\n"
Apr 24 13:58:14.820: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Apr 24 13:58:14.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9891'
Apr 24 13:58:15.061: INFO: stderr: ""
Apr 24 13:58:15.061: INFO: stdout: "deployment.apps/frontend created\n"
Apr 24 13:58:15.061: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Apr 24 13:58:15.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9891'
Apr 24 13:58:15.352: INFO: stderr: ""
Apr 24 13:58:15.352: INFO: stdout: "deployment.apps/redis-master created\n"
Apr 24 13:58:15.352: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Apr 24 13:58:15.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9891'
Apr 24 13:58:15.607: INFO: stderr: ""
Apr 24 13:58:15.607: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Apr 24 13:58:15.607: INFO: Waiting for all frontend pods to be Running.
Apr 24 13:58:25.658: INFO: Waiting for frontend to serve content.
Apr 24 13:58:25.676: INFO: Trying to add a new entry to the guestbook.
Apr 24 13:58:25.692: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Apr 24 13:58:25.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9891'
Apr 24 13:58:25.879: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 24 13:58:25.879: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Apr 24 13:58:25.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9891'
Apr 24 13:58:26.018: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Apr 24 13:58:26.019: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Apr 24 13:58:26.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9891' Apr 24 13:58:26.162: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 24 13:58:26.162: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 24 13:58:26.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9891' Apr 24 13:58:26.250: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 24 13:58:26.250: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 24 13:58:26.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9891' Apr 24 13:58:26.372: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 24 13:58:26.372: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Apr 24 13:58:26.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9891' Apr 24 13:58:26.498: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 24 13:58:26.498: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:58:26.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9891" for this suite. Apr 24 13:59:04.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:59:04.728: INFO: namespace kubectl-9891 deletion completed in 38.168868656s • [SLOW TEST:50.898 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:59:04.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 24 13:59:04.807: INFO: Waiting up to 5m0s for pod 
"pod-10cb8228-0dd5-4456-929b-a804792f0bd5" in namespace "emptydir-8766" to be "success or failure" Apr 24 13:59:04.824: INFO: Pod "pod-10cb8228-0dd5-4456-929b-a804792f0bd5": Phase="Pending", Reason="", readiness=false. Elapsed: 17.399756ms Apr 24 13:59:06.829: INFO: Pod "pod-10cb8228-0dd5-4456-929b-a804792f0bd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021908246s Apr 24 13:59:08.833: INFO: Pod "pod-10cb8228-0dd5-4456-929b-a804792f0bd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026130342s STEP: Saw pod success Apr 24 13:59:08.833: INFO: Pod "pod-10cb8228-0dd5-4456-929b-a804792f0bd5" satisfied condition "success or failure" Apr 24 13:59:08.836: INFO: Trying to get logs from node iruya-worker2 pod pod-10cb8228-0dd5-4456-929b-a804792f0bd5 container test-container: STEP: delete the pod Apr 24 13:59:08.855: INFO: Waiting for pod pod-10cb8228-0dd5-4456-929b-a804792f0bd5 to disappear Apr 24 13:59:08.859: INFO: Pod pod-10cb8228-0dd5-4456-929b-a804792f0bd5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:59:08.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8766" for this suite. 
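The `(non-root,0666,tmpfs)` variant above runs as a non-root user on a memory-backed emptyDir and checks the created file's 0666 mode. A sketch of the shape of such a pod — the names, UID, and image are illustrative assumptions; the actual test pod is generated by the e2e framework:

```yaml
# Hypothetical non-root pod with a tmpfs-backed emptyDir volume.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # non-root, as in the [LinuxOnly] variant
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory           # tmpfs rather than node disk
```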
Apr 24 13:59:14.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:59:14.944: INFO: namespace emptydir-8766 deletion completed in 6.082387486s • [SLOW TEST:10.216 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:59:14.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Apr 24 13:59:14.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2590' Apr 24 13:59:15.249: INFO: stderr: "" Apr 24 13:59:15.249: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 24 13:59:15.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2590' Apr 24 13:59:15.362: INFO: stderr: "" Apr 24 13:59:15.362: INFO: stdout: "update-demo-nautilus-sbgkp update-demo-nautilus-vld5n " Apr 24 13:59:15.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sbgkp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2590' Apr 24 13:59:15.446: INFO: stderr: "" Apr 24 13:59:15.447: INFO: stdout: "" Apr 24 13:59:15.447: INFO: update-demo-nautilus-sbgkp is created but not running Apr 24 13:59:20.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2590' Apr 24 13:59:20.546: INFO: stderr: "" Apr 24 13:59:20.546: INFO: stdout: "update-demo-nautilus-sbgkp update-demo-nautilus-vld5n " Apr 24 13:59:20.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sbgkp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2590' Apr 24 13:59:20.635: INFO: stderr: "" Apr 24 13:59:20.635: INFO: stdout: "true" Apr 24 13:59:20.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sbgkp -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2590' Apr 24 13:59:20.727: INFO: stderr: "" Apr 24 13:59:20.728: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 24 13:59:20.728: INFO: validating pod update-demo-nautilus-sbgkp Apr 24 13:59:20.731: INFO: got data: { "image": "nautilus.jpg" } Apr 24 13:59:20.732: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 24 13:59:20.732: INFO: update-demo-nautilus-sbgkp is verified up and running Apr 24 13:59:20.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vld5n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2590' Apr 24 13:59:20.828: INFO: stderr: "" Apr 24 13:59:20.828: INFO: stdout: "true" Apr 24 13:59:20.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vld5n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2590' Apr 24 13:59:20.921: INFO: stderr: "" Apr 24 13:59:20.921: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 24 13:59:20.921: INFO: validating pod update-demo-nautilus-vld5n Apr 24 13:59:20.924: INFO: got data: { "image": "nautilus.jpg" } Apr 24 13:59:20.924: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 24 13:59:20.924: INFO: update-demo-nautilus-vld5n is verified up and running STEP: scaling down the replication controller Apr 24 13:59:20.926: INFO: scanned /root for discovery docs: Apr 24 13:59:20.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-2590' Apr 24 13:59:22.050: INFO: stderr: "" Apr 24 13:59:22.050: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 24 13:59:22.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2590' Apr 24 13:59:22.162: INFO: stderr: "" Apr 24 13:59:22.162: INFO: stdout: "update-demo-nautilus-sbgkp update-demo-nautilus-vld5n " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 24 13:59:27.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2590' Apr 24 13:59:27.268: INFO: stderr: "" Apr 24 13:59:27.268: INFO: stdout: "update-demo-nautilus-vld5n " Apr 24 13:59:27.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vld5n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2590' Apr 24 13:59:27.352: INFO: stderr: "" Apr 24 13:59:27.352: INFO: stdout: "true" Apr 24 13:59:27.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vld5n -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2590' Apr 24 13:59:27.447: INFO: stderr: "" Apr 24 13:59:27.447: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 24 13:59:27.447: INFO: validating pod update-demo-nautilus-vld5n Apr 24 13:59:27.450: INFO: got data: { "image": "nautilus.jpg" } Apr 24 13:59:27.450: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 24 13:59:27.450: INFO: update-demo-nautilus-vld5n is verified up and running STEP: scaling up the replication controller Apr 24 13:59:27.452: INFO: scanned /root for discovery docs: Apr 24 13:59:27.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-2590' Apr 24 13:59:28.565: INFO: stderr: "" Apr 24 13:59:28.565: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 24 13:59:28.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2590' Apr 24 13:59:28.670: INFO: stderr: "" Apr 24 13:59:28.670: INFO: stdout: "update-demo-nautilus-sd6jd update-demo-nautilus-vld5n " Apr 24 13:59:28.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sd6jd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2590' Apr 24 13:59:28.752: INFO: stderr: "" Apr 24 13:59:28.752: INFO: stdout: "" Apr 24 13:59:28.752: INFO: update-demo-nautilus-sd6jd is created but not running Apr 24 13:59:33.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2590' Apr 24 13:59:33.853: INFO: stderr: "" Apr 24 13:59:33.853: INFO: stdout: "update-demo-nautilus-sd6jd update-demo-nautilus-vld5n " Apr 24 13:59:33.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sd6jd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2590' Apr 24 13:59:33.965: INFO: stderr: "" Apr 24 13:59:33.965: INFO: stdout: "true" Apr 24 13:59:33.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sd6jd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2590' Apr 24 13:59:34.050: INFO: stderr: "" Apr 24 13:59:34.050: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 24 13:59:34.050: INFO: validating pod update-demo-nautilus-sd6jd Apr 24 13:59:34.054: INFO: got data: { "image": "nautilus.jpg" } Apr 24 13:59:34.054: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 24 13:59:34.054: INFO: update-demo-nautilus-sd6jd is verified up and running Apr 24 13:59:34.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vld5n -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2590' Apr 24 13:59:34.142: INFO: stderr: "" Apr 24 13:59:34.142: INFO: stdout: "true" Apr 24 13:59:34.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vld5n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2590' Apr 24 13:59:34.259: INFO: stderr: "" Apr 24 13:59:34.259: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 24 13:59:34.259: INFO: validating pod update-demo-nautilus-vld5n Apr 24 13:59:34.263: INFO: got data: { "image": "nautilus.jpg" } Apr 24 13:59:34.263: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 24 13:59:34.263: INFO: update-demo-nautilus-vld5n is verified up and running STEP: using delete to clean up resources Apr 24 13:59:34.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2590' Apr 24 13:59:34.378: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 24 13:59:34.378: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 24 13:59:34.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2590' Apr 24 13:59:34.487: INFO: stderr: "No resources found.\n" Apr 24 13:59:34.487: INFO: stdout: "" Apr 24 13:59:34.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2590 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 24 13:59:34.604: INFO: stderr: "" Apr 24 13:59:34.604: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 13:59:34.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2590" for this suite. 
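Editor's note: the cleanup step above lists only pods whose `.metadata.deletionTimestamp` is unset (`{{ if not .metadata.deletionTimestamp }}`), so pods already being torn down by the force delete are ignored. A sketch of that filter in Python, over a hypothetical pod list:

```python
# Mirror the go-template cleanup filter: keep only pods that are NOT
# already marked for deletion (no metadata.deletionTimestamp).
def live_pod_names(pod_list: dict) -> list:
    return [
        item["metadata"]["name"]
        for item in pod_list.get("items", [])
        if not item["metadata"].get("deletionTimestamp")
    ]

# Hypothetical list: one live pod, one already terminating.
pods = {"items": [
    {"metadata": {"name": "update-demo-nautilus-a"}},
    {"metadata": {"name": "update-demo-nautilus-b",
                  "deletionTimestamp": "2020-04-24T13:59:34Z"}},
]}
print(live_pod_names(pods))  # ['update-demo-nautilus-a']
```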
Apr 24 13:59:48.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 13:59:48.715: INFO: namespace kubectl-2590 deletion completed in 14.095481058s • [SLOW TEST:33.770 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 13:59:48.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:00:14.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-595" for this suite. Apr 24 14:00:20.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:00:21.073: INFO: namespace namespaces-595 deletion completed in 6.111355454s STEP: Destroying namespace "nsdeletetest-3933" for this suite. Apr 24 14:00:21.074: INFO: Namespace nsdeletetest-3933 was already deleted STEP: Destroying namespace "nsdeletetest-158" for this suite. Apr 24 14:00:27.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:00:27.179: INFO: namespace nsdeletetest-158 deletion completed in 6.104245956s • [SLOW TEST:38.464 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:00:27.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account 
to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-6218 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 24 14:00:27.227: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 24 14:00:55.338: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.81:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6218 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 24 14:00:55.338: INFO: >>> kubeConfig: /root/.kube/config I0424 14:00:55.367455 6 log.go:172] (0xc001a72420) (0xc002e1b7c0) Create stream I0424 14:00:55.367480 6 log.go:172] (0xc001a72420) (0xc002e1b7c0) Stream added, broadcasting: 1 I0424 14:00:55.369045 6 log.go:172] (0xc001a72420) Reply frame received for 1 I0424 14:00:55.369084 6 log.go:172] (0xc001a72420) (0xc00194c460) Create stream I0424 14:00:55.369101 6 log.go:172] (0xc001a72420) (0xc00194c460) Stream added, broadcasting: 3 I0424 14:00:55.370094 6 log.go:172] (0xc001a72420) Reply frame received for 3 I0424 14:00:55.370156 6 log.go:172] (0xc001a72420) (0xc002e1b900) Create stream I0424 14:00:55.370173 6 log.go:172] (0xc001a72420) (0xc002e1b900) Stream added, broadcasting: 5 I0424 14:00:55.371070 6 log.go:172] (0xc001a72420) Reply frame received for 5 I0424 14:00:55.448783 6 log.go:172] (0xc001a72420) Data frame received for 3 I0424 14:00:55.448808 6 log.go:172] (0xc00194c460) (3) Data frame handling I0424 14:00:55.448841 6 log.go:172] (0xc00194c460) (3) Data frame sent I0424 14:00:55.448862 6 log.go:172] (0xc001a72420) Data frame received for 3 I0424 14:00:55.448878 6 log.go:172] (0xc00194c460) (3) Data 
frame handling I0424 14:00:55.449106 6 log.go:172] (0xc001a72420) Data frame received for 5 I0424 14:00:55.449362 6 log.go:172] (0xc002e1b900) (5) Data frame handling I0424 14:00:55.451303 6 log.go:172] (0xc001a72420) Data frame received for 1 I0424 14:00:55.451321 6 log.go:172] (0xc002e1b7c0) (1) Data frame handling I0424 14:00:55.451338 6 log.go:172] (0xc002e1b7c0) (1) Data frame sent I0424 14:00:55.451349 6 log.go:172] (0xc001a72420) (0xc002e1b7c0) Stream removed, broadcasting: 1 I0424 14:00:55.451436 6 log.go:172] (0xc001a72420) (0xc002e1b7c0) Stream removed, broadcasting: 1 I0424 14:00:55.451459 6 log.go:172] (0xc001a72420) (0xc00194c460) Stream removed, broadcasting: 3 I0424 14:00:55.451469 6 log.go:172] (0xc001a72420) (0xc002e1b900) Stream removed, broadcasting: 5 Apr 24 14:00:55.451: INFO: Found all expected endpoints: [netserver-0] I0424 14:00:55.451530 6 log.go:172] (0xc001a72420) Go away received Apr 24 14:00:55.455: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.122:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6218 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 24 14:00:55.455: INFO: >>> kubeConfig: /root/.kube/config I0424 14:00:55.486118 6 log.go:172] (0xc001a73130) (0xc002e1bd60) Create stream I0424 14:00:55.486148 6 log.go:172] (0xc001a73130) (0xc002e1bd60) Stream added, broadcasting: 1 I0424 14:00:55.490342 6 log.go:172] (0xc001a73130) Reply frame received for 1 I0424 14:00:55.490407 6 log.go:172] (0xc001a73130) (0xc002e1be00) Create stream I0424 14:00:55.490424 6 log.go:172] (0xc001a73130) (0xc002e1be00) Stream added, broadcasting: 3 I0424 14:00:55.491925 6 log.go:172] (0xc001a73130) Reply frame received for 3 I0424 14:00:55.492020 6 log.go:172] (0xc001a73130) (0xc0021d1040) Create stream I0424 14:00:55.492072 6 log.go:172] (0xc001a73130) (0xc0021d1040) Stream added, broadcasting: 5 I0424 
14:00:55.494127 6 log.go:172] (0xc001a73130) Reply frame received for 5 I0424 14:00:55.568938 6 log.go:172] (0xc001a73130) Data frame received for 3 I0424 14:00:55.568970 6 log.go:172] (0xc002e1be00) (3) Data frame handling I0424 14:00:55.569003 6 log.go:172] (0xc002e1be00) (3) Data frame sent I0424 14:00:55.569037 6 log.go:172] (0xc001a73130) Data frame received for 3 I0424 14:00:55.569059 6 log.go:172] (0xc002e1be00) (3) Data frame handling I0424 14:00:55.569237 6 log.go:172] (0xc001a73130) Data frame received for 5 I0424 14:00:55.569260 6 log.go:172] (0xc0021d1040) (5) Data frame handling I0424 14:00:55.571135 6 log.go:172] (0xc001a73130) Data frame received for 1 I0424 14:00:55.571153 6 log.go:172] (0xc002e1bd60) (1) Data frame handling I0424 14:00:55.571166 6 log.go:172] (0xc002e1bd60) (1) Data frame sent I0424 14:00:55.571262 6 log.go:172] (0xc001a73130) (0xc002e1bd60) Stream removed, broadcasting: 1 I0424 14:00:55.571374 6 log.go:172] (0xc001a73130) (0xc002e1bd60) Stream removed, broadcasting: 1 I0424 14:00:55.571406 6 log.go:172] (0xc001a73130) (0xc002e1be00) Stream removed, broadcasting: 3 I0424 14:00:55.571507 6 log.go:172] (0xc001a73130) Go away received I0424 14:00:55.571560 6 log.go:172] (0xc001a73130) (0xc0021d1040) Stream removed, broadcasting: 5 Apr 24 14:00:55.571: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:00:55.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6218" for this suite. 
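Editor's note: the node-pod check above execs `curl http://<podIP>:8080/hostName` inside a host-network pod and compares the body against the expected netserver name. A local stand-in for that probe, using a throwaway HTTP server on the loopback interface instead of a real pod IP:

```python
# Stand-in for the e2e /hostName connectivity probe: serve a fixed
# hostname over HTTP, fetch it with a timeout, compare the reply.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class HostNameHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/hostName":
            body = b"netserver-0"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), HostNameHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/hostName" % server.server_port
reply = urlopen(url, timeout=15).read().decode()
server.shutdown()
print(reply)  # netserver-0
```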
Apr 24 14:01:19.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:01:19.684: INFO: namespace pod-network-test-6218 deletion completed in 24.108356067s • [SLOW TEST:52.505 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:01:19.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Apr 24 14:01:19.762: INFO: Pod name pod-release: Found 0 pods out of 1 Apr 24 14:01:24.774: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:01:25.788: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8969" for this suite. Apr 24 14:01:31.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:01:31.963: INFO: namespace replication-controller-8969 deletion completed in 6.171579314s • [SLOW TEST:12.277 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:01:31.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 24 14:01:32.101: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:01:40.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-775" for 
this suite. Apr 24 14:01:46.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:01:46.367: INFO: namespace init-container-775 deletion completed in 6.094632516s • [SLOW TEST:14.404 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:01:46.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 24 14:01:46.431: INFO: Waiting up to 5m0s for pod "pod-500f9cba-4aed-4c7d-838f-9097776f9195" in namespace "emptydir-2639" to be "success or failure" Apr 24 14:01:46.436: INFO: Pod "pod-500f9cba-4aed-4c7d-838f-9097776f9195": Phase="Pending", Reason="", readiness=false. Elapsed: 4.722081ms Apr 24 14:01:48.440: INFO: Pod "pod-500f9cba-4aed-4c7d-838f-9097776f9195": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008668408s Apr 24 14:01:50.443: INFO: Pod "pod-500f9cba-4aed-4c7d-838f-9097776f9195": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012389007s STEP: Saw pod success Apr 24 14:01:50.444: INFO: Pod "pod-500f9cba-4aed-4c7d-838f-9097776f9195" satisfied condition "success or failure" Apr 24 14:01:50.447: INFO: Trying to get logs from node iruya-worker pod pod-500f9cba-4aed-4c7d-838f-9097776f9195 container test-container: STEP: delete the pod Apr 24 14:01:50.525: INFO: Waiting for pod pod-500f9cba-4aed-4c7d-838f-9097776f9195 to disappear Apr 24 14:01:50.531: INFO: Pod pod-500f9cba-4aed-4c7d-838f-9097776f9195 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:01:50.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2639" for this suite. Apr 24 14:01:56.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:01:56.628: INFO: namespace emptydir-2639 deletion completed in 6.093485588s • [SLOW TEST:10.261 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 
14:01:56.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 24 14:01:56.674: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:02:01.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6173" for this suite. Apr 24 14:02:07.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:02:07.939: INFO: namespace init-container-6173 deletion completed in 6.106059037s • [SLOW TEST:11.311 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 
STEP: Creating a kubernetes client Apr 24 14:02:07.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 24 14:02:08.024: INFO: Waiting up to 5m0s for pod "pod-fe56520b-ab92-481b-a5d8-a5098774c6a7" in namespace "emptydir-3360" to be "success or failure" Apr 24 14:02:08.043: INFO: Pod "pod-fe56520b-ab92-481b-a5d8-a5098774c6a7": Phase="Pending", Reason="", readiness=false. Elapsed: 19.184531ms Apr 24 14:02:10.046: INFO: Pod "pod-fe56520b-ab92-481b-a5d8-a5098774c6a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022855723s Apr 24 14:02:12.051: INFO: Pod "pod-fe56520b-ab92-481b-a5d8-a5098774c6a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02709232s STEP: Saw pod success Apr 24 14:02:12.051: INFO: Pod "pod-fe56520b-ab92-481b-a5d8-a5098774c6a7" satisfied condition "success or failure" Apr 24 14:02:12.054: INFO: Trying to get logs from node iruya-worker2 pod pod-fe56520b-ab92-481b-a5d8-a5098774c6a7 container test-container: STEP: delete the pod Apr 24 14:02:12.093: INFO: Waiting for pod pod-fe56520b-ab92-481b-a5d8-a5098774c6a7 to disappear Apr 24 14:02:12.100: INFO: Pod pod-fe56520b-ab92-481b-a5d8-a5098774c6a7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:02:12.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3360" for this suite. 
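Editor's note: each emptydir test above waits up to 5m0s for the pod to reach a terminal phase ("success or failure"), polling while the phase is still `Pending`. A sketch of that polling loop; `get_phase` below is a stand-in for the API call, not the framework's real helper:

```python
# Poll a pod phase until it is terminal ("Succeeded" or "Failed") or a
# deadline passes, like the framework's "success or failure" wait.
import itertools
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, poll=0.01):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(poll)
    raise TimeoutError("pod never reached a terminal phase")

# Simulated phase sequence: Pending twice, Running once, then Succeeded.
phases = itertools.chain(["Pending", "Pending", "Running"],
                         itertools.repeat("Succeeded"))
result = wait_for_terminal_phase(lambda: next(phases))
print(result)  # Succeeded
```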
Apr 24 14:02:18.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:02:18.201: INFO: namespace emptydir-3360 deletion completed in 6.095585715s • [SLOW TEST:10.261 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:02:18.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 24 14:02:18.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2436' Apr 24 14:02:18.369: INFO: stderr: "" Apr 24 14:02:18.369: INFO: stdout: 
"pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Apr 24 14:02:18.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-2436' Apr 24 14:02:31.862: INFO: stderr: "" Apr 24 14:02:31.862: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:02:31.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2436" for this suite. Apr 24 14:02:37.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:02:37.976: INFO: namespace kubectl-2436 deletion completed in 6.110907236s • [SLOW TEST:19.774 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:02:37.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: 
Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Apr 24 14:02:38.692: INFO: Pod name wrapped-volume-race-4704b5f0-abb4-4838-b216-94b7a4b9bcb1: Found 0 pods out of 5 Apr 24 14:02:43.700: INFO: Pod name wrapped-volume-race-4704b5f0-abb4-4838-b216-94b7a4b9bcb1: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-4704b5f0-abb4-4838-b216-94b7a4b9bcb1 in namespace emptydir-wrapper-4597, will wait for the garbage collector to delete the pods Apr 24 14:02:57.796: INFO: Deleting ReplicationController wrapped-volume-race-4704b5f0-abb4-4838-b216-94b7a4b9bcb1 took: 8.940689ms Apr 24 14:02:58.097: INFO: Terminating ReplicationController wrapped-volume-race-4704b5f0-abb4-4838-b216-94b7a4b9bcb1 pods took: 300.29861ms STEP: Creating RC which spawns configmap-volume pods Apr 24 14:03:42.668: INFO: Pod name wrapped-volume-race-d9b4f33b-422c-44a0-9ad5-464b52a21247: Found 0 pods out of 5 Apr 24 14:03:47.676: INFO: Pod name wrapped-volume-race-d9b4f33b-422c-44a0-9ad5-464b52a21247: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-d9b4f33b-422c-44a0-9ad5-464b52a21247 in namespace emptydir-wrapper-4597, will wait for the garbage collector to delete the pods Apr 24 14:04:01.781: INFO: Deleting ReplicationController wrapped-volume-race-d9b4f33b-422c-44a0-9ad5-464b52a21247 took: 8.352285ms Apr 24 14:04:02.082: INFO: Terminating ReplicationController wrapped-volume-race-d9b4f33b-422c-44a0-9ad5-464b52a21247 pods took: 300.252642ms STEP: Creating RC which spawns configmap-volume pods Apr 24 14:04:42.340: INFO: Pod name wrapped-volume-race-fa500071-a446-46e1-824c-d4867fb44d9b: Found 0 pods 
out of 5 Apr 24 14:04:47.347: INFO: Pod name wrapped-volume-race-fa500071-a446-46e1-824c-d4867fb44d9b: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-fa500071-a446-46e1-824c-d4867fb44d9b in namespace emptydir-wrapper-4597, will wait for the garbage collector to delete the pods Apr 24 14:05:01.679: INFO: Deleting ReplicationController wrapped-volume-race-fa500071-a446-46e1-824c-d4867fb44d9b took: 7.51279ms Apr 24 14:05:02.080: INFO: Terminating ReplicationController wrapped-volume-race-fa500071-a446-46e1-824c-d4867fb44d9b pods took: 400.331097ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:05:43.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-4597" for this suite. Apr 24 14:05:51.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:05:51.963: INFO: namespace emptydir-wrapper-4597 deletion completed in 8.125241656s • [SLOW TEST:193.987 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: 
Creating a kubernetes client Apr 24 14:05:51.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 24 14:05:52.001: INFO: Waiting up to 5m0s for pod "pod-f93f802c-f322-436c-92a6-f7f8c2309972" in namespace "emptydir-3092" to be "success or failure" Apr 24 14:05:52.019: INFO: Pod "pod-f93f802c-f322-436c-92a6-f7f8c2309972": Phase="Pending", Reason="", readiness=false. Elapsed: 18.157528ms Apr 24 14:05:54.029: INFO: Pod "pod-f93f802c-f322-436c-92a6-f7f8c2309972": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028601062s Apr 24 14:05:56.034: INFO: Pod "pod-f93f802c-f322-436c-92a6-f7f8c2309972": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033007844s STEP: Saw pod success Apr 24 14:05:56.034: INFO: Pod "pod-f93f802c-f322-436c-92a6-f7f8c2309972" satisfied condition "success or failure" Apr 24 14:05:56.038: INFO: Trying to get logs from node iruya-worker pod pod-f93f802c-f322-436c-92a6-f7f8c2309972 container test-container: STEP: delete the pod Apr 24 14:05:56.157: INFO: Waiting for pod pod-f93f802c-f322-436c-92a6-f7f8c2309972 to disappear Apr 24 14:05:56.173: INFO: Pod pod-f93f802c-f322-436c-92a6-f7f8c2309972 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:05:56.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3092" for this suite. 
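[Editor's note] The (root,0777,tmpfs) case above exercises an emptyDir volume backed by memory. A minimal sketch of the kind of pod this test creates, assuming illustrative names, image, and command (none of these specifics appear in the log):

```yaml
# Hypothetical approximation of the test pod; only the emptyDir shape is
# the point here. medium: Memory is what backs the volume with tmpfs.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox               # illustrative image
    # print the mount source and the permissions of the mount point
    command: ["sh", "-c", "mount | grep /test-volume; stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # tmpfs-backed emptyDir
```

The pod runs to completion, the framework reads its logs (the "Trying to get logs" step above), and asserts the expected mount type and mode.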
Apr 24 14:06:02.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:06:02.279: INFO: namespace emptydir-3092 deletion completed in 6.103041962s • [SLOW TEST:10.315 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:06:02.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-b48563c1-a2ec-4491-81a3-1548bb48037f STEP: Creating a pod to test consume secrets Apr 24 14:06:02.343: INFO: Waiting up to 5m0s for pod "pod-secrets-d2703ba3-4c3e-4177-a125-1fce64f0473d" in namespace "secrets-8186" to be "success or failure" Apr 24 14:06:02.353: INFO: Pod "pod-secrets-d2703ba3-4c3e-4177-a125-1fce64f0473d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.519846ms Apr 24 14:06:04.356: INFO: Pod "pod-secrets-d2703ba3-4c3e-4177-a125-1fce64f0473d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.012856146s Apr 24 14:06:06.360: INFO: Pod "pod-secrets-d2703ba3-4c3e-4177-a125-1fce64f0473d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016947205s STEP: Saw pod success Apr 24 14:06:06.360: INFO: Pod "pod-secrets-d2703ba3-4c3e-4177-a125-1fce64f0473d" satisfied condition "success or failure" Apr 24 14:06:06.364: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-d2703ba3-4c3e-4177-a125-1fce64f0473d container secret-volume-test: STEP: delete the pod Apr 24 14:06:06.382: INFO: Waiting for pod pod-secrets-d2703ba3-4c3e-4177-a125-1fce64f0473d to disappear Apr 24 14:06:06.386: INFO: Pod pod-secrets-d2703ba3-4c3e-4177-a125-1fce64f0473d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:06:06.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8186" for this suite. Apr 24 14:06:12.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:06:12.472: INFO: namespace secrets-8186 deletion completed in 6.082841259s • [SLOW TEST:10.193 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:06:12.473: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 24 14:06:17.058: INFO: Successfully updated pod "pod-update-89e12572-f3de-458d-8bf9-6396cf185829" STEP: verifying the updated pod is in kubernetes Apr 24 14:06:17.106: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:06:17.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3595" for this suite. Apr 24 14:06:39.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:06:39.212: INFO: namespace pods-3595 deletion completed in 22.102753224s • [SLOW TEST:26.739 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:06:39.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-2239/secret-test-e5f6f2c4-f572-4eac-b37f-229521822577 STEP: Creating a pod to test consume secrets Apr 24 14:06:39.296: INFO: Waiting up to 5m0s for pod "pod-configmaps-577e3e38-a9f0-47f3-af15-89861b262d1d" in namespace "secrets-2239" to be "success or failure" Apr 24 14:06:39.312: INFO: Pod "pod-configmaps-577e3e38-a9f0-47f3-af15-89861b262d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.714909ms Apr 24 14:06:41.330: INFO: Pod "pod-configmaps-577e3e38-a9f0-47f3-af15-89861b262d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034159979s Apr 24 14:06:43.334: INFO: Pod "pod-configmaps-577e3e38-a9f0-47f3-af15-89861b262d1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038423457s STEP: Saw pod success Apr 24 14:06:43.334: INFO: Pod "pod-configmaps-577e3e38-a9f0-47f3-af15-89861b262d1d" satisfied condition "success or failure" Apr 24 14:06:43.336: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-577e3e38-a9f0-47f3-af15-89861b262d1d container env-test: STEP: delete the pod Apr 24 14:06:43.373: INFO: Waiting for pod pod-configmaps-577e3e38-a9f0-47f3-af15-89861b262d1d to disappear Apr 24 14:06:43.378: INFO: Pod pod-configmaps-577e3e38-a9f0-47f3-af15-89861b262d1d no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:06:43.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2239" for this suite. 
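[Editor's note] The environment-consumption test above wires a Secret key into a container env var. A sketch of the two objects involved, with illustrative names and values (the log only shows the generated secret name):

```yaml
# Hypothetical secret; the value is base64 for "value-1".
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-example
data:
  data-1: dmFsdWUtMQ==
---
apiVersion: v1
kind: Pod
metadata:
  name: env-test-example
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox               # illustrative image
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:            # pulls one key of the secret into the env
          name: secret-test-example
          key: data-1
```

The test then reads the container's logs and checks the echoed value matches the secret's data.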
Apr 24 14:06:49.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:06:49.483: INFO: namespace secrets-2239 deletion completed in 6.099346747s • [SLOW TEST:10.271 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:06:49.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Apr 24 14:06:49.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1437' Apr 24 14:06:52.166: INFO: stderr: "" Apr 24 14:06:52.166: INFO: stdout: "pod/pause created\n" Apr 24 14:06:52.166: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Apr 24 14:06:52.166: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1437" to be "running and ready" Apr 24 14:06:52.186: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.638711ms Apr 24 14:06:54.190: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023430999s Apr 24 14:06:56.194: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.02750907s Apr 24 14:06:56.194: INFO: Pod "pause" satisfied condition "running and ready" Apr 24 14:06:56.194: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Apr 24 14:06:56.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1437' Apr 24 14:06:56.317: INFO: stderr: "" Apr 24 14:06:56.317: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Apr 24 14:06:56.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1437' Apr 24 14:06:56.406: INFO: stderr: "" Apr 24 14:06:56.406: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Apr 24 14:06:56.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1437' Apr 24 14:06:56.491: INFO: stderr: "" Apr 24 14:06:56.491: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Apr 24 14:06:56.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1437' Apr 24 14:06:56.574: INFO: stderr: "" Apr 24 14:06:56.574: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] [k8s.io] Kubectl label 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Apr 24 14:06:56.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1437' Apr 24 14:06:56.674: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 24 14:06:56.674: INFO: stdout: "pod \"pause\" force deleted\n" Apr 24 14:06:56.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1437' Apr 24 14:06:56.767: INFO: stderr: "No resources found.\n" Apr 24 14:06:56.767: INFO: stdout: "" Apr 24 14:06:56.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1437 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 24 14:06:56.861: INFO: stderr: "" Apr 24 14:06:56.861: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:06:56.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1437" for this suite. 
Apr 24 14:07:02.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:07:03.021: INFO: namespace kubectl-1437 deletion completed in 6.156645033s • [SLOW TEST:13.538 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:07:03.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Apr 24 14:07:03.079: INFO: Waiting up to 5m0s for pod "var-expansion-3a9587f1-5223-4963-b313-4af3922eb6aa" in namespace "var-expansion-3067" to be "success or failure" Apr 24 14:07:03.089: INFO: Pod "var-expansion-3a9587f1-5223-4963-b313-4af3922eb6aa": Phase="Pending", Reason="", readiness=false. Elapsed: 10.262634ms Apr 24 14:07:05.093: INFO: Pod "var-expansion-3a9587f1-5223-4963-b313-4af3922eb6aa": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.014582545s Apr 24 14:07:07.098: INFO: Pod "var-expansion-3a9587f1-5223-4963-b313-4af3922eb6aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018704568s STEP: Saw pod success Apr 24 14:07:07.098: INFO: Pod "var-expansion-3a9587f1-5223-4963-b313-4af3922eb6aa" satisfied condition "success or failure" Apr 24 14:07:07.101: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-3a9587f1-5223-4963-b313-4af3922eb6aa container dapi-container: STEP: delete the pod Apr 24 14:07:07.120: INFO: Waiting for pod var-expansion-3a9587f1-5223-4963-b313-4af3922eb6aa to disappear Apr 24 14:07:07.125: INFO: Pod var-expansion-3a9587f1-5223-4963-b313-4af3922eb6aa no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:07:07.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3067" for this suite. Apr 24 14:07:13.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:07:13.235: INFO: namespace var-expansion-3067 deletion completed in 6.106251004s • [SLOW TEST:10.212 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:07:13.235: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 24 14:07:17.818: INFO: Successfully updated pod "pod-update-activedeadlineseconds-2e5c1d1d-da67-4d37-91a2-d88101f9e687" Apr 24 14:07:17.819: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-2e5c1d1d-da67-4d37-91a2-d88101f9e687" in namespace "pods-6430" to be "terminated due to deadline exceeded" Apr 24 14:07:17.869: INFO: Pod "pod-update-activedeadlineseconds-2e5c1d1d-da67-4d37-91a2-d88101f9e687": Phase="Running", Reason="", readiness=true. Elapsed: 50.7352ms Apr 24 14:07:19.874: INFO: Pod "pod-update-activedeadlineseconds-2e5c1d1d-da67-4d37-91a2-d88101f9e687": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.055727346s Apr 24 14:07:19.874: INFO: Pod "pod-update-activedeadlineseconds-2e5c1d1d-da67-4d37-91a2-d88101f9e687" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:07:19.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6430" for this suite. 
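[Editor's note] The activeDeadlineSeconds test above creates a running pod and then updates `spec.activeDeadlineSeconds` to a small value; the kubelet subsequently fails the pod with reason `DeadlineExceeded`, which is exactly the Phase="Failed", Reason="DeadlineExceeded" transition in the log. A sketch of the relevant spec field (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-update-activedeadlineseconds-example
spec:
  # The test sets this after the pod is already Running; once the
  # deadline elapses, the kubelet terminates the pod and marks it
  # Failed with reason DeadlineExceeded.
  activeDeadlineSeconds: 5
  containers:
  - name: main
    image: busybox               # illustrative image
    command: ["sleep", "3600"]
```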
Apr 24 14:07:25.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:07:25.978: INFO: namespace pods-6430 deletion completed in 6.099234564s • [SLOW TEST:12.743 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:07:25.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Apr 24 14:07:30.570: INFO: Successfully updated pod "annotationupdate0485341c-f026-44ea-aab0-62c9b329ccd4" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:07:32.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2637" for this suite. 
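[Editor's note] The projected downwardAPI test above mounts pod metadata as files and then mutates the pod's annotations, expecting the mounted file to update in place. A sketch of the volume shape involved, with illustrative names:

```yaml
# Hypothetical pod fragment: annotations are projected into a file,
# so updating metadata.annotations eventually updates the file content.
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-example
  annotations:
    builder: bar
spec:
  containers:
  - name: client-container
    image: busybox               # illustrative image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 1; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
```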
Apr 24 14:07:54.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:07:54.724: INFO: namespace projected-2637 deletion completed in 22.095169784s • [SLOW TEST:28.745 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:07:54.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 24 14:07:54.822: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2afa9e7c-6e06-4709-87c8-10441f6ca32b" in namespace "downward-api-8422" to be "success or failure" Apr 24 14:07:54.826: INFO: Pod "downwardapi-volume-2afa9e7c-6e06-4709-87c8-10441f6ca32b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.846469ms Apr 24 14:07:56.831: INFO: Pod "downwardapi-volume-2afa9e7c-6e06-4709-87c8-10441f6ca32b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008094944s Apr 24 14:07:58.835: INFO: Pod "downwardapi-volume-2afa9e7c-6e06-4709-87c8-10441f6ca32b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012590633s STEP: Saw pod success Apr 24 14:07:58.835: INFO: Pod "downwardapi-volume-2afa9e7c-6e06-4709-87c8-10441f6ca32b" satisfied condition "success or failure" Apr 24 14:07:58.838: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-2afa9e7c-6e06-4709-87c8-10441f6ca32b container client-container: STEP: delete the pod Apr 24 14:07:58.861: INFO: Waiting for pod downwardapi-volume-2afa9e7c-6e06-4709-87c8-10441f6ca32b to disappear Apr 24 14:07:58.864: INFO: Pod downwardapi-volume-2afa9e7c-6e06-4709-87c8-10441f6ca32b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:07:58.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8422" for this suite. 
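[Editor's note] The DefaultMode test above checks that files created by a downward API volume carry the volume's `defaultMode`. A sketch of the volume definition the test exercises (pod name and projected field are illustrative):

```yaml
# Hypothetical fragment: every file in this volume is created with
# mode 0400 unless an item overrides it.
volumes:
- name: podinfo
  downwardAPI:
    defaultMode: 0400
    items:
    - path: podname
      fieldRef:
        fieldPath: metadata.name
```

The test container stats the file and its logs are compared against the expected mode, which is the "Trying to get logs ... container client-container" step above.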
Apr 24 14:08:04.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:08:04.952: INFO: namespace downward-api-8422 deletion completed in 6.083324795s • [SLOW TEST:10.227 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:08:04.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:08:10.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9324" for this suite. 
Apr 24 14:08:16.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:08:16.695: INFO: namespace watch-9324 deletion completed in 6.186094107s • [SLOW TEST:11.743 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:08:16.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Apr 24 14:08:16.776: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 24 14:08:16.782: INFO: Waiting for terminating namespaces to be deleted... 
Apr 24 14:08:16.784: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Apr 24 14:08:16.791: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 24 14:08:16.791: INFO: Container kube-proxy ready: true, restart count 0 Apr 24 14:08:16.791: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 24 14:08:16.791: INFO: Container kindnet-cni ready: true, restart count 0 Apr 24 14:08:16.791: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Apr 24 14:08:16.796: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Apr 24 14:08:16.796: INFO: Container kube-proxy ready: true, restart count 0 Apr 24 14:08:16.796: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Apr 24 14:08:16.796: INFO: Container kindnet-cni ready: true, restart count 0 Apr 24 14:08:16.796: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Apr 24 14:08:16.796: INFO: Container coredns ready: true, restart count 0 Apr 24 14:08:16.796: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Apr 24 14:08:16.796: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-960069d3-8d7b-41cc-92b6-69720196199e 42 STEP: Trying to relaunch the pod, now with labels. 
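[Editor's note] The relaunch step above is the core of the NodeSelector predicate check: after applying a random label (here `kubernetes.io/e2e-960069d3-... = 42`) to the chosen node, the test recreates the pod with a matching nodeSelector and expects it to schedule onto that node. A sketch of the second pod, with a hypothetical label key standing in for the random one in the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-node-selector-example
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1  # illustrative image
  nodeSelector:
    # hypothetical stand-in for the random e2e label; the value "42"
    # matches the value the test applied to the node
    kubernetes.io/e2e-example: "42"
```

If no node carries the label, the pod stays Pending; because the label was just applied, the scheduler places it on the labeled node, and the test then removes the label and verifies it is gone.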
STEP: removing the label kubernetes.io/e2e-960069d3-8d7b-41cc-92b6-69720196199e off the node iruya-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-960069d3-8d7b-41cc-92b6-69720196199e
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 14:08:24.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3974" for this suite.
Apr 24 14:08:34.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 14:08:35.035: INFO: namespace sched-pred-3974 deletion completed in 10.083964035s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:18.339 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 14:08:35.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-85334c46-49df-47a3-aa2f-0f95c385744a
STEP: Creating configMap with name cm-test-opt-upd-97bfdaf0-1453-4ea5-a3ee-53b2b87a3dde
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-85334c46-49df-47a3-aa2f-0f95c385744a
STEP: Updating configmap cm-test-opt-upd-97bfdaf0-1453-4ea5-a3ee-53b2b87a3dde
STEP: Creating configMap with name cm-test-opt-create-d6b606fa-c643-48ae-8b69-6f1f4ded04c1
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 14:08:43.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9275" for this suite.
Apr 24 14:09:05.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 14:09:05.359: INFO: namespace projected-9275 deletion completed in 22.093202186s
• [SLOW TEST:30.324 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 14:09:05.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Apr 24 14:09:09.955: INFO: Successfully updated pod "labelsupdate66096a24-d7a0-4bce-be32-3bbf07ba25e9"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 14:09:11.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6365" for this suite.
Apr 24 14:09:34.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 14:09:34.094: INFO: namespace projected-6365 deletion completed in 22.120772814s
• [SLOW TEST:28.734 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 14:09:34.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 24 14:09:34.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Apr 24 14:09:34.320: INFO: stderr: ""
Apr 24 14:09:34.320: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-04-05T10:39:42Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 14:09:34.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1965" for this suite.
Apr 24 14:09:40.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 14:09:40.417: INFO: namespace kubectl-1965 deletion completed in 6.092834559s
• [SLOW TEST:6.323 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl version
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 14:09:40.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 24 14:09:40.503: INFO: Waiting up to 5m0s for pod "pod-4cbc8770-61e7-482c-be99-2760c531284d" in namespace "emptydir-6100" to be "success or failure"
Apr 24 14:09:40.506: INFO: Pod "pod-4cbc8770-61e7-482c-be99-2760c531284d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.393179ms
Apr 24 14:09:42.992: INFO: Pod "pod-4cbc8770-61e7-482c-be99-2760c531284d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.488882648s
Apr 24 14:09:44.996: INFO: Pod "pod-4cbc8770-61e7-482c-be99-2760c531284d": Phase="Running", Reason="", readiness=true. Elapsed: 4.493363878s
Apr 24 14:09:47.000: INFO: Pod "pod-4cbc8770-61e7-482c-be99-2760c531284d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.49766646s
STEP: Saw pod success
Apr 24 14:09:47.000: INFO: Pod "pod-4cbc8770-61e7-482c-be99-2760c531284d" satisfied condition "success or failure"
Apr 24 14:09:47.003: INFO: Trying to get logs from node iruya-worker pod pod-4cbc8770-61e7-482c-be99-2760c531284d container test-container:
STEP: delete the pod
Apr 24 14:09:47.081: INFO: Waiting for pod pod-4cbc8770-61e7-482c-be99-2760c531284d to disappear
Apr 24 14:09:47.086: INFO: Pod pod-4cbc8770-61e7-482c-be99-2760c531284d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 14:09:47.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6100" for this suite.
Apr 24 14:09:53.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 14:09:53.212: INFO: namespace emptydir-6100 deletion completed in 6.122475102s
• [SLOW TEST:12.794 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 14:09:53.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Apr 24 14:09:59.940: INFO: 0 pods remaining
Apr 24 14:09:59.940: INFO: 0 pods has nil DeletionTimestamp
Apr 24 14:09:59.940: INFO:
STEP: Gathering metrics
W0424 14:10:00.670058 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 24 14:10:00.670: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 14:10:00.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9492" for this suite.
Apr 24 14:10:06.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 14:10:06.834: INFO: namespace gc-9492 deletion completed in 6.160842972s
• [SLOW TEST:13.622 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 14:10:06.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 24 14:10:06.874: INFO: Waiting up to 5m0s for pod "downwardapi-volume-42f60c8b-80cb-43cb-a73f-c854d3527f96" in namespace "downward-api-9631" to be "success or failure"
Apr 24 14:10:06.889: INFO: Pod "downwardapi-volume-42f60c8b-80cb-43cb-a73f-c854d3527f96": Phase="Pending", Reason="", readiness=false. Elapsed: 15.005974ms
Apr 24 14:10:08.893: INFO: Pod "downwardapi-volume-42f60c8b-80cb-43cb-a73f-c854d3527f96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019642124s
Apr 24 14:10:10.897: INFO: Pod "downwardapi-volume-42f60c8b-80cb-43cb-a73f-c854d3527f96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023624507s
STEP: Saw pod success
Apr 24 14:10:10.897: INFO: Pod "downwardapi-volume-42f60c8b-80cb-43cb-a73f-c854d3527f96" satisfied condition "success or failure"
Apr 24 14:10:10.900: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-42f60c8b-80cb-43cb-a73f-c854d3527f96 container client-container:
STEP: delete the pod
Apr 24 14:10:10.932: INFO: Waiting for pod downwardapi-volume-42f60c8b-80cb-43cb-a73f-c854d3527f96 to disappear
Apr 24 14:10:10.937: INFO: Pod downwardapi-volume-42f60c8b-80cb-43cb-a73f-c854d3527f96 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 14:10:10.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9631" for this suite.
Apr 24 14:10:16.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 14:10:17.032: INFO: namespace downward-api-9631 deletion completed in 6.091373619s
• [SLOW TEST:10.197 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 14:10:17.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 24 14:10:17.126: INFO: Waiting up to 5m0s for pod "downwardapi-volume-22f46e54-678d-4266-8670-5177900a66b0" in namespace "projected-5405" to be "success or failure"
Apr 24 14:10:17.129: INFO: Pod "downwardapi-volume-22f46e54-678d-4266-8670-5177900a66b0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.701771ms
Apr 24 14:10:19.134: INFO: Pod "downwardapi-volume-22f46e54-678d-4266-8670-5177900a66b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008114365s
Apr 24 14:10:21.138: INFO: Pod "downwardapi-volume-22f46e54-678d-4266-8670-5177900a66b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012571213s
STEP: Saw pod success
Apr 24 14:10:21.138: INFO: Pod "downwardapi-volume-22f46e54-678d-4266-8670-5177900a66b0" satisfied condition "success or failure"
Apr 24 14:10:21.141: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-22f46e54-678d-4266-8670-5177900a66b0 container client-container:
STEP: delete the pod
Apr 24 14:10:21.162: INFO: Waiting for pod downwardapi-volume-22f46e54-678d-4266-8670-5177900a66b0 to disappear
Apr 24 14:10:21.171: INFO: Pod downwardapi-volume-22f46e54-678d-4266-8670-5177900a66b0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 14:10:21.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5405" for this suite.
Apr 24 14:10:27.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 14:10:27.280: INFO: namespace projected-5405 deletion completed in 6.105649707s
• [SLOW TEST:10.247 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 14:10:27.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-4a1a3d4a-df67-4e05-a3ab-2921057670df
STEP: Creating a pod to test consume configMaps
Apr 24 14:10:27.333: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c5b589cc-aaf1-47f9-adba-c6b195f50ffc" in namespace "projected-3285" to be "success or failure"
Apr 24 14:10:27.345: INFO: Pod "pod-projected-configmaps-c5b589cc-aaf1-47f9-adba-c6b195f50ffc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.797638ms
Apr 24 14:10:29.349: INFO: Pod "pod-projected-configmaps-c5b589cc-aaf1-47f9-adba-c6b195f50ffc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01624657s
Apr 24 14:10:31.353: INFO: Pod "pod-projected-configmaps-c5b589cc-aaf1-47f9-adba-c6b195f50ffc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020583307s
STEP: Saw pod success
Apr 24 14:10:31.353: INFO: Pod "pod-projected-configmaps-c5b589cc-aaf1-47f9-adba-c6b195f50ffc" satisfied condition "success or failure"
Apr 24 14:10:31.356: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-c5b589cc-aaf1-47f9-adba-c6b195f50ffc container projected-configmap-volume-test:
STEP: delete the pod
Apr 24 14:10:31.391: INFO: Waiting for pod pod-projected-configmaps-c5b589cc-aaf1-47f9-adba-c6b195f50ffc to disappear
Apr 24 14:10:31.412: INFO: Pod pod-projected-configmaps-c5b589cc-aaf1-47f9-adba-c6b195f50ffc no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 14:10:31.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3285" for this suite.
Apr 24 14:10:37.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 14:10:37.494: INFO: namespace projected-3285 deletion completed in 6.079168317s
• [SLOW TEST:10.213 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 14:10:37.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Apr 24 14:10:37.588: INFO: Waiting up to 5m0s for pod "pod-a28b96bf-a7fd-4ee3-b6d1-beaf8a8dd08f" in namespace "emptydir-1629" to be "success or failure"
Apr 24 14:10:37.609: INFO: Pod "pod-a28b96bf-a7fd-4ee3-b6d1-beaf8a8dd08f": Phase="Pending", Reason="", readiness=false. Elapsed: 21.736466ms
Apr 24 14:10:39.614: INFO: Pod "pod-a28b96bf-a7fd-4ee3-b6d1-beaf8a8dd08f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026000139s
Apr 24 14:10:41.618: INFO: Pod "pod-a28b96bf-a7fd-4ee3-b6d1-beaf8a8dd08f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030483592s
STEP: Saw pod success
Apr 24 14:10:41.618: INFO: Pod "pod-a28b96bf-a7fd-4ee3-b6d1-beaf8a8dd08f" satisfied condition "success or failure"
Apr 24 14:10:41.621: INFO: Trying to get logs from node iruya-worker pod pod-a28b96bf-a7fd-4ee3-b6d1-beaf8a8dd08f container test-container:
STEP: delete the pod
Apr 24 14:10:41.651: INFO: Waiting for pod pod-a28b96bf-a7fd-4ee3-b6d1-beaf8a8dd08f to disappear
Apr 24 14:10:41.657: INFO: Pod pod-a28b96bf-a7fd-4ee3-b6d1-beaf8a8dd08f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 14:10:41.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1629" for this suite.
Apr 24 14:10:47.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 14:10:47.749: INFO: namespace emptydir-1629 deletion completed in 6.089353518s
• [SLOW TEST:10.255 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 14:10:47.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 24 14:10:47.846: INFO: Waiting up to 5m0s for pod "pod-b29e50a7-864a-4f20-8d62-bf36c9b71bdc" in namespace "emptydir-7314" to be "success or failure"
Apr 24 14:10:47.854: INFO: Pod "pod-b29e50a7-864a-4f20-8d62-bf36c9b71bdc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.429391ms
Apr 24 14:10:49.859: INFO: Pod "pod-b29e50a7-864a-4f20-8d62-bf36c9b71bdc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012788529s
Apr 24 14:10:51.863: INFO: Pod "pod-b29e50a7-864a-4f20-8d62-bf36c9b71bdc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017040172s
STEP: Saw pod success
Apr 24 14:10:51.863: INFO: Pod "pod-b29e50a7-864a-4f20-8d62-bf36c9b71bdc" satisfied condition "success or failure"
Apr 24 14:10:51.865: INFO: Trying to get logs from node iruya-worker2 pod pod-b29e50a7-864a-4f20-8d62-bf36c9b71bdc container test-container:
STEP: delete the pod
Apr 24 14:10:51.903: INFO: Waiting for pod pod-b29e50a7-864a-4f20-8d62-bf36c9b71bdc to disappear
Apr 24 14:10:51.920: INFO: Pod pod-b29e50a7-864a-4f20-8d62-bf36c9b71bdc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 14:10:51.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7314" for this suite.
Apr 24 14:10:57.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:10:58.020: INFO: namespace emptydir-7314 deletion completed in 6.096557457s • [SLOW TEST:10.271 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:10:58.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults Apr 24 14:10:58.084: INFO: Waiting up to 5m0s for pod "client-containers-b681545a-ebb5-4e0d-b8f5-eedd36ed50a4" in namespace "containers-4569" to be "success or failure" Apr 24 14:10:58.103: INFO: Pod "client-containers-b681545a-ebb5-4e0d-b8f5-eedd36ed50a4": Phase="Pending", Reason="", readiness=false. Elapsed: 19.555705ms Apr 24 14:11:00.107: INFO: Pod "client-containers-b681545a-ebb5-4e0d-b8f5-eedd36ed50a4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.023475595s Apr 24 14:11:02.112: INFO: Pod "client-containers-b681545a-ebb5-4e0d-b8f5-eedd36ed50a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027990997s STEP: Saw pod success Apr 24 14:11:02.112: INFO: Pod "client-containers-b681545a-ebb5-4e0d-b8f5-eedd36ed50a4" satisfied condition "success or failure" Apr 24 14:11:02.115: INFO: Trying to get logs from node iruya-worker pod client-containers-b681545a-ebb5-4e0d-b8f5-eedd36ed50a4 container test-container: STEP: delete the pod Apr 24 14:11:02.137: INFO: Waiting for pod client-containers-b681545a-ebb5-4e0d-b8f5-eedd36ed50a4 to disappear Apr 24 14:11:02.184: INFO: Pod client-containers-b681545a-ebb5-4e0d-b8f5-eedd36ed50a4 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:11:02.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4569" for this suite. Apr 24 14:11:08.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:11:08.290: INFO: namespace containers-4569 deletion completed in 6.101969952s • [SLOW TEST:10.269 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container 
Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:11:08.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 24 14:11:12.396: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:11:12.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4245" for this suite. 
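As context for the test above: it verifies that with `TerminationMessagePolicy: FallbackToLogsOnError`, the message written to the termination-message path is surfaced in the container status. A minimal sketch of the kind of pod manifest such a test submits — field names are from the Kubernetes Pod API, but the image and command here are illustrative assumptions, not the suite's actual fixture:

```python
# Illustrative sketch only: builds (does not submit) a pod manifest of the
# shape the termination-message test exercises. Image/command are assumptions.
def termination_message_pod(name: str) -> dict:
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "term-msg",
                "image": "busybox",  # assumed image
                # Write the message to the default terminationMessagePath,
                # matching the "Expected: &{OK}" check in the log above.
                "command": ["sh", "-c", "echo -n OK > /dev/termination-log"],
                "terminationMessagePath": "/dev/termination-log",
                "terminationMessagePolicy": "FallbackToLogsOnError",
            }],
        },
    }

pod = termination_message_pod("term-demo")
print(pod["spec"]["containers"][0]["terminationMessagePolicy"])
```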
Apr 24 14:11:18.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:11:18.543: INFO: namespace container-runtime-4245 deletion completed in 6.096848271s • [SLOW TEST:10.252 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:11:18.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Apr 24 14:11:18.625: INFO: Waiting up to 5m0s for pod "var-expansion-477503a5-3db1-4484-81c9-0c09c871f051" in namespace "var-expansion-1788" to be "success or failure" Apr 24 14:11:18.628: INFO: Pod 
"var-expansion-477503a5-3db1-4484-81c9-0c09c871f051": Phase="Pending", Reason="", readiness=false. Elapsed: 3.43412ms Apr 24 14:11:21.076: INFO: Pod "var-expansion-477503a5-3db1-4484-81c9-0c09c871f051": Phase="Pending", Reason="", readiness=false. Elapsed: 2.451594855s Apr 24 14:11:23.155: INFO: Pod "var-expansion-477503a5-3db1-4484-81c9-0c09c871f051": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.529658213s STEP: Saw pod success Apr 24 14:11:23.155: INFO: Pod "var-expansion-477503a5-3db1-4484-81c9-0c09c871f051" satisfied condition "success or failure" Apr 24 14:11:23.158: INFO: Trying to get logs from node iruya-worker pod var-expansion-477503a5-3db1-4484-81c9-0c09c871f051 container dapi-container: STEP: delete the pod Apr 24 14:11:23.409: INFO: Waiting for pod var-expansion-477503a5-3db1-4484-81c9-0c09c871f051 to disappear Apr 24 14:11:23.436: INFO: Pod var-expansion-477503a5-3db1-4484-81c9-0c09c871f051 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:11:23.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1788" for this suite. 
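The variable-expansion test above relies on Kubernetes' `$(VAR)` substitution in a container's `command`/`args`: known variables are resolved from the container's env, unresolved references are left literal, and `$$` escapes a dollar sign. A toy resolver sketching those semantics — an illustration under those stated rules, not the framework's or kubelet's actual code:

```python
def expand(s: str, env: dict) -> str:
    """Toy sketch of Kubernetes $(VAR) command expansion:
    $(NAME) -> value from env if known, left literal otherwise;
    $$ escapes a dollar, so $$(NAME) stays as literal $(NAME)."""
    out = []
    i = 0
    while i < len(s):
        if s.startswith("$$", i):
            out.append("$")          # escaped dollar
            i += 2
        elif s.startswith("$(", i):
            j = s.find(")", i)
            if j == -1:              # unterminated reference: keep as-is
                out.append(s[i:])
                break
            name = s[i + 2:j]
            out.append(env.get(name, s[i:j + 1]))  # unknown var stays literal
            i = j + 1
        else:
            out.append(s[i])
            i += 1
    return "".join(out)

print(expand("test-value-$(MY_VAR)", {"MY_VAR": "x"}))  # test-value-x
print(expand("$$(MY_VAR)", {"MY_VAR": "x"}))            # $(MY_VAR)
print(expand("$(UNSET)", {}))                            # $(UNSET)
```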
Apr 24 14:11:29.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:11:29.539: INFO: namespace var-expansion-1788 deletion completed in 6.099343118s • [SLOW TEST:10.996 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:11:29.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Apr 24 14:11:29.665: INFO: Waiting up to 5m0s for pod "client-containers-8c3eb56c-d0a8-4a4f-914e-d4d3c80f33b8" in namespace "containers-3048" to be "success or failure" Apr 24 14:11:29.670: INFO: Pod "client-containers-8c3eb56c-d0a8-4a4f-914e-d4d3c80f33b8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.15406ms Apr 24 14:11:31.673: INFO: Pod "client-containers-8c3eb56c-d0a8-4a4f-914e-d4d3c80f33b8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008178259s Apr 24 14:11:33.678: INFO: Pod "client-containers-8c3eb56c-d0a8-4a4f-914e-d4d3c80f33b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012567041s STEP: Saw pod success Apr 24 14:11:33.678: INFO: Pod "client-containers-8c3eb56c-d0a8-4a4f-914e-d4d3c80f33b8" satisfied condition "success or failure" Apr 24 14:11:33.681: INFO: Trying to get logs from node iruya-worker2 pod client-containers-8c3eb56c-d0a8-4a4f-914e-d4d3c80f33b8 container test-container: STEP: delete the pod Apr 24 14:11:33.702: INFO: Waiting for pod client-containers-8c3eb56c-d0a8-4a4f-914e-d4d3c80f33b8 to disappear Apr 24 14:11:33.706: INFO: Pod client-containers-8c3eb56c-d0a8-4a4f-914e-d4d3c80f33b8 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:11:33.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3048" for this suite. Apr 24 14:11:39.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:11:39.802: INFO: namespace containers-3048 deletion completed in 6.091870954s • [SLOW TEST:10.262 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 
14:11:39.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 24 14:11:39.891: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:11:44.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5229" for this suite. Apr 24 14:12:34.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:12:34.139: INFO: namespace pods-5229 deletion completed in 50.09147396s • [SLOW TEST:54.337 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:12:34.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting 
for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Apr 24 14:12:34.236: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8596,SelfLink:/api/v1/namespaces/watch-8596/configmaps/e2e-watch-test-label-changed,UID:f9ce8843-256d-4b89-ab7e-7a11418f768e,ResourceVersion:7192547,Generation:0,CreationTimestamp:2020-04-24 14:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 24 14:12:34.236: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8596,SelfLink:/api/v1/namespaces/watch-8596/configmaps/e2e-watch-test-label-changed,UID:f9ce8843-256d-4b89-ab7e-7a11418f768e,ResourceVersion:7192548,Generation:0,CreationTimestamp:2020-04-24 14:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Apr 24 14:12:34.237: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8596,SelfLink:/api/v1/namespaces/watch-8596/configmaps/e2e-watch-test-label-changed,UID:f9ce8843-256d-4b89-ab7e-7a11418f768e,ResourceVersion:7192549,Generation:0,CreationTimestamp:2020-04-24 14:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Apr 24 14:12:44.300: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8596,SelfLink:/api/v1/namespaces/watch-8596/configmaps/e2e-watch-test-label-changed,UID:f9ce8843-256d-4b89-ab7e-7a11418f768e,ResourceVersion:7192570,Generation:0,CreationTimestamp:2020-04-24 14:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 24 14:12:44.301: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8596,SelfLink:/api/v1/namespaces/watch-8596/configmaps/e2e-watch-test-label-changed,UID:f9ce8843-256d-4b89-ab7e-7a11418f768e,ResourceVersion:7192571,Generation:0,CreationTimestamp:2020-04-24 14:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Apr 24 14:12:44.301: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8596,SelfLink:/api/v1/namespaces/watch-8596/configmaps/e2e-watch-test-label-changed,UID:f9ce8843-256d-4b89-ab7e-7a11418f768e,ResourceVersion:7192572,Generation:0,CreationTimestamp:2020-04-24 14:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:12:44.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8596" for this suite. 
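The Watchers test above checks a subtle point: for a label-selector watch, an edit that removes the matching label is delivered as DELETED, and an edit that restores it as ADDED, even though the object itself was only modified. A self-contained sketch of that translation logic (a simplification for illustration, not the apiserver's implementation):

```python
def selector_events(changes, selector):
    """changes: list of (labels_before, labels_after) dicts for one object.
    Yields the event a label-selector watcher would see for each change."""
    def match(labels):
        return all(labels.get(k) == v for k, v in selector.items())
    for before, after in changes:
        was, now = match(before), match(after)
        if not was and now:
            yield "ADDED"      # object entered the watched set
        elif was and not now:
            yield "DELETED"    # object left the watched set
        elif was and now:
            yield "MODIFIED"
        # neither side matches: this watcher sees nothing

sel = {"watch-this-configmap": "label-changed-and-restored"}
events = list(selector_events(
    [({}, sel),                                  # created with the label
     (sel, sel),                                 # data edit, label kept
     (sel, {"watch-this-configmap": "other"}),   # label changed away
     ({"watch-this-configmap": "other"}, sel)],  # label restored
    sel))
print(events)  # ['ADDED', 'MODIFIED', 'DELETED', 'ADDED']
```

This mirrors the event sequence in the log: ADDED, MODIFIED, DELETED around the label change, then ADDED again once the label value is restored.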
Apr 24 14:12:50.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:12:50.413: INFO: namespace watch-8596 deletion completed in 6.105893976s • [SLOW TEST:16.274 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:12:50.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 24 14:12:58.542: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 24 14:12:58.546: INFO: Pod pod-with-poststart-exec-hook still exists Apr 24 14:13:00.546: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 24 14:13:00.551: INFO: Pod pod-with-poststart-exec-hook still exists Apr 24 14:13:02.546: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 24 14:13:02.550: INFO: Pod pod-with-poststart-exec-hook still exists Apr 24 14:13:04.546: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 24 14:13:04.550: INFO: Pod pod-with-poststart-exec-hook still exists Apr 24 14:13:06.546: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 24 14:13:06.550: INFO: Pod pod-with-poststart-exec-hook still exists Apr 24 14:13:08.546: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 24 14:13:08.550: INFO: Pod pod-with-poststart-exec-hook still exists Apr 24 14:13:10.546: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 24 14:13:10.551: INFO: Pod pod-with-poststart-exec-hook still exists Apr 24 14:13:12.546: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 24 14:13:12.550: INFO: Pod pod-with-poststart-exec-hook still exists Apr 24 14:13:14.546: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 24 14:13:14.551: INFO: Pod pod-with-poststart-exec-hook still exists Apr 24 14:13:16.546: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 24 14:13:16.551: INFO: Pod pod-with-poststart-exec-hook still exists Apr 24 14:13:18.546: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 24 14:13:18.550: INFO: Pod 
pod-with-poststart-exec-hook still exists Apr 24 14:13:20.546: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 24 14:13:20.550: INFO: Pod pod-with-poststart-exec-hook still exists Apr 24 14:13:22.546: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 24 14:13:22.550: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:13:22.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-654" for this suite. Apr 24 14:13:44.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:13:44.640: INFO: namespace container-lifecycle-hook-654 deletion completed in 22.084172898s • [SLOW TEST:54.227 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:13:44.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create 
ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-1a1899c9-b4ff-47df-ac5e-6408f93c9267 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:13:44.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8303" for this suite. Apr 24 14:13:50.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:13:50.817: INFO: namespace configmap-8303 deletion completed in 6.101208529s • [SLOW TEST:6.177 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:13:50.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: 
verifying the pod is in kubernetes Apr 24 14:13:54.934: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Apr 24 14:14:05.040: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:14:05.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8063" for this suite. Apr 24 14:14:11.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:14:11.133: INFO: namespace pods-8063 deletion completed in 6.087274973s • [SLOW TEST:20.316 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:14:11.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should 
be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Apr 24 14:14:11.202: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7199,SelfLink:/api/v1/namespaces/watch-7199/configmaps/e2e-watch-test-watch-closed,UID:dc9efdb7-8d84-4e16-b01b-0774a8272af5,ResourceVersion:7192826,Generation:0,CreationTimestamp:2020-04-24 14:14:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 24 14:14:11.202: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7199,SelfLink:/api/v1/namespaces/watch-7199/configmaps/e2e-watch-test-watch-closed,UID:dc9efdb7-8d84-4e16-b01b-0774a8272af5,ResourceVersion:7192827,Generation:0,CreationTimestamp:2020-04-24 14:14:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to 
the configmap since the first watch closed Apr 24 14:14:11.230: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7199,SelfLink:/api/v1/namespaces/watch-7199/configmaps/e2e-watch-test-watch-closed,UID:dc9efdb7-8d84-4e16-b01b-0774a8272af5,ResourceVersion:7192828,Generation:0,CreationTimestamp:2020-04-24 14:14:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 24 14:14:11.230: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7199,SelfLink:/api/v1/namespaces/watch-7199/configmaps/e2e-watch-test-watch-closed,UID:dc9efdb7-8d84-4e16-b01b-0774a8272af5,ResourceVersion:7192829,Generation:0,CreationTimestamp:2020-04-24 14:14:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:14:11.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7199" for this suite. 
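Restated, the property this Watchers test verifies: a new watch opened at the last observed resourceVersion replays exactly the events that happened after that point. A toy model of that replay — note that real resourceVersions are opaque strings and must not be compared numerically by clients; the integer ordering here is purely an illustrative assumption:

```python
def replay_since(history, last_rv):
    """history: ordered list of (resource_version, event_type) tuples.
    A watch opened with resourceVersion=last_rv delivers only later events.
    (Sketch: treats resourceVersion as an ordered integer, which real
    clients must not do -- it is an opaque string in the API.)"""
    return [etype for rv, etype in history if rv > last_rv]

history = [(7192826, "ADDED"), (7192827, "MODIFIED"),
           (7192828, "MODIFIED"), (7192829, "DELETED")]
# The first watch closed after observing rv 7192827; the restarted watch
# sees only the MODIFIED and DELETED that happened while it was closed.
print(replay_since(history, 7192827))  # ['MODIFIED', 'DELETED']
```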
Apr 24 14:14:17.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:14:17.327: INFO: namespace watch-7199 deletion completed in 6.091489137s • [SLOW TEST:6.193 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:14:17.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 24 14:14:17.430: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d5e742b2-0f41-40b3-ae12-aca08033c928" in namespace "downward-api-9555" to be "success or failure" Apr 24 14:14:17.434: INFO: Pod "downwardapi-volume-d5e742b2-0f41-40b3-ae12-aca08033c928": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.359326ms Apr 24 14:14:19.438: INFO: Pod "downwardapi-volume-d5e742b2-0f41-40b3-ae12-aca08033c928": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008564517s Apr 24 14:14:21.443: INFO: Pod "downwardapi-volume-d5e742b2-0f41-40b3-ae12-aca08033c928": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013072804s STEP: Saw pod success Apr 24 14:14:21.443: INFO: Pod "downwardapi-volume-d5e742b2-0f41-40b3-ae12-aca08033c928" satisfied condition "success or failure" Apr 24 14:14:21.446: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-d5e742b2-0f41-40b3-ae12-aca08033c928 container client-container: STEP: delete the pod Apr 24 14:14:21.477: INFO: Waiting for pod downwardapi-volume-d5e742b2-0f41-40b3-ae12-aca08033c928 to disappear Apr 24 14:14:21.510: INFO: Pod downwardapi-volume-d5e742b2-0f41-40b3-ae12-aca08033c928 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:14:21.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9555" for this suite. 
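For reference, the Downward API test above consumes the container's memory request through a `downwardAPI` volume whose item uses a `resourceFieldRef`. A sketch of the manifest shape involved — field names follow the Kubernetes Pod API, while the image, mount path, and request size are illustrative assumptions:

```python
def downward_api_volume_pod(name: str) -> dict:
    # Sketch: a pod exposing its own memory request as a file via a
    # downwardAPI volume. Image, paths, and sizes are assumptions.
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": "client-container",
                "image": "busybox",  # assumed image
                "command": ["sh", "-c", "cat /etc/podinfo/memory_request"],
                "resources": {"requests": {"memory": "32Mi"}},
                "volumeMounts": [{"name": "podinfo",
                                  "mountPath": "/etc/podinfo"}],
            }],
            "volumes": [{
                "name": "podinfo",
                "downwardAPI": {"items": [{
                    "path": "memory_request",
                    "resourceFieldRef": {
                        "containerName": "client-container",
                        "resource": "requests.memory",
                        # divisor 1Mi: the file holds the request in MiB
                        "divisor": "1Mi",
                    },
                }]},
            }],
        },
    }

pod = downward_api_volume_pod("downward-demo")
ref = pod["spec"]["volumes"][0]["downwardAPI"]["items"][0]["resourceFieldRef"]
print(ref["resource"])  # requests.memory
```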
Apr 24 14:14:27.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:14:27.602: INFO: namespace downward-api-9555 deletion completed in 6.088241304s • [SLOW TEST:10.275 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:14:27.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs Apr 24 14:14:27.686: INFO: Waiting up to 5m0s for pod "pod-a4aad5c3-96d9-425a-92b4-58b0f33172ad" in namespace "emptydir-8113" to be "success or failure" Apr 24 14:14:27.692: INFO: Pod "pod-a4aad5c3-96d9-425a-92b4-58b0f33172ad": Phase="Pending", Reason="", readiness=false. Elapsed: 5.976555ms Apr 24 14:14:29.696: INFO: Pod "pod-a4aad5c3-96d9-425a-92b4-58b0f33172ad": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.010281961s Apr 24 14:14:31.701: INFO: Pod "pod-a4aad5c3-96d9-425a-92b4-58b0f33172ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015201399s STEP: Saw pod success Apr 24 14:14:31.701: INFO: Pod "pod-a4aad5c3-96d9-425a-92b4-58b0f33172ad" satisfied condition "success or failure" Apr 24 14:14:31.705: INFO: Trying to get logs from node iruya-worker2 pod pod-a4aad5c3-96d9-425a-92b4-58b0f33172ad container test-container: STEP: delete the pod Apr 24 14:14:31.723: INFO: Waiting for pod pod-a4aad5c3-96d9-425a-92b4-58b0f33172ad to disappear Apr 24 14:14:31.741: INFO: Pod pod-a4aad5c3-96d9-425a-92b4-58b0f33172ad no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:14:31.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8113" for this suite. Apr 24 14:14:37.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:14:37.836: INFO: namespace emptydir-8113 deletion completed in 6.091864951s • [SLOW TEST:10.234 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:14:37.837: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Apr 24 14:14:37.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3688' Apr 24 14:14:38.218: INFO: stderr: "" Apr 24 14:14:38.218: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Apr 24 14:14:39.227: INFO: Selector matched 1 pods for map[app:redis] Apr 24 14:14:39.227: INFO: Found 0 / 1 Apr 24 14:14:40.222: INFO: Selector matched 1 pods for map[app:redis] Apr 24 14:14:40.222: INFO: Found 0 / 1 Apr 24 14:14:41.221: INFO: Selector matched 1 pods for map[app:redis] Apr 24 14:14:41.221: INFO: Found 1 / 1 Apr 24 14:14:41.221: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Apr 24 14:14:41.223: INFO: Selector matched 1 pods for map[app:redis] Apr 24 14:14:41.223: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 24 14:14:41.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-2n6td --namespace=kubectl-3688 -p {"metadata":{"annotations":{"x":"y"}}}' Apr 24 14:14:41.320: INFO: stderr: "" Apr 24 14:14:41.320: INFO: stdout: "pod/redis-master-2n6td patched\n" STEP: checking annotations Apr 24 14:14:41.323: INFO: Selector matched 1 pods for map[app:redis] Apr 24 14:14:41.323: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:14:41.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3688" for this suite. Apr 24 14:15:03.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:15:03.423: INFO: namespace kubectl-3688 deletion completed in 22.097426472s • [SLOW TEST:25.586 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:15:03.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-6868edfc-df46-43c6-97ad-a7c543c2da5c STEP: Creating a pod to test consume secrets Apr 24 14:15:03.503: INFO: Waiting up to 5m0s for pod "pod-secrets-d9c50b06-6669-4212-9338-5b6324ae53aa" in namespace "secrets-9927" to be "success or failure" Apr 24 
14:15:03.507: INFO: Pod "pod-secrets-d9c50b06-6669-4212-9338-5b6324ae53aa": Phase="Pending", Reason="", readiness=false. Elapsed: 3.779528ms Apr 24 14:15:05.511: INFO: Pod "pod-secrets-d9c50b06-6669-4212-9338-5b6324ae53aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008452345s Apr 24 14:15:07.515: INFO: Pod "pod-secrets-d9c50b06-6669-4212-9338-5b6324ae53aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012574796s STEP: Saw pod success Apr 24 14:15:07.516: INFO: Pod "pod-secrets-d9c50b06-6669-4212-9338-5b6324ae53aa" satisfied condition "success or failure" Apr 24 14:15:07.518: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-d9c50b06-6669-4212-9338-5b6324ae53aa container secret-volume-test: STEP: delete the pod Apr 24 14:15:07.538: INFO: Waiting for pod pod-secrets-d9c50b06-6669-4212-9338-5b6324ae53aa to disappear Apr 24 14:15:07.582: INFO: Pod pod-secrets-d9c50b06-6669-4212-9338-5b6324ae53aa no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:15:07.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9927" for this suite. 
Apr 24 14:15:13.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:15:13.708: INFO: namespace secrets-9927 deletion completed in 6.12206398s • [SLOW TEST:10.285 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:15:13.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 24 14:15:13.760: INFO: Waiting up to 5m0s for pod "pod-66ef8f1f-f843-41c2-80d7-796c94f5824f" in namespace "emptydir-3350" to be "success or failure" Apr 24 14:15:13.764: INFO: Pod "pod-66ef8f1f-f843-41c2-80d7-796c94f5824f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.835975ms Apr 24 14:15:16.690: INFO: Pod "pod-66ef8f1f-f843-41c2-80d7-796c94f5824f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.930200225s Apr 24 14:15:18.695: INFO: Pod "pod-66ef8f1f-f843-41c2-80d7-796c94f5824f": Phase="Running", Reason="", readiness=true. Elapsed: 4.934716173s Apr 24 14:15:20.699: INFO: Pod "pod-66ef8f1f-f843-41c2-80d7-796c94f5824f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.9390687s STEP: Saw pod success Apr 24 14:15:20.699: INFO: Pod "pod-66ef8f1f-f843-41c2-80d7-796c94f5824f" satisfied condition "success or failure" Apr 24 14:15:20.702: INFO: Trying to get logs from node iruya-worker pod pod-66ef8f1f-f843-41c2-80d7-796c94f5824f container test-container: STEP: delete the pod Apr 24 14:15:20.736: INFO: Waiting for pod pod-66ef8f1f-f843-41c2-80d7-796c94f5824f to disappear Apr 24 14:15:20.745: INFO: Pod pod-66ef8f1f-f843-41c2-80d7-796c94f5824f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:15:20.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3350" for this suite. 
Apr 24 14:15:26.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:15:26.845: INFO: namespace emptydir-3350 deletion completed in 6.097385092s • [SLOW TEST:13.137 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:15:26.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-03c5ce50-55ca-4795-900e-e126fb129401 STEP: Creating secret with name s-test-opt-upd-571a451b-8c55-46e7-89f6-27c6d9c93809 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-03c5ce50-55ca-4795-900e-e126fb129401 STEP: Updating secret s-test-opt-upd-571a451b-8c55-46e7-89f6-27c6d9c93809 STEP: Creating secret with name s-test-opt-create-b4c992a1-805b-4a27-98cf-4cc78d04b8f3 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:15:35.098: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "secrets-93" for this suite. Apr 24 14:15:57.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:15:57.238: INFO: namespace secrets-93 deletion completed in 22.136886742s • [SLOW TEST:30.392 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:15:57.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-55hc STEP: Creating a pod to test atomic-volume-subpath Apr 24 14:15:57.332: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-55hc" in namespace "subpath-7649" to be "success or failure" Apr 24 14:15:57.350: INFO: Pod "pod-subpath-test-configmap-55hc": Phase="Pending", Reason="", 
readiness=false. Elapsed: 18.445663ms Apr 24 14:15:59.354: INFO: Pod "pod-subpath-test-configmap-55hc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022541548s Apr 24 14:16:01.358: INFO: Pod "pod-subpath-test-configmap-55hc": Phase="Running", Reason="", readiness=true. Elapsed: 4.026290434s Apr 24 14:16:03.386: INFO: Pod "pod-subpath-test-configmap-55hc": Phase="Running", Reason="", readiness=true. Elapsed: 6.053939446s Apr 24 14:16:05.390: INFO: Pod "pod-subpath-test-configmap-55hc": Phase="Running", Reason="", readiness=true. Elapsed: 8.058546778s Apr 24 14:16:07.395: INFO: Pod "pod-subpath-test-configmap-55hc": Phase="Running", Reason="", readiness=true. Elapsed: 10.062923064s Apr 24 14:16:09.399: INFO: Pod "pod-subpath-test-configmap-55hc": Phase="Running", Reason="", readiness=true. Elapsed: 12.067300989s Apr 24 14:16:11.403: INFO: Pod "pod-subpath-test-configmap-55hc": Phase="Running", Reason="", readiness=true. Elapsed: 14.070810168s Apr 24 14:16:13.406: INFO: Pod "pod-subpath-test-configmap-55hc": Phase="Running", Reason="", readiness=true. Elapsed: 16.074367145s Apr 24 14:16:15.411: INFO: Pod "pod-subpath-test-configmap-55hc": Phase="Running", Reason="", readiness=true. Elapsed: 18.078886273s Apr 24 14:16:17.415: INFO: Pod "pod-subpath-test-configmap-55hc": Phase="Running", Reason="", readiness=true. Elapsed: 20.083240854s Apr 24 14:16:19.419: INFO: Pod "pod-subpath-test-configmap-55hc": Phase="Running", Reason="", readiness=true. Elapsed: 22.087086708s Apr 24 14:16:21.423: INFO: Pod "pod-subpath-test-configmap-55hc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.091345239s STEP: Saw pod success Apr 24 14:16:21.423: INFO: Pod "pod-subpath-test-configmap-55hc" satisfied condition "success or failure" Apr 24 14:16:21.426: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-55hc container test-container-subpath-configmap-55hc: STEP: delete the pod Apr 24 14:16:21.465: INFO: Waiting for pod pod-subpath-test-configmap-55hc to disappear Apr 24 14:16:21.476: INFO: Pod pod-subpath-test-configmap-55hc no longer exists STEP: Deleting pod pod-subpath-test-configmap-55hc Apr 24 14:16:21.476: INFO: Deleting pod "pod-subpath-test-configmap-55hc" in namespace "subpath-7649" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:16:21.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7649" for this suite. Apr 24 14:16:27.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:16:27.587: INFO: namespace subpath-7649 deletion completed in 6.084193586s • [SLOW TEST:30.349 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:16:27.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0424 14:17:07.872092 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 24 14:17:07.872: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:17:07.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "gc-5192" for this suite. Apr 24 14:17:17.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:17:18.003: INFO: namespace gc-5192 deletion completed in 10.128267214s • [SLOW TEST:50.416 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:17:18.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 24 14:17:18.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6743' Apr 24 14:17:20.449: INFO: stderr: "kubectl 
run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 24 14:17:20.449: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 Apr 24 14:17:22.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-6743' Apr 24 14:17:22.662: INFO: stderr: "" Apr 24 14:17:22.662: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:17:22.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6743" for this suite. 
Apr 24 14:18:44.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:18:44.755: INFO: namespace kubectl-6743 deletion completed in 1m22.090464405s • [SLOW TEST:86.751 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:18:44.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 24 14:18:44.886: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"f49c9c80-c255-4232-8726-c24d9c4f17bd", Controller:(*bool)(0xc002a05102), BlockOwnerDeletion:(*bool)(0xc002a05103)}} Apr 24 14:18:44.969: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"c00acd8a-a517-4a95-ae97-0d4bc29f6c6a", Controller:(*bool)(0xc002a052aa), BlockOwnerDeletion:(*bool)(0xc002a052ab)}} Apr 24 
14:18:44.999: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"ec4f8311-85be-4770-8924-c2be0d14bcb8", Controller:(*bool)(0xc002ceea22), BlockOwnerDeletion:(*bool)(0xc002ceea23)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:18:50.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9542" for this suite. Apr 24 14:18:56.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:18:56.129: INFO: namespace gc-9542 deletion completed in 6.090283008s • [SLOW TEST:11.373 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:18:56.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-3211 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-3211 STEP: Creating statefulset with conflicting port in namespace statefulset-3211 STEP: Waiting until pod test-pod will start running in namespace statefulset-3211 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3211 Apr 24 14:19:00.678: INFO: Observed stateful pod in namespace: statefulset-3211, name: ss-0, uid: eddd282e-3c63-4420-a916-a5efd46ef677, status phase: Pending. Waiting for statefulset controller to delete. Apr 24 14:19:02.149: INFO: Observed stateful pod in namespace: statefulset-3211, name: ss-0, uid: eddd282e-3c63-4420-a916-a5efd46ef677, status phase: Failed. Waiting for statefulset controller to delete. Apr 24 14:19:02.155: INFO: Observed stateful pod in namespace: statefulset-3211, name: ss-0, uid: eddd282e-3c63-4420-a916-a5efd46ef677, status phase: Failed. Waiting for statefulset controller to delete. 
Apr 24 14:19:02.209: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3211 STEP: Removing pod with conflicting port in namespace statefulset-3211 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3211 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 24 14:19:06.264: INFO: Deleting all statefulset in ns statefulset-3211 Apr 24 14:19:06.268: INFO: Scaling statefulset ss to 0 Apr 24 14:19:16.286: INFO: Waiting for statefulset status.replicas updated to 0 Apr 24 14:19:16.289: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:19:16.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3211" for this suite. Apr 24 14:19:22.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:19:22.399: INFO: namespace statefulset-3211 deletion completed in 6.094350928s • [SLOW TEST:26.270 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:19:22.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Apr 24 14:19:22.464: INFO: Waiting up to 5m0s for pod "downward-api-a73b3497-9070-4bfa-8228-e70d6ad07e5f" in namespace "downward-api-6132" to be "success or failure" Apr 24 14:19:22.467: INFO: Pod "downward-api-a73b3497-9070-4bfa-8228-e70d6ad07e5f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.313258ms Apr 24 14:19:24.484: INFO: Pod "downward-api-a73b3497-9070-4bfa-8228-e70d6ad07e5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020127601s Apr 24 14:19:26.489: INFO: Pod "downward-api-a73b3497-9070-4bfa-8228-e70d6ad07e5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02445337s STEP: Saw pod success Apr 24 14:19:26.489: INFO: Pod "downward-api-a73b3497-9070-4bfa-8228-e70d6ad07e5f" satisfied condition "success or failure" Apr 24 14:19:26.492: INFO: Trying to get logs from node iruya-worker2 pod downward-api-a73b3497-9070-4bfa-8228-e70d6ad07e5f container dapi-container: STEP: delete the pod Apr 24 14:19:26.536: INFO: Waiting for pod downward-api-a73b3497-9070-4bfa-8228-e70d6ad07e5f to disappear Apr 24 14:19:26.548: INFO: Pod downward-api-a73b3497-9070-4bfa-8228-e70d6ad07e5f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:19:26.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6132" for this suite. 
Apr 24 14:19:32.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:19:32.644: INFO: namespace downward-api-6132 deletion completed in 6.091215567s • [SLOW TEST:10.245 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:19:32.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-fe76d393-575b-4ee8-9b33-5886fb34fb71 STEP: Creating a pod to test consume configMaps Apr 24 14:19:32.718: INFO: Waiting up to 5m0s for pod "pod-configmaps-8f1ea7d2-3345-4a44-8d69-1a88b5ef1403" in namespace "configmap-3873" to be "success or failure" Apr 24 14:19:32.722: INFO: Pod "pod-configmaps-8f1ea7d2-3345-4a44-8d69-1a88b5ef1403": Phase="Pending", Reason="", readiness=false. Elapsed: 3.943976ms Apr 24 14:19:34.727: INFO: Pod "pod-configmaps-8f1ea7d2-3345-4a44-8d69-1a88b5ef1403": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008362296s Apr 24 14:19:36.731: INFO: Pod "pod-configmaps-8f1ea7d2-3345-4a44-8d69-1a88b5ef1403": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012753136s STEP: Saw pod success Apr 24 14:19:36.731: INFO: Pod "pod-configmaps-8f1ea7d2-3345-4a44-8d69-1a88b5ef1403" satisfied condition "success or failure" Apr 24 14:19:36.735: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-8f1ea7d2-3345-4a44-8d69-1a88b5ef1403 container configmap-volume-test: STEP: delete the pod Apr 24 14:19:36.760: INFO: Waiting for pod pod-configmaps-8f1ea7d2-3345-4a44-8d69-1a88b5ef1403 to disappear Apr 24 14:19:36.765: INFO: Pod pod-configmaps-8f1ea7d2-3345-4a44-8d69-1a88b5ef1403 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:19:36.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3873" for this suite. Apr 24 14:19:42.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:19:42.882: INFO: namespace configmap-3873 deletion completed in 6.11367554s • [SLOW TEST:10.237 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:19:42.882: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Apr 24 14:19:42.928: INFO: namespace kubectl-1254 Apr 24 14:19:42.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1254' Apr 24 14:19:43.196: INFO: stderr: "" Apr 24 14:19:43.196: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Apr 24 14:19:44.201: INFO: Selector matched 1 pods for map[app:redis] Apr 24 14:19:44.201: INFO: Found 0 / 1 Apr 24 14:19:45.201: INFO: Selector matched 1 pods for map[app:redis] Apr 24 14:19:45.201: INFO: Found 0 / 1 Apr 24 14:19:46.201: INFO: Selector matched 1 pods for map[app:redis] Apr 24 14:19:46.201: INFO: Found 0 / 1 Apr 24 14:19:47.201: INFO: Selector matched 1 pods for map[app:redis] Apr 24 14:19:47.201: INFO: Found 1 / 1 Apr 24 14:19:47.201: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 24 14:19:47.205: INFO: Selector matched 1 pods for map[app:redis] Apr 24 14:19:47.205: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 24 14:19:47.205: INFO: wait on redis-master startup in kubectl-1254 Apr 24 14:19:47.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-4g7js redis-master --namespace=kubectl-1254' Apr 24 14:19:47.315: INFO: stderr: "" Apr 24 14:19:47.315: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 24 Apr 14:19:45.598 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 24 Apr 14:19:45.598 # Server started, Redis version 3.2.12\n1:M 24 Apr 14:19:45.598 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 24 Apr 14:19:45.598 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Apr 24 14:19:47.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1254' Apr 24 14:19:47.456: INFO: stderr: "" Apr 24 14:19:47.456: INFO: stdout: "service/rm2 exposed\n" Apr 24 14:19:47.468: INFO: Service rm2 in namespace kubectl-1254 found. STEP: exposing service Apr 24 14:19:49.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1254' Apr 24 14:19:49.630: INFO: stderr: "" Apr 24 14:19:49.630: INFO: stdout: "service/rm3 exposed\n" Apr 24 14:19:49.643: INFO: Service rm3 in namespace kubectl-1254 found. 
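The two `kubectl expose` invocations above generate Service objects whose selector is copied from the exposed resource; the log's `map[app:redis]` selector and the `--name`/`--port`/`--target-port` flags imply a Service roughly equivalent to this manifest (selector inferred from the log, other defaults assumed):

```yaml
# Approximate equivalent of:
#   kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-1254
spec:
  selector:
    app: redis        # inferred from "Selector matched 1 pods for map[app:redis]"
  ports:
  - protocol: TCP
    port: 1234
    targetPort: 6379
```

Exposing `rm2` again as `rm3` on port 2345 produces a second Service with the same selector, so all three Services route to the same redis-master pod.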
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:19:51.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1254" for this suite. Apr 24 14:20:13.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:20:13.745: INFO: namespace kubectl-1254 deletion completed in 22.092208765s • [SLOW TEST:30.863 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:20:13.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 24 14:20:13.796: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:20:17.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6121" for this suite. Apr 24 14:21:07.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:21:07.952: INFO: namespace pods-6121 deletion completed in 50.106601338s • [SLOW TEST:54.207 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:21:07.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 24 14:21:08.016: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-8826' Apr 24 14:21:08.129: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 24 14:21:08.130: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Apr 24 14:21:10.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-8826' Apr 24 14:21:10.294: INFO: stderr: "" Apr 24 14:21:10.294: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:21:10.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8826" for this suite. 
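The stderr above notes that `kubectl run --generator=deployment/apps.v1` is deprecated. A Deployment manifest approximating what that generator produces is sketched below (the `run:` label convention matches what the generator applied in this era of kubectl; replica count and label wiring are assumptions, not shown in the log):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment   # generator labels pods run=<name>
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```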
Apr 24 14:23:12.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:23:12.420: INFO: namespace kubectl-8826 deletion completed in 2m2.107835639s • [SLOW TEST:124.467 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:23:12.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Apr 24 14:23:12.479: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 24 14:23:12.487: INFO: Waiting for terminating namespaces to be deleted... 
Apr 24 14:23:12.489: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Apr 24 14:23:12.494: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 24 14:23:12.494: INFO: Container kube-proxy ready: true, restart count 0 Apr 24 14:23:12.494: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 24 14:23:12.494: INFO: Container kindnet-cni ready: true, restart count 0 Apr 24 14:23:12.494: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Apr 24 14:23:12.500: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Apr 24 14:23:12.500: INFO: Container coredns ready: true, restart count 0 Apr 24 14:23:12.500: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Apr 24 14:23:12.500: INFO: Container coredns ready: true, restart count 0 Apr 24 14:23:12.500: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Apr 24 14:23:12.500: INFO: Container kube-proxy ready: true, restart count 0 Apr 24 14:23:12.500: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Apr 24 14:23:12.500: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-worker STEP: verifying the node has the label node iruya-worker2 Apr 24 14:23:12.584: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2 Apr 24 14:23:12.584: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2 Apr 24 14:23:12.584: INFO: Pod 
kindnet-gwz5g requesting resource cpu=100m on Node iruya-worker Apr 24 14:23:12.584: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2 Apr 24 14:23:12.584: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker Apr 24 14:23:12.584: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2 STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-9aab1a63-2914-42bc-8037-56bc8fb1c92f.1608c77957241bd8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3733/filler-pod-9aab1a63-2914-42bc-8037-56bc8fb1c92f to iruya-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-9aab1a63-2914-42bc-8037-56bc8fb1c92f.1608c779a923e251], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-9aab1a63-2914-42bc-8037-56bc8fb1c92f.1608c779eb453232], Reason = [Created], Message = [Created container filler-pod-9aab1a63-2914-42bc-8037-56bc8fb1c92f] STEP: Considering event: Type = [Normal], Name = [filler-pod-9aab1a63-2914-42bc-8037-56bc8fb1c92f.1608c77a006a7604], Reason = [Started], Message = [Started container filler-pod-9aab1a63-2914-42bc-8037-56bc8fb1c92f] STEP: Considering event: Type = [Normal], Name = [filler-pod-f4a9ec65-c3b0-43a8-89fb-391cb2ba1490.1608c77957241be1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3733/filler-pod-f4a9ec65-c3b0-43a8-89fb-391cb2ba1490 to iruya-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-f4a9ec65-c3b0-43a8-89fb-391cb2ba1490.1608c779c3faf60c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-f4a9ec65-c3b0-43a8-89fb-391cb2ba1490.1608c779fb063c89], Reason = [Created], Message = [Created 
container filler-pod-f4a9ec65-c3b0-43a8-89fb-391cb2ba1490] STEP: Considering event: Type = [Normal], Name = [filler-pod-f4a9ec65-c3b0-43a8-89fb-391cb2ba1490.1608c77a0a4d90d8], Reason = [Started], Message = [Started container filler-pod-f4a9ec65-c3b0-43a8-89fb-391cb2ba1490] STEP: Considering event: Type = [Warning], Name = [additional-pod.1608c77a46963370], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node iruya-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:23:17.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3733" for this suite. 
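The scheduling failure above ("2 Insufficient cpu") is produced by filler pods that consume most of each node's allocatable CPU, followed by one more pod whose request cannot fit anywhere. A sketch of that final pod (the name `additional-pod` and pause image appear in the log; the request value is illustrative, since the test computes it from remaining allocatable):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: additional-pod
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "1"   # illustrative; chosen by the test to exceed remaining capacity
```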
Apr 24 14:23:23.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:23:23.939: INFO: namespace sched-pred-3733 deletion completed in 6.241229625s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:11.518 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:23:23.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 24 14:23:23.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc 
--image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-4433' Apr 24 14:23:24.120: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 24 14:23:24.120: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: rolling-update to same image controller Apr 24 14:23:24.147: INFO: scanned /root for discovery docs: Apr 24 14:23:24.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-4433' Apr 24 14:23:39.999: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 24 14:23:39.999: INFO: stdout: "Created e2e-test-nginx-rc-5e07df663a59b2c8569e5b4caee6762b\nScaling up e2e-test-nginx-rc-5e07df663a59b2c8569e5b4caee6762b from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-5e07df663a59b2c8569e5b4caee6762b up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-5e07df663a59b2c8569e5b4caee6762b to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" Apr 24 14:23:40.000: INFO: stdout: "Created e2e-test-nginx-rc-5e07df663a59b2c8569e5b4caee6762b\nScaling up e2e-test-nginx-rc-5e07df663a59b2c8569e5b4caee6762b from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-5e07df663a59b2c8569e5b4caee6762b up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-5e07df663a59b2c8569e5b4caee6762b to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Apr 24 14:23:40.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-4433' Apr 24 14:23:40.121: INFO: stderr: "" Apr 24 14:23:40.121: INFO: stdout: "e2e-test-nginx-rc-5e07df663a59b2c8569e5b4caee6762b-8sg9p e2e-test-nginx-rc-cmvw9 " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Apr 24 14:23:45.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-4433' Apr 24 14:23:45.230: INFO: stderr: "" Apr 24 14:23:45.230: INFO: stdout: "e2e-test-nginx-rc-5e07df663a59b2c8569e5b4caee6762b-8sg9p " Apr 24 14:23:45.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-5e07df663a59b2c8569e5b4caee6762b-8sg9p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4433' Apr 24 14:23:45.326: INFO: stderr: "" Apr 24 14:23:45.326: INFO: stdout: "true" Apr 24 14:23:45.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-5e07df663a59b2c8569e5b4caee6762b-8sg9p -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4433' Apr 24 14:23:45.427: INFO: stderr: "" Apr 24 14:23:45.427: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Apr 24 14:23:45.427: INFO: e2e-test-nginx-rc-5e07df663a59b2c8569e5b4caee6762b-8sg9p is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Apr 24 14:23:45.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-4433' Apr 24 14:23:45.517: INFO: stderr: "" Apr 24 14:23:45.517: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:23:45.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4433" for this suite. 
Apr 24 14:23:51.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:23:51.620: INFO: namespace kubectl-4433 deletion completed in 6.09984987s • [SLOW TEST:27.681 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:23:51.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 24 14:23:51.706: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:23:52.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8596" for this suite. 
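The CRD create/delete test above does not log the definition it registers; as a purely hypothetical sketch of the shape of a v1beta1 CustomResourceDefinition on a 1.15 cluster (every name here is invented, and `metadata.name` must equal `<plural>.<group>`):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: testcrds.example.com      # must be <plural>.<group>
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: testcrds
    singular: testcrd
    kind: TestCrd
```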
Apr 24 14:23:58.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:23:58.899: INFO: namespace custom-resource-definition-8596 deletion completed in 6.097746423s • [SLOW TEST:7.279 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 24 14:23:58.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-3820 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: 
Creating a new StatefulSet Apr 24 14:23:58.999: INFO: Found 0 stateful pods, waiting for 3 Apr 24 14:24:09.004: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 24 14:24:09.005: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 24 14:24:09.005: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Apr 24 14:24:09.033: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Apr 24 14:24:19.110: INFO: Updating stateful set ss2 Apr 24 14:24:19.139: INFO: Waiting for Pod statefulset-3820/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Apr 24 14:24:29.147: INFO: Waiting for Pod statefulset-3820/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Apr 24 14:24:39.288: INFO: Found 2 stateful pods, waiting for 3 Apr 24 14:24:49.293: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 24 14:24:49.293: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 24 14:24:49.293: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Apr 24 14:24:49.316: INFO: Updating stateful set ss2 Apr 24 14:24:49.388: INFO: Waiting for Pod statefulset-3820/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Apr 24 14:24:59.419: INFO: Updating stateful set ss2 Apr 24 14:24:59.451: INFO: Waiting for StatefulSet statefulset-3820/ss2 to complete update Apr 24 14:24:59.451: INFO: Waiting for Pod statefulset-3820/ss2-0 to have revision ss2-6c5cd755cd 
update revision ss2-7c9b54fd4c Apr 24 14:25:09.460: INFO: Waiting for StatefulSet statefulset-3820/ss2 to complete update Apr 24 14:25:09.460: INFO: Waiting for Pod statefulset-3820/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 24 14:25:19.460: INFO: Deleting all statefulset in ns statefulset-3820 Apr 24 14:25:19.463: INFO: Scaling statefulset ss2 to 0 Apr 24 14:25:49.478: INFO: Waiting for statefulset status.replicas updated to 0 Apr 24 14:25:49.481: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 24 14:25:49.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3820" for this suite. Apr 24 14:25:55.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 24 14:25:55.592: INFO: namespace statefulset-3820 deletion completed in 6.094521603s • [SLOW TEST:116.693 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts 
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 14:25:55.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Apr 24 14:26:00.179: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1838 pod-service-account-96803197-bc33-4638-9f02-f9c5d0d9b977 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Apr 24 14:26:00.396: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1838 pod-service-account-96803197-bc33-4638-9f02-f9c5d0d9b977 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Apr 24 14:26:00.589: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1838 pod-service-account-96803197-bc33-4638-9f02-f9c5d0d9b977 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 14:26:00.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1838" for this suite.
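For context on what the three `kubectl exec` calls above are checking: every pod using a service account gets the token, ca.crt, and namespace files projected at /var/run/secrets/kubernetes.io/serviceaccount. A minimal sketch of a pod that could be probed the same way (the pod name and image here are illustrative, not the UUID-suffixed names generated by this run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account    # illustrative; the test generates a UUID-suffixed name
spec:
  serviceAccountName: default  # the auto-created default ServiceAccount
  containers:
  - name: test
    image: busybox             # assumed image; the e2e suite uses its own test images
    command: ["sleep", "3600"] # keep the container alive so it can be exec'd into
```

With such a pod Running, `kubectl exec pod-service-account -c test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token` would mirror the first exec in the log.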
Apr 24 14:26:06.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 14:26:06.917: INFO: namespace svcaccounts-1838 deletion completed in 6.119882258s

• [SLOW TEST:11.324 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 14:26:06.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Apr 24 14:26:06.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3185'
Apr 24 14:26:07.189: INFO: stderr: ""
Apr 24 14:26:07.189: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 24 14:26:07.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3185'
Apr 24 14:26:07.293: INFO: stderr: ""
Apr 24 14:26:07.293: INFO: stdout: "update-demo-nautilus-5mjbk update-demo-nautilus-kjvn5 "
Apr 24 14:26:07.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5mjbk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3185'
Apr 24 14:26:07.385: INFO: stderr: ""
Apr 24 14:26:07.385: INFO: stdout: ""
Apr 24 14:26:07.385: INFO: update-demo-nautilus-5mjbk is created but not running
Apr 24 14:26:12.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3185'
Apr 24 14:26:12.487: INFO: stderr: ""
Apr 24 14:26:12.487: INFO: stdout: "update-demo-nautilus-5mjbk update-demo-nautilus-kjvn5 "
Apr 24 14:26:12.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5mjbk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3185'
Apr 24 14:26:12.590: INFO: stderr: ""
Apr 24 14:26:12.590: INFO: stdout: "true"
Apr 24 14:26:12.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5mjbk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3185'
Apr 24 14:26:12.683: INFO: stderr: ""
Apr 24 14:26:12.683: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 24 14:26:12.683: INFO: validating pod update-demo-nautilus-5mjbk
Apr 24 14:26:12.686: INFO: got data: {
  "image": "nautilus.jpg"
}

Apr 24 14:26:12.686: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 24 14:26:12.686: INFO: update-demo-nautilus-5mjbk is verified up and running
Apr 24 14:26:12.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kjvn5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3185'
Apr 24 14:26:12.783: INFO: stderr: ""
Apr 24 14:26:12.783: INFO: stdout: "true"
Apr 24 14:26:12.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kjvn5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3185'
Apr 24 14:26:12.874: INFO: stderr: ""
Apr 24 14:26:12.874: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 24 14:26:12.874: INFO: validating pod update-demo-nautilus-kjvn5
Apr 24 14:26:12.878: INFO: got data: {
  "image": "nautilus.jpg"
}

Apr 24 14:26:12.878: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 24 14:26:12.878: INFO: update-demo-nautilus-kjvn5 is verified up and running
STEP: using delete to clean up resources
Apr 24 14:26:12.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3185'
Apr 24 14:26:12.995: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 24 14:26:12.995: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Apr 24 14:26:12.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3185'
Apr 24 14:26:13.115: INFO: stderr: "No resources found.\n"
Apr 24 14:26:13.115: INFO: stdout: ""
Apr 24 14:26:13.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3185 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 24 14:26:13.231: INFO: stderr: ""
Apr 24 14:26:13.231: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 14:26:13.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3185" for this suite.
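The `create -f -` in the Update Demo test above feeds a manifest on stdin. Based on the names, selector, and image visible in the log, the replication controller would look roughly like this (the replica count is inferred from the two nautilus pods in the output; everything else not shown in the log is an assumption):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2              # two pods appear in the log: -5mjbk and -kjvn5
  selector:
    name: update-demo      # matches the -l name=update-demo queries above
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo  # the container name the go-templates filter on
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
```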
Apr 24 14:26:35.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 14:26:35.493: INFO: namespace kubectl-3185 deletion completed in 22.258258586s

• [SLOW TEST:28.576 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 14:26:35.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-5df09aec-34c6-4037-9be7-d72e9a96007a
STEP: Creating a pod to test consume configMaps
Apr 24 14:26:35.687: INFO: Waiting up to 5m0s for pod "pod-configmaps-65e505f1-e1e6-4814-8a20-f8b50f2f045c" in namespace "configmap-1039" to be "success or failure"
Apr 24 14:26:35.702: INFO: Pod "pod-configmaps-65e505f1-e1e6-4814-8a20-f8b50f2f045c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.877476ms
Apr 24 14:26:37.706: INFO: Pod "pod-configmaps-65e505f1-e1e6-4814-8a20-f8b50f2f045c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019513901s
Apr 24 14:26:39.710: INFO: Pod "pod-configmaps-65e505f1-e1e6-4814-8a20-f8b50f2f045c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023000794s
STEP: Saw pod success
Apr 24 14:26:39.710: INFO: Pod "pod-configmaps-65e505f1-e1e6-4814-8a20-f8b50f2f045c" satisfied condition "success or failure"
Apr 24 14:26:39.712: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-65e505f1-e1e6-4814-8a20-f8b50f2f045c container configmap-volume-test: 
STEP: delete the pod
Apr 24 14:26:39.752: INFO: Waiting for pod pod-configmaps-65e505f1-e1e6-4814-8a20-f8b50f2f045c to disappear
Apr 24 14:26:39.762: INFO: Pod pod-configmaps-65e505f1-e1e6-4814-8a20-f8b50f2f045c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 14:26:39.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1039" for this suite.
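The "with mappings as non-root" variant above mounts specific ConfigMap keys at remapped paths while the pod runs with a non-root UID. A hedged sketch of the shape of such a pod (the ConfigMap name, key, UID, image, and mount path here are illustrative, not the generated values from this run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps          # illustrative; the test appends a UUID
spec:
  securityContext:
    runAsUser: 1000             # non-root, the point of this [LinuxOnly] variant
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map   # illustrative ConfigMap name
      items:
      - key: data-1             # illustrative key
        path: path/to/data-1    # the "mapping": key exposed at a chosen relative path
  containers:
  - name: configmap-volume-test
    image: busybox              # assumed; the e2e suite uses its own mount-test image
    command: ["cat", "/etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  restartPolicy: Never          # lets the pod reach Succeeded, as the log expects
```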
Apr 24 14:26:45.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 14:26:45.856: INFO: namespace configmap-1039 deletion completed in 6.09095098s

• [SLOW TEST:10.363 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 14:26:45.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Apr 24 14:26:50.467: INFO: Successfully updated pod "labelsupdateea96afe7-25b5-4b57-800b-619f129d0714"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 14:26:52.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5594" for this suite.
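The labels-update test works because a downwardAPI volume file is re-rendered by the kubelet when pod metadata changes, so patching the pod's labels changes the file in place without restarting the container. A sketch of the pod shape involved (names, image, and label values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate            # illustrative; the test appends a UUID
  labels:
    key1: value1                # the label the test later modifies
spec:
  containers:
  - name: client-container
    image: busybox              # assumed image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 2; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels   # re-projected when labels are modified
```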
Apr 24 14:27:14.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 14:27:14.576: INFO: namespace downward-api-5594 deletion completed in 22.091886299s

• [SLOW TEST:28.718 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 14:27:14.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-2aa2af7c-b1ba-4659-ac5e-dc374a2dcbfd in namespace container-probe-9439
Apr 24 14:27:18.648: INFO: Started pod busybox-2aa2af7c-b1ba-4659-ac5e-dc374a2dcbfd in namespace container-probe-9439
STEP: checking the pod's current state and verifying that restartCount is present
Apr 24 14:27:18.651: INFO: Initial restart count of pod busybox-2aa2af7c-b1ba-4659-ac5e-dc374a2dcbfd is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 14:31:19.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9439" for this suite.
Apr 24 14:31:25.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 14:31:25.633: INFO: namespace container-probe-9439 deletion completed in 6.107976024s

• [SLOW TEST:251.056 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 14:31:25.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 24 14:31:28.772: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 14:31:28.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1518" for this suite.
Apr 24 14:31:34.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 14:31:35.040: INFO: namespace container-runtime-1518 deletion completed in 6.093598912s

• [SLOW TEST:9.407 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 14:31:35.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Apr 24 14:31:35.707: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6970" to be "success or failure"
Apr 24 14:31:35.758: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 51.363439ms
Apr 24 14:31:37.778: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071173001s
Apr 24 14:31:39.783: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075763869s
STEP: Saw pod success
Apr 24 14:31:39.783: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Apr 24 14:31:39.786: INFO: Trying to get logs from node iruya-worker pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Apr 24 14:31:39.958: INFO: Waiting for pod pod-host-path-test to disappear
Apr 24 14:31:39.963: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 14:31:39.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-6970" for this suite.
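The hostPath test above mounts a directory from the node into pod-host-path-test and checks the mode reported inside the container. A sketch of a comparable pod (the host path, image, and stat command are illustrative; the real test uses the e2e mount-test utility image with two containers):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-test
spec:
  containers:
  - name: test-container-1
    image: busybox              # assumed; the e2e suite uses its own mount-test image
    command: ["sh", "-c", "stat -c '%a' /test-volume"]  # print the mount's mode bits
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/host-path-test # illustrative node path
      type: DirectoryOrCreate
  restartPolicy: Never          # lets the pod reach Succeeded, as the log expects
```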
Apr 24 14:31:45.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 14:31:46.060: INFO: namespace hostpath-6970 deletion completed in 6.094066998s

• [SLOW TEST:11.019 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 24 14:31:46.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 24 14:32:08.180: INFO: Container started at 2020-04-24 14:31:48 +0000 UTC, pod became ready at 2020-04-24 14:32:06 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 24 14:32:08.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5728" for this suite.
Apr 24 14:32:30.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 24 14:32:30.282: INFO: namespace container-probe-5728 deletion completed in 22.097806689s

• [SLOW TEST:44.221 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
Apr 24 14:32:30.283: INFO: Running AfterSuite actions on all nodes
Apr 24 14:32:30.283: INFO: Running AfterSuite actions on node 1
Apr 24 14:32:30.283: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 5806.267 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS