I0210 12:56:00.684050       8 e2e.go:243] Starting e2e run "e052abc8-8136-43af-a99d-65861881ef71" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1581339359 - Will randomize all specs
Will run 215 of 4412 specs

Feb 10 12:56:00.982: INFO: >>> kubeConfig: /root/.kube/config
Feb 10 12:56:00.990: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 10 12:56:01.031: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 10 12:56:01.066: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 10 12:56:01.066: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 10 12:56:01.066: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 10 12:56:01.081: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 10 12:56:01.081: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 10 12:56:01.081: INFO: e2e test version: v1.15.7
Feb 10 12:56:01.082: INFO: kube-apiserver version: v1.15.1
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 12:56:01.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Feb 10 12:56:01.369: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-8baa08eb-57a7-4f80-9430-91ad60f1465c
STEP: Creating a pod to test consume secrets
Feb 10 12:56:01.591: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d59e2fb2-7351-4b96-b4c7-fe8823d8dcfd" in namespace "projected-3355" to be "success or failure"
Feb 10 12:56:01.612: INFO: Pod "pod-projected-secrets-d59e2fb2-7351-4b96-b4c7-fe8823d8dcfd": Phase="Pending", Reason="", readiness=false. Elapsed: 21.213047ms
Feb 10 12:56:03.826: INFO: Pod "pod-projected-secrets-d59e2fb2-7351-4b96-b4c7-fe8823d8dcfd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2351835s
Feb 10 12:56:05.835: INFO: Pod "pod-projected-secrets-d59e2fb2-7351-4b96-b4c7-fe8823d8dcfd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.243898997s
Feb 10 12:56:07.850: INFO: Pod "pod-projected-secrets-d59e2fb2-7351-4b96-b4c7-fe8823d8dcfd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.258696617s
Feb 10 12:56:09.863: INFO: Pod "pod-projected-secrets-d59e2fb2-7351-4b96-b4c7-fe8823d8dcfd": Phase="Running", Reason="", readiness=true. Elapsed: 8.272474955s
Feb 10 12:56:11.870: INFO: Pod "pod-projected-secrets-d59e2fb2-7351-4b96-b4c7-fe8823d8dcfd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.279646573s
STEP: Saw pod success
Feb 10 12:56:11.871: INFO: Pod "pod-projected-secrets-d59e2fb2-7351-4b96-b4c7-fe8823d8dcfd" satisfied condition "success or failure"
Feb 10 12:56:11.873: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-d59e2fb2-7351-4b96-b4c7-fe8823d8dcfd container projected-secret-volume-test: 
STEP: delete the pod
Feb 10 12:56:12.113: INFO: Waiting for pod pod-projected-secrets-d59e2fb2-7351-4b96-b4c7-fe8823d8dcfd to disappear
Feb 10 12:56:12.147: INFO: Pod pod-projected-secrets-d59e2fb2-7351-4b96-b4c7-fe8823d8dcfd no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 12:56:12.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3355" for this suite.
Feb 10 12:56:18.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 12:56:18.476: INFO: namespace projected-3355 deletion completed in 6.322477063s

• [SLOW TEST:17.393 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 12:56:18.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-49be87a0-0159-476e-9466-af0601bcc925
STEP: Creating a pod to test consume configMaps
Feb 10 12:56:18.776: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-90922c0d-5783-4ae1-84ef-65ded9edec20" in namespace "projected-9023" to be "success or failure"
Feb 10 12:56:18.797: INFO: Pod "pod-projected-configmaps-90922c0d-5783-4ae1-84ef-65ded9edec20": Phase="Pending", Reason="", readiness=false. Elapsed: 21.384213ms
Feb 10 12:56:20.806: INFO: Pod "pod-projected-configmaps-90922c0d-5783-4ae1-84ef-65ded9edec20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03009218s
Feb 10 12:56:22.951: INFO: Pod "pod-projected-configmaps-90922c0d-5783-4ae1-84ef-65ded9edec20": Phase="Pending", Reason="", readiness=false. Elapsed: 4.175336657s
Feb 10 12:56:24.961: INFO: Pod "pod-projected-configmaps-90922c0d-5783-4ae1-84ef-65ded9edec20": Phase="Pending", Reason="", readiness=false. Elapsed: 6.185259085s
Feb 10 12:56:26.969: INFO: Pod "pod-projected-configmaps-90922c0d-5783-4ae1-84ef-65ded9edec20": Phase="Pending", Reason="", readiness=false. Elapsed: 8.193034913s
Feb 10 12:56:28.979: INFO: Pod "pod-projected-configmaps-90922c0d-5783-4ae1-84ef-65ded9edec20": Phase="Pending", Reason="", readiness=false. Elapsed: 10.203378199s
Feb 10 12:56:30.984: INFO: Pod "pod-projected-configmaps-90922c0d-5783-4ae1-84ef-65ded9edec20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.208417371s
STEP: Saw pod success
Feb 10 12:56:30.984: INFO: Pod "pod-projected-configmaps-90922c0d-5783-4ae1-84ef-65ded9edec20" satisfied condition "success or failure"
Feb 10 12:56:30.987: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-90922c0d-5783-4ae1-84ef-65ded9edec20 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 10 12:56:31.161: INFO: Waiting for pod pod-projected-configmaps-90922c0d-5783-4ae1-84ef-65ded9edec20 to disappear
Feb 10 12:56:31.180: INFO: Pod pod-projected-configmaps-90922c0d-5783-4ae1-84ef-65ded9edec20 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 12:56:31.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9023" for this suite.
Feb 10 12:56:37.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 12:56:37.292: INFO: namespace projected-9023 deletion completed in 6.107335817s

• [SLOW TEST:18.816 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 12:56:37.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 12:56:47.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8921" for this suite.
Feb 10 12:57:39.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 12:57:39.810: INFO: namespace kubelet-test-8921 deletion completed in 52.220223321s

• [SLOW TEST:62.518 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 12:57:39.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb 10 12:57:49.965: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-70d57746-64b0-4441-a0fb-c7cd30854cb1,GenerateName:,Namespace:events-4869,SelfLink:/api/v1/namespaces/events-4869/pods/send-events-70d57746-64b0-4441-a0fb-c7cd30854cb1,UID:9f40e825-c32e-4187-9698-2dff813f3b6f,ResourceVersion:23817196,Generation:0,CreationTimestamp:2020-02-10 12:57:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 902371377,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zxtkf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zxtkf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-zxtkf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d80e30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d80e50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 12:57:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 12:57:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 12:57:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 12:57:39 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-10 12:57:40 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-10 12:57:48 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://e15a00727ca7bd6a8d5884cc533e18515992f305ef231fbe3a895b7eb0917e4a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
STEP: checking for scheduler event about the pod
Feb 10 12:57:51.977: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb 10 12:57:53.990: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 12:57:54.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-4869" for this suite.
Feb 10 12:58:40.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 12:58:40.214: INFO: namespace events-4869 deletion completed in 46.173708923s

• [SLOW TEST:60.404 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 12:58:40.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Feb 10 12:58:40.909: INFO: created pod pod-service-account-defaultsa
Feb 10 12:58:40.909: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb 10 12:58:40.990: INFO: created pod pod-service-account-mountsa
Feb 10 12:58:40.990: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb 10 12:58:41.028: INFO: created pod pod-service-account-nomountsa
Feb 10 12:58:41.028: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb 10 12:58:41.045: INFO: created pod pod-service-account-defaultsa-mountspec
Feb 10 12:58:41.045: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb 10 12:58:41.074: INFO: created pod pod-service-account-mountsa-mountspec
Feb 10 12:58:41.075: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb 10 12:58:41.180: INFO: created pod pod-service-account-nomountsa-mountspec
Feb 10 12:58:41.180: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb 10 12:58:41.196: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb 10 12:58:41.197: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb 10 12:58:42.193: INFO: created pod pod-service-account-mountsa-nomountspec
Feb 10 12:58:42.193: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb 10 12:58:42.234: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb 10 12:58:42.234: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 12:58:42.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6425" for this suite.
Feb 10 12:59:15.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 12:59:15.842: INFO: namespace svcaccounts-6425 deletion completed in 33.024197058s

• [SLOW TEST:35.628 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 12:59:15.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-4623
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-4623
STEP: Deleting pre-stop pod
Feb 10 12:59:37.145: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 12:59:37.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-4623" for this suite.
Feb 10 13:00:17.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:00:17.300: INFO: namespace prestop-4623 deletion completed in 40.13558844s

• [SLOW TEST:61.456 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:00:17.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 10 13:00:17.395: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:00:18.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6008" for this suite.
Feb 10 13:00:24.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:00:24.756: INFO: namespace custom-resource-definition-6008 deletion completed in 6.219261922s

• [SLOW TEST:7.456 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:00:24.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 10 13:00:24.800: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 10 13:00:24.834: INFO: Waiting for terminating namespaces to be deleted...
Feb 10 13:00:24.872: INFO: Logging pods the kubelet thinks is on node iruya-node before test
Feb 10 13:00:24.886: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb 10 13:00:24.886: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 10 13:00:24.886: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb 10 13:00:24.886: INFO: 	Container weave ready: true, restart count 0
Feb 10 13:00:24.886: INFO: 	Container weave-npc ready: true, restart count 0
Feb 10 13:00:24.886: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Feb 10 13:00:24.904: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb 10 13:00:24.904: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb 10 13:00:24.904: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb 10 13:00:24.904: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb 10 13:00:24.904: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 10 13:00:24.904: INFO: 	Container coredns ready: true, restart count 0
Feb 10 13:00:24.904: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb 10 13:00:24.904: INFO: 	Container etcd ready: true, restart count 0
Feb 10 13:00:24.904: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb 10 13:00:24.904: INFO: 	Container weave ready: true, restart count 0
Feb 10 13:00:24.904: INFO: 	Container weave-npc ready: true, restart count 0
Feb 10 13:00:24.904: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 10 13:00:24.904: INFO: 	Container coredns ready: true, restart count 0
Feb 10 13:00:24.904: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb 10 13:00:24.904: INFO: 	Container kube-controller-manager ready: true, restart count 20
Feb 10 13:00:24.904: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb 10 13:00:24.904: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.15f20c02b4b2cbaf], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:00:25.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9137" for this suite.
Feb 10 13:00:31.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:00:32.091: INFO: namespace sched-pred-9137 deletion completed in 6.15671039s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.335 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:00:32.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 10 13:00:32.196: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8911bd34-bf1d-4cfd-9395-9d0bd45c8995" in namespace "projected-1525" to be "success or failure"
Feb 10 13:00:32.267: INFO: Pod "downwardapi-volume-8911bd34-bf1d-4cfd-9395-9d0bd45c8995": Phase="Pending", Reason="", readiness=false. Elapsed: 70.356038ms
Feb 10 13:00:34.285: INFO: Pod "downwardapi-volume-8911bd34-bf1d-4cfd-9395-9d0bd45c8995": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088920311s
Feb 10 13:00:36.308: INFO: Pod "downwardapi-volume-8911bd34-bf1d-4cfd-9395-9d0bd45c8995": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112102774s
Feb 10 13:00:38.356: INFO: Pod "downwardapi-volume-8911bd34-bf1d-4cfd-9395-9d0bd45c8995": Phase="Pending", Reason="", readiness=false. Elapsed: 6.159895674s
Feb 10 13:00:40.371: INFO: Pod "downwardapi-volume-8911bd34-bf1d-4cfd-9395-9d0bd45c8995": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.174395048s
STEP: Saw pod success
Feb 10 13:00:40.371: INFO: Pod "downwardapi-volume-8911bd34-bf1d-4cfd-9395-9d0bd45c8995" satisfied condition "success or failure"
Feb 10 13:00:40.376: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8911bd34-bf1d-4cfd-9395-9d0bd45c8995 container client-container: 
STEP: delete the pod
Feb 10 13:00:40.674: INFO: Waiting for pod downwardapi-volume-8911bd34-bf1d-4cfd-9395-9d0bd45c8995 to disappear
Feb 10 13:00:40.792: INFO: Pod downwardapi-volume-8911bd34-bf1d-4cfd-9395-9d0bd45c8995 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:00:40.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1525" for this suite.
Feb 10 13:00:46.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 10 13:00:47.066: INFO: namespace projected-1525 deletion completed in 6.265736201s • [SLOW TEST:14.975 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 10 13:00:47.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Feb 10 13:00:47.258: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5255,SelfLink:/api/v1/namespaces/watch-5255/configmaps/e2e-watch-test-configmap-a,UID:6c926d9f-0a73-4dc2-8c22-ed14715baf9b,ResourceVersion:23817641,Generation:0,CreationTimestamp:2020-02-10 13:00:47 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 10 13:00:47.259: INFO: Got : ADDED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5255,SelfLink:/api/v1/namespaces/watch-5255/configmaps/e2e-watch-test-configmap-a,UID:6c926d9f-0a73-4dc2-8c22-ed14715baf9b,ResourceVersion:23817641,Generation:0,CreationTimestamp:2020-02-10 13:00:47 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb 10 13:00:57.280: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5255,SelfLink:/api/v1/namespaces/watch-5255/configmaps/e2e-watch-test-configmap-a,UID:6c926d9f-0a73-4dc2-8c22-ed14715baf9b,ResourceVersion:23817655,Generation:0,CreationTimestamp:2020-02-10 13:00:47 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 10 13:00:57.280: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5255,SelfLink:/api/v1/namespaces/watch-5255/configmaps/e2e-watch-test-configmap-a,UID:6c926d9f-0a73-4dc2-8c22-ed14715baf9b,ResourceVersion:23817655,Generation:0,CreationTimestamp:2020-02-10 13:00:47 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb 10 13:01:07.294: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5255,SelfLink:/api/v1/namespaces/watch-5255/configmaps/e2e-watch-test-configmap-a,UID:6c926d9f-0a73-4dc2-8c22-ed14715baf9b,ResourceVersion:23817669,Generation:0,CreationTimestamp:2020-02-10 13:00:47 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 10 13:01:07.294: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5255,SelfLink:/api/v1/namespaces/watch-5255/configmaps/e2e-watch-test-configmap-a,UID:6c926d9f-0a73-4dc2-8c22-ed14715baf9b,ResourceVersion:23817669,Generation:0,CreationTimestamp:2020-02-10 13:00:47 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb 10 13:01:17.305: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5255,SelfLink:/api/v1/namespaces/watch-5255/configmaps/e2e-watch-test-configmap-a,UID:6c926d9f-0a73-4dc2-8c22-ed14715baf9b,ResourceVersion:23817683,Generation:0,CreationTimestamp:2020-02-10 13:00:47 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 10 13:01:17.306: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5255,SelfLink:/api/v1/namespaces/watch-5255/configmaps/e2e-watch-test-configmap-a,UID:6c926d9f-0a73-4dc2-8c22-ed14715baf9b,ResourceVersion:23817683,Generation:0,CreationTimestamp:2020-02-10 13:00:47 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb 10 13:01:27.325: INFO: Got : ADDED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5255,SelfLink:/api/v1/namespaces/watch-5255/configmaps/e2e-watch-test-configmap-b,UID:c6cc54a4-993e-461c-859d-a6aef582655c,ResourceVersion:23817698,Generation:0,CreationTimestamp:2020-02-10 13:01:27 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 10 13:01:27.326: INFO: Got : ADDED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5255,SelfLink:/api/v1/namespaces/watch-5255/configmaps/e2e-watch-test-configmap-b,UID:c6cc54a4-993e-461c-859d-a6aef582655c,ResourceVersion:23817698,Generation:0,CreationTimestamp:2020-02-10 13:01:27 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb 10 13:01:37.337: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5255,SelfLink:/api/v1/namespaces/watch-5255/configmaps/e2e-watch-test-configmap-b,UID:c6cc54a4-993e-461c-859d-a6aef582655c,ResourceVersion:23817712,Generation:0,CreationTimestamp:2020-02-10 13:01:27 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 10 13:01:37.337: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5255,SelfLink:/api/v1/namespaces/watch-5255/configmaps/e2e-watch-test-configmap-b,UID:c6cc54a4-993e-461c-859d-a6aef582655c,ResourceVersion:23817712,Generation:0,CreationTimestamp:2020-02-10 13:01:27 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:01:47.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5255" for this suite.
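Editor's note: the Watchers spec above runs three watches (label A, label B, A-or-B) and asserts that each watcher sees exactly the events whose `watch-this-configmap` label matches its selector. The filtering logic can be sketched offline; the event dicts and helper below are illustrative stand-ins, not the e2e framework's types:

```python
def matches(labels, selector_values):
    """True when the configmap's watch-this-configmap label is in the watcher's value set."""
    return labels.get("watch-this-configmap") in selector_values

events = [
    {"type": "ADDED",   "labels": {"watch-this-configmap": "multiple-watchers-A"}},
    {"type": "DELETED", "labels": {"watch-this-configmap": "multiple-watchers-A"}},
    {"type": "ADDED",   "labels": {"watch-this-configmap": "multiple-watchers-B"}},
]

watch_a = [e["type"] for e in events if matches(e["labels"], {"multiple-watchers-A"})]
watch_b = [e["type"] for e in events if matches(e["labels"], {"multiple-watchers-B"})]
watch_ab = [e["type"] for e in events
            if matches(e["labels"], {"multiple-watchers-A", "multiple-watchers-B"})]

# Each watcher observes only its matching notifications, in order.
assert watch_a == ["ADDED", "DELETED"]
assert watch_b == ["ADDED"]
assert watch_ab == ["ADDED", "DELETED", "ADDED"]
```

This mirrors why every ADDED/MODIFIED/DELETED dump appears twice in the log: once for the single-label watcher and once for the A-or-B watcher.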
Feb 10 13:01:53.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:01:53.581: INFO: namespace watch-5255 deletion completed in 6.229803089s

• [SLOW TEST:66.514 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:01:53.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 10 13:02:02.416: INFO: Successfully updated pod "annotationupdate2d20aff9-25fc-481f-bea1-b08f7a0a85e4"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:02:04.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8471" for this suite.
Feb 10 13:02:26.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:02:26.702: INFO: namespace projected-8471 deletion completed in 22.132758099s

• [SLOW TEST:33.120 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:02:26.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 10 13:05:28.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 10 13:05:28.076: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 10 13:05:30.076: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 10 13:05:30.081: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 10 13:05:32.076: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 10 13:05:32.084: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 10 13:05:34.076: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 10 13:05:34.083: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 10 13:05:36.076: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 10 13:05:36.084: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 10 13:05:38.076: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 10 13:05:38.088: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 10 13:05:40.076: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 10 13:05:40.082: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 10 13:05:42.076: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 10 13:05:42.086: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 10 13:05:44.076: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 10 13:05:44.084: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 10 13:05:46.076: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 10 13:05:46.085: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 10 13:05:48.076: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 10 13:05:48.088: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 10 13:05:50.076: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 10 13:05:50.083: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 10 13:05:52.076: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 10 13:05:52.107: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 10 13:05:54.076: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 10 13:05:54.088: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 10 13:05:56.076: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 10 13:05:56.084: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 10 13:05:58.076: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 10 13:05:58.086: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:05:58.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8784" for this suite.
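Editor's note: the long run of "Waiting for pod ... to disappear" / "still exists" pairs above is a fixed-interval poll (roughly every 2s here) until the pod is gone. A generic sketch of that loop; the function and names are illustrative, not the e2e framework's implementation:

```python
import itertools

def wait_for_disappear(still_exists, max_polls):
    """Poll until still_exists() returns False; return how many checks it took.
    Raises TimeoutError if the object never disappears within max_polls checks."""
    for n in itertools.count(1):
        if not still_exists():
            return n
        if n >= max_polls:
            raise TimeoutError("object still exists after %d polls" % n)
        # a real poller would time.sleep(2) between checks here

# Fake object that disappears on the 4th check.
checks = iter([True, True, True, False])
assert wait_for_disappear(lambda: next(checks), max_polls=10) == 4
```

The same pattern underlies the `Waiting up to 5m0s for pod ... to be "success or failure"` sequences: poll the pod phase on an interval, succeed on a terminal phase, fail on timeout.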
Feb 10 13:06:20.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:06:20.303: INFO: namespace container-lifecycle-hook-8784 deletion completed in 22.210890292s

• [SLOW TEST:233.601 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:06:20.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 10 13:06:20.387: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ae8d6a59-97ee-4465-8655-7536ed34e7b4" in namespace "projected-6975" to be "success or failure"
Feb 10 13:06:20.446: INFO: Pod "downwardapi-volume-ae8d6a59-97ee-4465-8655-7536ed34e7b4": Phase="Pending", Reason="", readiness=false. Elapsed: 58.432195ms
Feb 10 13:06:22.456: INFO: Pod "downwardapi-volume-ae8d6a59-97ee-4465-8655-7536ed34e7b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06892888s
Feb 10 13:06:24.466: INFO: Pod "downwardapi-volume-ae8d6a59-97ee-4465-8655-7536ed34e7b4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079362186s
Feb 10 13:06:26.483: INFO: Pod "downwardapi-volume-ae8d6a59-97ee-4465-8655-7536ed34e7b4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095897062s
Feb 10 13:06:28.500: INFO: Pod "downwardapi-volume-ae8d6a59-97ee-4465-8655-7536ed34e7b4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.112500875s
Feb 10 13:06:30.510: INFO: Pod "downwardapi-volume-ae8d6a59-97ee-4465-8655-7536ed34e7b4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.123172267s
Feb 10 13:06:32.526: INFO: Pod "downwardapi-volume-ae8d6a59-97ee-4465-8655-7536ed34e7b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.138559766s
STEP: Saw pod success
Feb 10 13:06:32.526: INFO: Pod "downwardapi-volume-ae8d6a59-97ee-4465-8655-7536ed34e7b4" satisfied condition "success or failure"
Feb 10 13:06:32.541: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ae8d6a59-97ee-4465-8655-7536ed34e7b4 container client-container: <nil>
STEP: delete the pod
Feb 10 13:06:32.680: INFO: Waiting for pod downwardapi-volume-ae8d6a59-97ee-4465-8655-7536ed34e7b4 to disappear
Feb 10 13:06:32.693: INFO: Pod downwardapi-volume-ae8d6a59-97ee-4465-8655-7536ed34e7b4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:06:32.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6975" for this suite.
Feb 10 13:06:38.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:06:38.872: INFO: namespace projected-6975 deletion completed in 6.167564424s

• [SLOW TEST:18.569 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:06:38.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 10 13:06:39.045: INFO: Waiting up to 5m0s for pod "downwardapi-volume-60674b6e-bf3b-4d58-a56b-4da565678266" in namespace "downward-api-5826" to be "success or failure"
Feb 10 13:06:39.108: INFO: Pod "downwardapi-volume-60674b6e-bf3b-4d58-a56b-4da565678266": Phase="Pending", Reason="", readiness=false. Elapsed: 62.817912ms
Feb 10 13:06:41.114: INFO: Pod "downwardapi-volume-60674b6e-bf3b-4d58-a56b-4da565678266": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069370492s
Feb 10 13:06:43.125: INFO: Pod "downwardapi-volume-60674b6e-bf3b-4d58-a56b-4da565678266": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080334645s
Feb 10 13:06:45.139: INFO: Pod "downwardapi-volume-60674b6e-bf3b-4d58-a56b-4da565678266": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094168227s
Feb 10 13:06:47.152: INFO: Pod "downwardapi-volume-60674b6e-bf3b-4d58-a56b-4da565678266": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.106458306s
STEP: Saw pod success
Feb 10 13:06:47.152: INFO: Pod "downwardapi-volume-60674b6e-bf3b-4d58-a56b-4da565678266" satisfied condition "success or failure"
Feb 10 13:06:47.158: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-60674b6e-bf3b-4d58-a56b-4da565678266 container client-container: <nil>
STEP: delete the pod
Feb 10 13:06:47.536: INFO: Waiting for pod downwardapi-volume-60674b6e-bf3b-4d58-a56b-4da565678266 to disappear
Feb 10 13:06:47.572: INFO: Pod downwardapi-volume-60674b6e-bf3b-4d58-a56b-4da565678266 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:06:47.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5826" for this suite.
Feb 10 13:06:53.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:06:53.730: INFO: namespace downward-api-5826 deletion completed in 6.146255477s

• [SLOW TEST:14.858 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:06:53.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-5754552b-f52d-45ca-bcff-949c1d533d9a
STEP: Creating a pod to test consume configMaps
Feb 10 13:06:54.024: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2c1f5393-53e2-4b56-9d40-f03c2c9766ff" in namespace "projected-7161" to be "success or failure"
Feb 10 13:06:54.054: INFO: Pod "pod-projected-configmaps-2c1f5393-53e2-4b56-9d40-f03c2c9766ff": Phase="Pending", Reason="", readiness=false. Elapsed: 30.559062ms
Feb 10 13:06:56.069: INFO: Pod "pod-projected-configmaps-2c1f5393-53e2-4b56-9d40-f03c2c9766ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04511123s
Feb 10 13:06:58.078: INFO: Pod "pod-projected-configmaps-2c1f5393-53e2-4b56-9d40-f03c2c9766ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054073229s
Feb 10 13:07:00.092: INFO: Pod "pod-projected-configmaps-2c1f5393-53e2-4b56-9d40-f03c2c9766ff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067862602s
Feb 10 13:07:02.100: INFO: Pod "pod-projected-configmaps-2c1f5393-53e2-4b56-9d40-f03c2c9766ff": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075977964s
Feb 10 13:07:04.110: INFO: Pod "pod-projected-configmaps-2c1f5393-53e2-4b56-9d40-f03c2c9766ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.086029508s
STEP: Saw pod success
Feb 10 13:07:04.110: INFO: Pod "pod-projected-configmaps-2c1f5393-53e2-4b56-9d40-f03c2c9766ff" satisfied condition "success or failure"
Feb 10 13:07:04.117: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-2c1f5393-53e2-4b56-9d40-f03c2c9766ff container projected-configmap-volume-test: <nil>
STEP: delete the pod
Feb 10 13:07:04.241: INFO: Waiting for pod pod-projected-configmaps-2c1f5393-53e2-4b56-9d40-f03c2c9766ff to disappear
Feb 10 13:07:04.247: INFO: Pod pod-projected-configmaps-2c1f5393-53e2-4b56-9d40-f03c2c9766ff no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:07:04.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7161" for this suite.
Feb 10 13:07:10.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:07:10.588: INFO: namespace projected-7161 deletion completed in 6.333530533s

• [SLOW TEST:16.857 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:07:10.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 10 13:07:28.962: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 10 13:07:28.994: INFO: Pod pod-with-prestop-http-hook still exists
Feb 10 13:07:30.995: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 10 13:07:31.005: INFO: Pod pod-with-prestop-http-hook still exists
Feb 10 13:07:32.995: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 10 13:07:33.005: INFO: Pod pod-with-prestop-http-hook still exists
Feb 10 13:07:34.996: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 10 13:07:35.012: INFO: Pod pod-with-prestop-http-hook still exists
Feb 10 13:07:36.995: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 10 13:07:37.003: INFO: Pod pod-with-prestop-http-hook still exists
Feb 10 13:07:38.995: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 10 13:07:39.003: INFO: Pod pod-with-prestop-http-hook still exists
Feb 10 13:07:40.995: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 10 13:07:41.792: INFO: Pod pod-with-prestop-http-hook still exists
Feb 10 13:07:42.995: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 10 13:07:43.001: INFO: Pod pod-with-prestop-http-hook still exists
Feb 10 13:07:44.995: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 10 13:07:45.003: INFO: Pod pod-with-prestop-http-hook still exists
Feb 10 13:07:46.995: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 10 13:07:49.466: INFO: Pod pod-with-prestop-http-hook still exists
Feb 10 13:07:50.995: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 10 13:07:51.001: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:07:51.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4341" for this suite.
Feb 10 13:08:13.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:08:13.191: INFO: namespace container-lifecycle-hook-4341 deletion completed in 22.157519766s

• [SLOW TEST:62.602 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:08:13.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 10 13:08:13.312: INFO: Waiting up to 5m0s for pod "pod-d74691a6-2226-4117-9378-8a3cc511c922" in namespace "emptydir-5740" to be "success or failure"
Feb 10 13:08:13.342: INFO: Pod "pod-d74691a6-2226-4117-9378-8a3cc511c922": Phase="Pending", Reason="", readiness=false. Elapsed: 30.296185ms
Feb 10 13:08:15.355: INFO: Pod "pod-d74691a6-2226-4117-9378-8a3cc511c922": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043081897s
Feb 10 13:08:17.366: INFO: Pod "pod-d74691a6-2226-4117-9378-8a3cc511c922": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053756884s
Feb 10 13:08:19.378: INFO: Pod "pod-d74691a6-2226-4117-9378-8a3cc511c922": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066244554s
Feb 10 13:08:21.388: INFO: Pod "pod-d74691a6-2226-4117-9378-8a3cc511c922": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076029618s
Feb 10 13:08:23.400: INFO: Pod "pod-d74691a6-2226-4117-9378-8a3cc511c922": Phase="Pending", Reason="", readiness=false. Elapsed: 10.088554286s
Feb 10 13:08:25.408: INFO: Pod "pod-d74691a6-2226-4117-9378-8a3cc511c922": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.096155829s
STEP: Saw pod success
Feb 10 13:08:25.408: INFO: Pod "pod-d74691a6-2226-4117-9378-8a3cc511c922" satisfied condition "success or failure"
Feb 10 13:08:25.411: INFO: Trying to get logs from node iruya-node pod pod-d74691a6-2226-4117-9378-8a3cc511c922 container test-container: <nil>
STEP: delete the pod
Feb 10 13:08:25.492: INFO: Waiting for pod pod-d74691a6-2226-4117-9378-8a3cc511c922 to disappear
Feb 10 13:08:25.503: INFO: Pod pod-d74691a6-2226-4117-9378-8a3cc511c922 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:08:25.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5740" for this suite.
Feb 10 13:08:31.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 10 13:08:31.688: INFO: namespace emptydir-5740 deletion completed in 6.180107373s • [SLOW TEST:18.497 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 10 13:08:31.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 10 13:08:31.832: INFO: Waiting up to 5m0s for pod "pod-5f4783d2-607c-4dd9-aef7-6bc28a84433f" in namespace "emptydir-7108" to be "success or failure" Feb 10 13:08:31.850: INFO: Pod "pod-5f4783d2-607c-4dd9-aef7-6bc28a84433f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.507658ms Feb 10 13:08:33.870: INFO: Pod "pod-5f4783d2-607c-4dd9-aef7-6bc28a84433f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038706365s Feb 10 13:08:35.885: INFO: Pod "pod-5f4783d2-607c-4dd9-aef7-6bc28a84433f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.053488578s Feb 10 13:08:37.900: INFO: Pod "pod-5f4783d2-607c-4dd9-aef7-6bc28a84433f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068118275s Feb 10 13:08:39.910: INFO: Pod "pod-5f4783d2-607c-4dd9-aef7-6bc28a84433f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.07859726s STEP: Saw pod success Feb 10 13:08:39.910: INFO: Pod "pod-5f4783d2-607c-4dd9-aef7-6bc28a84433f" satisfied condition "success or failure" Feb 10 13:08:39.916: INFO: Trying to get logs from node iruya-node pod pod-5f4783d2-607c-4dd9-aef7-6bc28a84433f container test-container: STEP: delete the pod Feb 10 13:08:39.998: INFO: Waiting for pod pod-5f4783d2-607c-4dd9-aef7-6bc28a84433f to disappear Feb 10 13:08:40.006: INFO: Pod pod-5f4783d2-607c-4dd9-aef7-6bc28a84433f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 10 13:08:40.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7108" for this suite. 
Feb 10 13:08:46.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 10 13:08:46.259: INFO: namespace emptydir-7108 deletion completed in 6.214425645s • [SLOW TEST:14.571 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 10 13:08:46.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 10 13:08:46.522: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/:
alternatives.log alternatives.l... (200; 15.229703ms)
Feb 10 13:08:46.527: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.26916ms)
Feb 10 13:08:46.535: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.237127ms)
Feb 10 13:08:46.539: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.39259ms)
Feb 10 13:08:46.544: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.200174ms)
Feb 10 13:08:46.550: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.312129ms)
Feb 10 13:08:46.555: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.038162ms)
Feb 10 13:08:46.559: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.077529ms)
Feb 10 13:08:46.564: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.735763ms)
Feb 10 13:08:46.568: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.3492ms)
Feb 10 13:08:46.572: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.727292ms)
Feb 10 13:08:46.575: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.161467ms)
Feb 10 13:08:46.579: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.748552ms)
Feb 10 13:08:46.582: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.573162ms)
Feb 10 13:08:46.586: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.074247ms)
Feb 10 13:08:46.590: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.142086ms)
Feb 10 13:08:46.596: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.468382ms)
Feb 10 13:08:46.602: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.453108ms)
Feb 10 13:08:46.606: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.779997ms)
Feb 10 13:08:46.610: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.790055ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:08:46.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-9534" for this suite.
Feb 10 13:08:52.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:08:52.840: INFO: namespace proxy-9534 deletion completed in 6.227192509s

• [SLOW TEST:6.582 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
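The twenty probes above all request the kubelet's log directory through the apiserver's node proxy subresource. On a live cluster, the same request the test issues can be reproduced by hand; this is a sketch using the node name and kubelet port from this run (requires a working kubeconfig and RBAC permission on nodes/proxy):

```shell
# Fetch the kubelet log directory listing via the apiserver proxy,
# using the same path the test hits (explicit kubelet port 10250).
kubectl get --raw "/api/v1/nodes/iruya-node:10250/proxy/logs/"
```

The response body is the node's /var/log directory listing (alternatives.log and so on), which is what appears truncated in the log lines above.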
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:08:52.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:08:59.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4483" for this suite.
Feb 10 13:09:05.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:09:05.505: INFO: namespace namespaces-4483 deletion completed in 6.218430716s
STEP: Destroying namespace "nsdeletetest-7298" for this suite.
Feb 10 13:09:05.507: INFO: Namespace nsdeletetest-7298 was already deleted
STEP: Destroying namespace "nsdeletetest-183" for this suite.
Feb 10 13:09:11.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:09:11.680: INFO: namespace nsdeletetest-183 deletion completed in 6.173010307s

• [SLOW TEST:18.839 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
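The STEP sequence above (create namespace, create service, delete namespace, recreate, verify no service remains) can be approximated manually. A sketch against a live cluster, with hypothetical resource names:

```shell
# Create a throwaway namespace and a service inside it.
kubectl create namespace nsdelete-demo
kubectl create service clusterip demo-svc --tcp=80:80 -n nsdelete-demo

# Deleting the namespace cascades to every object it contains.
kubectl delete namespace nsdelete-demo --wait=true

# Recreating the namespace yields an empty one: no service is listed.
kubectl create namespace nsdelete-demo
kubectl get services -n nsdelete-demo
```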
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:09:11.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-35267c37-9739-43ac-b849-7dedf8e97c1c
STEP: Creating a pod to test consume configMaps
Feb 10 13:09:11.831: INFO: Waiting up to 5m0s for pod "pod-configmaps-c42bec8f-7c89-476b-9059-805f1ba36cb9" in namespace "configmap-7935" to be "success or failure"
Feb 10 13:09:11.841: INFO: Pod "pod-configmaps-c42bec8f-7c89-476b-9059-805f1ba36cb9": Phase="Pending", Reason="", readiness=false. Elapsed: 9.69693ms
Feb 10 13:09:13.854: INFO: Pod "pod-configmaps-c42bec8f-7c89-476b-9059-805f1ba36cb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022695243s
Feb 10 13:09:15.862: INFO: Pod "pod-configmaps-c42bec8f-7c89-476b-9059-805f1ba36cb9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030742556s
Feb 10 13:09:17.874: INFO: Pod "pod-configmaps-c42bec8f-7c89-476b-9059-805f1ba36cb9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042695818s
Feb 10 13:09:19.881: INFO: Pod "pod-configmaps-c42bec8f-7c89-476b-9059-805f1ba36cb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049567638s
STEP: Saw pod success
Feb 10 13:09:19.881: INFO: Pod "pod-configmaps-c42bec8f-7c89-476b-9059-805f1ba36cb9" satisfied condition "success or failure"
Feb 10 13:09:19.885: INFO: Trying to get logs from node iruya-node pod pod-configmaps-c42bec8f-7c89-476b-9059-805f1ba36cb9 container configmap-volume-test: 
STEP: delete the pod
Feb 10 13:09:19.968: INFO: Waiting for pod pod-configmaps-c42bec8f-7c89-476b-9059-805f1ba36cb9 to disappear
Feb 10 13:09:19.979: INFO: Pod pod-configmaps-c42bec8f-7c89-476b-9059-805f1ba36cb9 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:09:19.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7935" for this suite.
Feb 10 13:09:26.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:09:26.236: INFO: namespace configmap-7935 deletion completed in 6.174019473s

• [SLOW TEST:14.556 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
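The test above mounts a ConfigMap volume with `defaultMode` set and verifies the projected file's permissions from inside the pod. A hand-written equivalent might look like the following (names are hypothetical; `0644` is one representative mode, as this run's generated pod is not reproducible):

```shell
kubectl create configmap demo-cm --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-mode-test
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/cm && cat /etc/cm/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: demo-cm
      defaultMode: 0644   # permission bits applied to every projected file
EOF
```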
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:09:26.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-19148799-fe9c-4460-983c-b977a95370da
STEP: Creating a pod to test consume secrets
Feb 10 13:09:26.396: INFO: Waiting up to 5m0s for pod "pod-secrets-3de33283-cc3d-4541-9671-70b61e77b460" in namespace "secrets-811" to be "success or failure"
Feb 10 13:09:26.461: INFO: Pod "pod-secrets-3de33283-cc3d-4541-9671-70b61e77b460": Phase="Pending", Reason="", readiness=false. Elapsed: 64.902816ms
Feb 10 13:09:28.477: INFO: Pod "pod-secrets-3de33283-cc3d-4541-9671-70b61e77b460": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0812111s
Feb 10 13:09:30.492: INFO: Pod "pod-secrets-3de33283-cc3d-4541-9671-70b61e77b460": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095762348s
Feb 10 13:09:32.510: INFO: Pod "pod-secrets-3de33283-cc3d-4541-9671-70b61e77b460": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114100997s
Feb 10 13:09:34.521: INFO: Pod "pod-secrets-3de33283-cc3d-4541-9671-70b61e77b460": Phase="Pending", Reason="", readiness=false. Elapsed: 8.124785376s
Feb 10 13:09:36.532: INFO: Pod "pod-secrets-3de33283-cc3d-4541-9671-70b61e77b460": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.136423809s
STEP: Saw pod success
Feb 10 13:09:36.532: INFO: Pod "pod-secrets-3de33283-cc3d-4541-9671-70b61e77b460" satisfied condition "success or failure"
Feb 10 13:09:36.536: INFO: Trying to get logs from node iruya-node pod pod-secrets-3de33283-cc3d-4541-9671-70b61e77b460 container secret-volume-test: 
STEP: delete the pod
Feb 10 13:09:36.908: INFO: Waiting for pod pod-secrets-3de33283-cc3d-4541-9671-70b61e77b460 to disappear
Feb 10 13:09:36.926: INFO: Pod pod-secrets-3de33283-cc3d-4541-9671-70b61e77b460 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:09:36.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-811" for this suite.
Feb 10 13:09:42.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:09:43.078: INFO: namespace secrets-811 deletion completed in 6.142595708s

• [SLOW TEST:16.841 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
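In the Secrets test above, "mappings and Item Mode" refers to the `items` list on a secret volume, which remaps a secret key to a different file path and can override the per-file mode. A hypothetical manifest fragment illustrating those fields:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-item-mode-test
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map   # hypothetical; the run above uses a generated name
      items:
      - key: data-1                 # remap this secret key...
        path: new-path-data-1       # ...to this file name in the volume
        mode: 0400                  # per-item permission override
EOF
```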
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:09:43.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 10 13:09:43.185: INFO: Waiting up to 5m0s for pod "pod-b52cbcc2-895b-4602-bd3b-980a436dec60" in namespace "emptydir-1097" to be "success or failure"
Feb 10 13:09:43.189: INFO: Pod "pod-b52cbcc2-895b-4602-bd3b-980a436dec60": Phase="Pending", Reason="", readiness=false. Elapsed: 3.821848ms
Feb 10 13:09:45.195: INFO: Pod "pod-b52cbcc2-895b-4602-bd3b-980a436dec60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009902323s
Feb 10 13:09:47.206: INFO: Pod "pod-b52cbcc2-895b-4602-bd3b-980a436dec60": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021302345s
Feb 10 13:09:49.212: INFO: Pod "pod-b52cbcc2-895b-4602-bd3b-980a436dec60": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027062639s
Feb 10 13:09:51.228: INFO: Pod "pod-b52cbcc2-895b-4602-bd3b-980a436dec60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.043225362s
STEP: Saw pod success
Feb 10 13:09:51.228: INFO: Pod "pod-b52cbcc2-895b-4602-bd3b-980a436dec60" satisfied condition "success or failure"
Feb 10 13:09:51.231: INFO: Trying to get logs from node iruya-node pod pod-b52cbcc2-895b-4602-bd3b-980a436dec60 container test-container: 
STEP: delete the pod
Feb 10 13:09:51.558: INFO: Waiting for pod pod-b52cbcc2-895b-4602-bd3b-980a436dec60 to disappear
Feb 10 13:09:51.573: INFO: Pod pod-b52cbcc2-895b-4602-bd3b-980a436dec60 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:09:51.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1097" for this suite.
Feb 10 13:09:57.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:09:57.704: INFO: namespace emptydir-1097 deletion completed in 6.124142535s

• [SLOW TEST:14.625 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
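The emptyDir permutations in this suite vary the user, mode, and medium. On the "default" medium the volume is backed by node disk; the tmpfs variants set `medium: Memory`. A minimal hypothetical pod showing both forms of the volume spec:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-test
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}          # default medium (node disk); use "medium: Memory" for tmpfs
EOF
```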
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:09:57.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Feb 10 13:09:57.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6701'
Feb 10 13:09:59.892: INFO: stderr: ""
Feb 10 13:09:59.892: INFO: stdout: "pod/pause created\n"
Feb 10 13:09:59.892: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb 10 13:09:59.893: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6701" to be "running and ready"
Feb 10 13:09:59.991: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 98.247836ms
Feb 10 13:10:02.003: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109964379s
Feb 10 13:10:04.015: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121974842s
Feb 10 13:10:06.032: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139895859s
Feb 10 13:10:08.054: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.160923229s
Feb 10 13:10:10.062: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.169102249s
Feb 10 13:10:10.062: INFO: Pod "pause" satisfied condition "running and ready"
Feb 10 13:10:10.062: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Feb 10 13:10:10.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6701'
Feb 10 13:10:10.231: INFO: stderr: ""
Feb 10 13:10:10.231: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb 10 13:10:10.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6701'
Feb 10 13:10:10.355: INFO: stderr: ""
Feb 10 13:10:10.355: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb 10 13:10:10.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-6701'
Feb 10 13:10:10.461: INFO: stderr: ""
Feb 10 13:10:10.461: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb 10 13:10:10.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6701'
Feb 10 13:10:10.590: INFO: stderr: ""
Feb 10 13:10:10.590: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Feb 10 13:10:10.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6701'
Feb 10 13:10:10.739: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 10 13:10:10.739: INFO: stdout: "pod \"pause\" force deleted\n"
Feb 10 13:10:10.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6701'
Feb 10 13:10:10.907: INFO: stderr: "No resources found.\n"
Feb 10 13:10:10.907: INFO: stdout: ""
Feb 10 13:10:10.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6701 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 10 13:10:11.081: INFO: stderr: ""
Feb 10 13:10:11.081: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:10:11.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6701" for this suite.
Feb 10 13:10:17.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:10:17.229: INFO: namespace kubectl-6701 deletion completed in 6.142790456s

• [SLOW TEST:19.525 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
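The kubectl invocations recorded in the test above reduce to a short sequence; here it is cleaned up, using the pod name and namespace from this run (the namespace no longer exists after the suite tears it down):

```shell
# Add a label to the running pod, then read it back as a column.
kubectl label pods pause testing-label=testing-label-value -n kubectl-6701
kubectl get pod pause -L testing-label -n kubectl-6701

# A trailing "-" on the key removes the label.
kubectl label pods pause testing-label- -n kubectl-6701
kubectl get pod pause -L testing-label -n kubectl-6701
```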
SSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:10:17.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:10:17.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3306" for this suite.
Feb 10 13:10:41.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:10:41.891: INFO: namespace pods-3306 deletion completed in 24.407130304s

• [SLOW TEST:24.662 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
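The "verifying QOS class is set" step above checks the `status.qosClass` field, which the scheduler derives from the pod's resource requests and limits (Guaranteed, Burstable, or BestEffort). It can be inspected directly on any pod, for example (pod name hypothetical):

```shell
# Print the QoS class Kubernetes assigned to the pod.
kubectl get pod pause -o jsonpath='{.status.qosClass}'
```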
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:10:41.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:10:50.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-4892" for this suite.
Feb 10 13:10:56.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:10:56.442: INFO: namespace emptydir-wrapper-4892 deletion completed in 6.198071882s

• [SLOW TEST:14.550 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:10:56.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 10 13:10:56.602: INFO: Waiting up to 5m0s for pod "downwardapi-volume-05dc849b-a7b5-4dff-86c8-90a7cbe2a8ec" in namespace "downward-api-2311" to be "success or failure"
Feb 10 13:10:56.623: INFO: Pod "downwardapi-volume-05dc849b-a7b5-4dff-86c8-90a7cbe2a8ec": Phase="Pending", Reason="", readiness=false. Elapsed: 21.065369ms
Feb 10 13:10:58.643: INFO: Pod "downwardapi-volume-05dc849b-a7b5-4dff-86c8-90a7cbe2a8ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040970897s
Feb 10 13:11:00.653: INFO: Pod "downwardapi-volume-05dc849b-a7b5-4dff-86c8-90a7cbe2a8ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050553126s
Feb 10 13:11:02.665: INFO: Pod "downwardapi-volume-05dc849b-a7b5-4dff-86c8-90a7cbe2a8ec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062636626s
Feb 10 13:11:04.680: INFO: Pod "downwardapi-volume-05dc849b-a7b5-4dff-86c8-90a7cbe2a8ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077703828s
STEP: Saw pod success
Feb 10 13:11:04.680: INFO: Pod "downwardapi-volume-05dc849b-a7b5-4dff-86c8-90a7cbe2a8ec" satisfied condition "success or failure"
Feb 10 13:11:04.695: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-05dc849b-a7b5-4dff-86c8-90a7cbe2a8ec container client-container: 
STEP: delete the pod
Feb 10 13:11:05.085: INFO: Waiting for pod downwardapi-volume-05dc849b-a7b5-4dff-86c8-90a7cbe2a8ec to disappear
Feb 10 13:11:05.095: INFO: Pod downwardapi-volume-05dc849b-a7b5-4dff-86c8-90a7cbe2a8ec no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:11:05.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2311" for this suite.
Feb 10 13:11:11.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:11:11.281: INFO: namespace downward-api-2311 deletion completed in 6.180702479s

• [SLOW TEST:14.839 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
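For context, the Downward API test above exercises a pod whose container sets no memory limit, so the downward API `limits.memory` field falls back to the node's allocatable memory. A minimal sketch of that kind of manifest (illustrative names, not the test's actual generated spec):

```yaml
# Illustrative sketch, not the e2e test's actual manifest.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # No resources.limits.memory is set, so the downward API value
    # resolves to the node's allocatable memory.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```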
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:11:11.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 10 13:11:27.511: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 10 13:11:27.526: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 10 13:11:29.526: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 10 13:11:29.535: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 10 13:11:31.527: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 10 13:11:31.534: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 10 13:11:33.527: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 10 13:11:33.533: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 10 13:11:35.527: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 10 13:11:35.538: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 10 13:11:37.527: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 10 13:11:37.535: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 10 13:11:39.527: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 10 13:11:39.536: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 10 13:11:41.527: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 10 13:11:41.537: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 10 13:11:43.527: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 10 13:11:43.533: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 10 13:11:45.527: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 10 13:11:45.540: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 10 13:11:47.527: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 10 13:11:47.534: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 10 13:11:49.527: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 10 13:11:49.539: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:11:49.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2695" for this suite.
Feb 10 13:12:11.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:12:11.760: INFO: namespace container-lifecycle-hook-2695 deletion completed in 22.174367277s

• [SLOW TEST:60.478 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
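The lifecycle-hook test above creates a handler pod, then a pod with a preStop exec hook, deletes it, and checks the handler saw the hook fire (the long "still exists" loop is the grace-period wait during deletion). A sketch of the hook shape being tested; the handler endpoint here is a hypothetical placeholder, not the test's real helper:

```yaml
# Illustrative sketch of a preStop exec hook; the handler URL is a
# made-up placeholder, not the e2e test's actual helper pod.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          # Runs inside the container before termination; the test
          # verifies the handler received this request.
          command: ["sh", "-c", "wget -qO- http://handler-pod:8080/echo?msg=prestop"]
```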
SSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:12:11.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-5c9193fa-6195-467e-ae87-e195e8236023
STEP: Creating secret with name s-test-opt-upd-b9485353-5c26-4a32-8d76-fa48af995200
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-5c9193fa-6195-467e-ae87-e195e8236023
STEP: Updating secret s-test-opt-upd-b9485353-5c26-4a32-8d76-fa48af995200
STEP: Creating secret with name s-test-opt-create-bcac96be-54be-41b3-be18-80d41fbdf1f1
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:13:44.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1846" for this suite.
Feb 10 13:14:08.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:14:08.481: INFO: namespace projected-1846 deletion completed in 24.179394321s

• [SLOW TEST:116.721 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
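The projected-secret test above mounts optional secrets, then deletes one, updates one, and creates one, waiting for all three changes to appear in the volume. A minimal sketch of the kind of pod spec involved (illustrative names; `optional: true` is what lets the pod start and keep running while a referenced secret is absent):

```yaml
# Illustrative sketch: projected volume with optional secrets.
# Secret deletes/updates/creates are reflected in the mounted files,
# which is what the test waits to observe.
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-example
spec:
  containers:
  - name: reader
    image: busybox:1.29
    command: ["sh", "-c", "while true; do ls /etc/projected; sleep 5; done"]
    volumeMounts:
    - name: projected-vol
      mountPath: /etc/projected
  volumes:
  - name: projected-vol
    projected:
      sources:
      - secret:
          name: s-test-opt-del      # deleted while mounted
          optional: true
      - secret:
          name: s-test-opt-create   # does not exist yet at pod creation
          optional: true
```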
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:14:08.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Feb 10 13:14:08.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5312 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb 10 13:14:16.410: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0210 13:14:15.294780     194 log.go:172] (0xc0007e6790) (0xc00052af00) Create stream\nI0210 13:14:15.294887     194 log.go:172] (0xc0007e6790) (0xc00052af00) Stream added, broadcasting: 1\nI0210 13:14:15.307206     194 log.go:172] (0xc0007e6790) Reply frame received for 1\nI0210 13:14:15.307331     194 log.go:172] (0xc0007e6790) (0xc0009d6000) Create stream\nI0210 13:14:15.307380     194 log.go:172] (0xc0007e6790) (0xc0009d6000) Stream added, broadcasting: 3\nI0210 13:14:15.309856     194 log.go:172] (0xc0007e6790) Reply frame received for 3\nI0210 13:14:15.309895     194 log.go:172] (0xc0007e6790) (0xc00052afa0) Create stream\nI0210 13:14:15.309904     194 log.go:172] (0xc0007e6790) (0xc00052afa0) Stream added, broadcasting: 5\nI0210 13:14:15.311603     194 log.go:172] (0xc0007e6790) Reply frame received for 5\nI0210 13:14:15.311649     194 log.go:172] (0xc0007e6790) (0xc0005a2500) Create stream\nI0210 13:14:15.311669     194 log.go:172] (0xc0007e6790) (0xc0005a2500) Stream added, broadcasting: 7\nI0210 13:14:15.313970     194 log.go:172] (0xc0007e6790) Reply frame received for 7\nI0210 13:14:15.315154     194 log.go:172] (0xc0009d6000) (3) Writing data frame\nI0210 13:14:15.315680     194 log.go:172] (0xc0009d6000) (3) Writing data frame\nI0210 13:14:15.328584     194 log.go:172] (0xc0007e6790) Data frame received for 5\nI0210 13:14:15.328626     194 log.go:172] (0xc00052afa0) (5) Data frame handling\nI0210 13:14:15.328655     194 log.go:172] (0xc00052afa0) (5) Data frame sent\nI0210 13:14:15.333434     194 log.go:172] (0xc0007e6790) Data frame received for 5\nI0210 13:14:15.333472     194 log.go:172] (0xc00052afa0) (5) Data frame handling\nI0210 13:14:15.333492     194 log.go:172] (0xc00052afa0) (5) Data frame sent\nI0210 13:14:16.375954     194 log.go:172] (0xc0007e6790) Data frame received for 1\nI0210 13:14:16.376185     194 log.go:172] (0xc0007e6790) (0xc00052afa0) Stream removed, broadcasting: 5\nI0210 13:14:16.376354     194 log.go:172] (0xc00052af00) (1) Data frame handling\nI0210 13:14:16.376408     194 log.go:172] (0xc00052af00) (1) Data frame sent\nI0210 13:14:16.376452     194 log.go:172] (0xc0007e6790) (0xc0009d6000) Stream removed, broadcasting: 3\nI0210 13:14:16.376498     194 log.go:172] (0xc0007e6790) (0xc00052af00) Stream removed, broadcasting: 1\nI0210 13:14:16.376640     194 log.go:172] (0xc0007e6790) (0xc0005a2500) Stream removed, broadcasting: 7\nI0210 13:14:16.376766     194 log.go:172] (0xc0007e6790) Go away received\nI0210 13:14:16.376808     194 log.go:172] (0xc0007e6790) (0xc00052af00) Stream removed, broadcasting: 1\nI0210 13:14:16.377068     194 log.go:172] (0xc0007e6790) (0xc0009d6000) Stream removed, broadcasting: 3\nI0210 13:14:16.377103     194 log.go:172] (0xc0007e6790) (0xc00052afa0) Stream removed, broadcasting: 5\nI0210 13:14:16.377123     194 log.go:172] (0xc0007e6790) (0xc0005a2500) Stream removed, broadcasting: 7\n"
Feb 10 13:14:16.411: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:14:18.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5312" for this suite.
Feb 10 13:14:24.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:14:24.624: INFO: namespace kubectl-5312 deletion completed in 6.189875965s

• [SLOW TEST:16.143 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:14:24.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-9018131a-bae0-4d89-a5e2-dacf4c7d2dd6
Feb 10 13:14:24.775: INFO: Pod name my-hostname-basic-9018131a-bae0-4d89-a5e2-dacf4c7d2dd6: Found 0 pods out of 1
Feb 10 13:14:29.784: INFO: Pod name my-hostname-basic-9018131a-bae0-4d89-a5e2-dacf4c7d2dd6: Found 1 pods out of 1
Feb 10 13:14:29.784: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-9018131a-bae0-4d89-a5e2-dacf4c7d2dd6" are running
Feb 10 13:14:31.800: INFO: Pod "my-hostname-basic-9018131a-bae0-4d89-a5e2-dacf4c7d2dd6-h9lwq" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-10 13:14:24 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-10 13:14:24 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-9018131a-bae0-4d89-a5e2-dacf4c7d2dd6]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-10 13:14:24 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-9018131a-bae0-4d89-a5e2-dacf4c7d2dd6]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-10 13:14:24 +0000 UTC Reason: Message:}])
Feb 10 13:14:31.800: INFO: Trying to dial the pod
Feb 10 13:14:36.834: INFO: Controller my-hostname-basic-9018131a-bae0-4d89-a5e2-dacf4c7d2dd6: Got expected result from replica 1 [my-hostname-basic-9018131a-bae0-4d89-a5e2-dacf4c7d2dd6-h9lwq]: "my-hostname-basic-9018131a-bae0-4d89-a5e2-dacf4c7d2dd6-h9lwq", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:14:36.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-955" for this suite.
Feb 10 13:14:44.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:14:45.038: INFO: namespace replication-controller-955 deletion completed in 8.198015711s

• [SLOW TEST:20.414 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
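The ReplicationController test above creates an RC whose single replica serves its own hostname, then dials the pod and checks the response matches the pod name. A sketch of that kind of RC (image and port are assumptions based on the common e2e serve-hostname pattern, not read from this log):

```yaml
# Illustrative sketch; the image/port are assumptions, not taken
# from this log. Each replica serves its own hostname over HTTP.
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
        ports:
        - containerPort: 9376
```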
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:14:45.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-5899
I0210 13:14:45.132682       8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5899, replica count: 1
I0210 13:14:46.183284       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0210 13:14:47.183723       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0210 13:14:48.184010       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0210 13:14:49.184583       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0210 13:14:50.184965       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0210 13:14:51.185425       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0210 13:14:52.185999       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0210 13:14:53.186482       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 10 13:14:53.331: INFO: Created: latency-svc-frg72
Feb 10 13:14:53.345: INFO: Got endpoints: latency-svc-frg72 [58.420703ms]
Feb 10 13:14:53.457: INFO: Created: latency-svc-h5l8s
Feb 10 13:14:53.474: INFO: Got endpoints: latency-svc-h5l8s [129.389534ms]
Feb 10 13:14:53.519: INFO: Created: latency-svc-ndx87
Feb 10 13:14:53.673: INFO: Got endpoints: latency-svc-ndx87 [328.371952ms]
Feb 10 13:14:53.735: INFO: Created: latency-svc-cm8x8
Feb 10 13:14:53.741: INFO: Got endpoints: latency-svc-cm8x8 [395.669437ms]
Feb 10 13:14:53.831: INFO: Created: latency-svc-98t65
Feb 10 13:14:53.892: INFO: Got endpoints: latency-svc-98t65 [546.024409ms]
Feb 10 13:14:53.914: INFO: Created: latency-svc-pctd9
Feb 10 13:14:53.991: INFO: Got endpoints: latency-svc-pctd9 [645.669181ms]
Feb 10 13:14:54.026: INFO: Created: latency-svc-pkvh4
Feb 10 13:14:54.035: INFO: Got endpoints: latency-svc-pkvh4 [689.443538ms]
Feb 10 13:14:54.085: INFO: Created: latency-svc-979f6
Feb 10 13:14:54.190: INFO: Got endpoints: latency-svc-979f6 [844.27896ms]
Feb 10 13:14:54.227: INFO: Created: latency-svc-5rqqg
Feb 10 13:14:54.245: INFO: Got endpoints: latency-svc-5rqqg [899.310928ms]
Feb 10 13:14:54.380: INFO: Created: latency-svc-npcq6
Feb 10 13:14:54.388: INFO: Got endpoints: latency-svc-npcq6 [1.042097348s]
Feb 10 13:14:54.445: INFO: Created: latency-svc-jzksm
Feb 10 13:14:54.546: INFO: Got endpoints: latency-svc-jzksm [1.20031772s]
Feb 10 13:14:54.585: INFO: Created: latency-svc-84mpr
Feb 10 13:14:54.601: INFO: Got endpoints: latency-svc-84mpr [1.255486762s]
Feb 10 13:14:54.745: INFO: Created: latency-svc-z9dvg
Feb 10 13:14:54.757: INFO: Got endpoints: latency-svc-z9dvg [211.014383ms]
Feb 10 13:14:54.878: INFO: Created: latency-svc-c5gnd
Feb 10 13:14:54.905: INFO: Got endpoints: latency-svc-c5gnd [1.559251377s]
Feb 10 13:14:55.056: INFO: Created: latency-svc-jqhnd
Feb 10 13:14:55.095: INFO: Got endpoints: latency-svc-jqhnd [1.749365015s]
Feb 10 13:14:55.254: INFO: Created: latency-svc-s24cb
Feb 10 13:14:55.260: INFO: Got endpoints: latency-svc-s24cb [1.914645352s]
Feb 10 13:14:55.330: INFO: Created: latency-svc-2s24k
Feb 10 13:14:55.343: INFO: Got endpoints: latency-svc-2s24k [1.997146877s]
Feb 10 13:14:55.438: INFO: Created: latency-svc-dsxq6
Feb 10 13:14:55.477: INFO: Got endpoints: latency-svc-dsxq6 [2.002263891s]
Feb 10 13:14:55.492: INFO: Created: latency-svc-nlght
Feb 10 13:14:55.508: INFO: Got endpoints: latency-svc-nlght [1.834806489s]
Feb 10 13:14:55.680: INFO: Created: latency-svc-5dn9q
Feb 10 13:14:55.698: INFO: Got endpoints: latency-svc-5dn9q [1.957113095s]
Feb 10 13:14:55.765: INFO: Created: latency-svc-l65lk
Feb 10 13:14:55.852: INFO: Created: latency-svc-q8q8h
Feb 10 13:14:55.854: INFO: Got endpoints: latency-svc-l65lk [1.961911078s]
Feb 10 13:14:55.862: INFO: Got endpoints: latency-svc-q8q8h [1.87050441s]
Feb 10 13:14:56.065: INFO: Created: latency-svc-6lt4m
Feb 10 13:14:56.078: INFO: Got endpoints: latency-svc-6lt4m [2.042761446s]
Feb 10 13:14:56.138: INFO: Created: latency-svc-zm7qf
Feb 10 13:14:56.150: INFO: Got endpoints: latency-svc-zm7qf [1.959611472s]
Feb 10 13:14:56.419: INFO: Created: latency-svc-4zb22
Feb 10 13:14:57.009: INFO: Got endpoints: latency-svc-4zb22 [2.763532497s]
Feb 10 13:14:57.117: INFO: Created: latency-svc-s25fn
Feb 10 13:14:57.118: INFO: Got endpoints: latency-svc-s25fn [2.729917624s]
Feb 10 13:14:57.291: INFO: Created: latency-svc-xgz57
Feb 10 13:14:57.329: INFO: Got endpoints: latency-svc-xgz57 [2.72747419s]
Feb 10 13:14:57.464: INFO: Created: latency-svc-7kg9x
Feb 10 13:14:57.492: INFO: Got endpoints: latency-svc-7kg9x [2.734779931s]
Feb 10 13:14:57.606: INFO: Created: latency-svc-c6l5c
Feb 10 13:14:57.618: INFO: Got endpoints: latency-svc-c6l5c [2.713396219s]
Feb 10 13:14:57.675: INFO: Created: latency-svc-67pgg
Feb 10 13:14:57.683: INFO: Got endpoints: latency-svc-67pgg [2.587743558s]
Feb 10 13:14:57.817: INFO: Created: latency-svc-r6v7l
Feb 10 13:14:57.818: INFO: Got endpoints: latency-svc-r6v7l [2.557137341s]
Feb 10 13:14:57.876: INFO: Created: latency-svc-cvr8q
Feb 10 13:14:57.940: INFO: Got endpoints: latency-svc-cvr8q [2.597066584s]
Feb 10 13:14:58.000: INFO: Created: latency-svc-lbl94
Feb 10 13:14:58.002: INFO: Got endpoints: latency-svc-lbl94 [2.52468721s]
Feb 10 13:14:58.110: INFO: Created: latency-svc-sgwb4
Feb 10 13:14:58.118: INFO: Got endpoints: latency-svc-sgwb4 [2.609631231s]
Feb 10 13:14:58.161: INFO: Created: latency-svc-gr9cj
Feb 10 13:14:58.208: INFO: Got endpoints: latency-svc-gr9cj [2.50949276s]
Feb 10 13:14:58.264: INFO: Created: latency-svc-dgzlq
Feb 10 13:14:58.277: INFO: Got endpoints: latency-svc-dgzlq [2.422778103s]
Feb 10 13:14:58.348: INFO: Created: latency-svc-bhbs4
Feb 10 13:14:58.420: INFO: Got endpoints: latency-svc-bhbs4 [2.558081884s]
Feb 10 13:14:58.421: INFO: Created: latency-svc-4kwf6
Feb 10 13:14:58.433: INFO: Got endpoints: latency-svc-4kwf6 [2.35502762s]
Feb 10 13:14:58.507: INFO: Created: latency-svc-pspnw
Feb 10 13:14:58.612: INFO: Got endpoints: latency-svc-pspnw [2.461846344s]
Feb 10 13:14:58.633: INFO: Created: latency-svc-6v769
Feb 10 13:14:58.655: INFO: Got endpoints: latency-svc-6v769 [1.645951506s]
Feb 10 13:14:58.812: INFO: Created: latency-svc-cxt6x
Feb 10 13:14:58.818: INFO: Got endpoints: latency-svc-cxt6x [1.700444681s]
Feb 10 13:14:58.896: INFO: Created: latency-svc-x69hq
Feb 10 13:14:58.896: INFO: Got endpoints: latency-svc-x69hq [1.566881517s]
Feb 10 13:14:58.969: INFO: Created: latency-svc-tcgmz
Feb 10 13:14:58.980: INFO: Got endpoints: latency-svc-tcgmz [1.487916033s]
Feb 10 13:14:59.022: INFO: Created: latency-svc-lhtn7
Feb 10 13:14:59.033: INFO: Got endpoints: latency-svc-lhtn7 [1.414256591s]
Feb 10 13:14:59.149: INFO: Created: latency-svc-t9vgp
Feb 10 13:14:59.159: INFO: Got endpoints: latency-svc-t9vgp [1.475253371s]
Feb 10 13:14:59.336: INFO: Created: latency-svc-tbsl4
Feb 10 13:14:59.337: INFO: Got endpoints: latency-svc-tbsl4 [1.518977206s]
Feb 10 13:14:59.563: INFO: Created: latency-svc-rkqxz
Feb 10 13:14:59.567: INFO: Got endpoints: latency-svc-rkqxz [1.627231993s]
Feb 10 13:14:59.645: INFO: Created: latency-svc-5hz6d
Feb 10 13:14:59.745: INFO: Got endpoints: latency-svc-5hz6d [1.742757331s]
Feb 10 13:14:59.808: INFO: Created: latency-svc-6lr5w
Feb 10 13:14:59.821: INFO: Got endpoints: latency-svc-6lr5w [1.702235369s]
Feb 10 13:14:59.946: INFO: Created: latency-svc-69pjk
Feb 10 13:14:59.980: INFO: Got endpoints: latency-svc-69pjk [1.772096133s]
Feb 10 13:15:00.018: INFO: Created: latency-svc-tqr7b
Feb 10 13:15:00.101: INFO: Got endpoints: latency-svc-tqr7b [1.824425032s]
Feb 10 13:15:00.120: INFO: Created: latency-svc-nmm2b
Feb 10 13:15:00.154: INFO: Got endpoints: latency-svc-nmm2b [1.73316061s]
Feb 10 13:15:00.202: INFO: Created: latency-svc-krh5x
Feb 10 13:15:00.270: INFO: Got endpoints: latency-svc-krh5x [1.837066333s]
Feb 10 13:15:00.318: INFO: Created: latency-svc-ll8gm
Feb 10 13:15:00.321: INFO: Got endpoints: latency-svc-ll8gm [1.708804173s]
Feb 10 13:15:00.505: INFO: Created: latency-svc-lxrtj
Feb 10 13:15:00.511: INFO: Got endpoints: latency-svc-lxrtj [1.855939047s]
Feb 10 13:15:00.606: INFO: Created: latency-svc-5hndl
Feb 10 13:15:00.661: INFO: Got endpoints: latency-svc-5hndl [1.842640246s]
Feb 10 13:15:00.731: INFO: Created: latency-svc-q9dpx
Feb 10 13:15:00.742: INFO: Got endpoints: latency-svc-q9dpx [1.845564263s]
Feb 10 13:15:00.851: INFO: Created: latency-svc-f249g
Feb 10 13:15:00.866: INFO: Got endpoints: latency-svc-f249g [1.885392616s]
Feb 10 13:15:00.927: INFO: Created: latency-svc-f7t9q
Feb 10 13:15:01.007: INFO: Got endpoints: latency-svc-f7t9q [1.974095771s]
Feb 10 13:15:01.042: INFO: Created: latency-svc-pk927
Feb 10 13:15:01.101: INFO: Created: latency-svc-hfl44
Feb 10 13:15:01.103: INFO: Got endpoints: latency-svc-pk927 [1.943895211s]
Feb 10 13:15:01.309: INFO: Got endpoints: latency-svc-hfl44 [1.972349709s]
Feb 10 13:15:01.408: INFO: Created: latency-svc-64f59
Feb 10 13:15:01.475: INFO: Got endpoints: latency-svc-64f59 [1.907361284s]
Feb 10 13:15:01.552: INFO: Created: latency-svc-bxmjm
Feb 10 13:15:01.712: INFO: Created: latency-svc-rsrhb
Feb 10 13:15:01.712: INFO: Got endpoints: latency-svc-bxmjm [1.967162638s]
Feb 10 13:15:01.722: INFO: Got endpoints: latency-svc-rsrhb [1.90155587s]
Feb 10 13:15:01.784: INFO: Created: latency-svc-c6gxc
Feb 10 13:15:01.790: INFO: Got endpoints: latency-svc-c6gxc [1.809858248s]
Feb 10 13:15:01.964: INFO: Created: latency-svc-7hgwh
Feb 10 13:15:01.981: INFO: Got endpoints: latency-svc-7hgwh [1.879417628s]
Feb 10 13:15:02.042: INFO: Created: latency-svc-bj8nh
Feb 10 13:15:02.057: INFO: Got endpoints: latency-svc-bj8nh [1.90359876s]
Feb 10 13:15:02.187: INFO: Created: latency-svc-v2f9g
Feb 10 13:15:02.199: INFO: Got endpoints: latency-svc-v2f9g [1.928570948s]
Feb 10 13:15:02.259: INFO: Created: latency-svc-bmwh7
Feb 10 13:15:02.309: INFO: Got endpoints: latency-svc-bmwh7 [1.987998458s]
Feb 10 13:15:02.344: INFO: Created: latency-svc-fbpqj
Feb 10 13:15:02.352: INFO: Got endpoints: latency-svc-fbpqj [1.840209572s]
Feb 10 13:15:02.486: INFO: Created: latency-svc-w765j
Feb 10 13:15:02.488: INFO: Got endpoints: latency-svc-w765j [1.827342847s]
Feb 10 13:15:02.654: INFO: Created: latency-svc-j5zlk
Feb 10 13:15:02.681: INFO: Got endpoints: latency-svc-j5zlk [1.93975375s]
Feb 10 13:15:02.707: INFO: Created: latency-svc-77qnp
Feb 10 13:15:02.746: INFO: Got endpoints: latency-svc-77qnp [1.879699567s]
Feb 10 13:15:02.746: INFO: Created: latency-svc-bz7l6
Feb 10 13:15:02.857: INFO: Got endpoints: latency-svc-bz7l6 [1.85025766s]
Feb 10 13:15:02.899: INFO: Created: latency-svc-dkpt5
Feb 10 13:15:02.924: INFO: Got endpoints: latency-svc-dkpt5 [1.82090214s]
Feb 10 13:15:03.033: INFO: Created: latency-svc-776dm
Feb 10 13:15:03.033: INFO: Got endpoints: latency-svc-776dm [1.723643639s]
Feb 10 13:15:03.093: INFO: Created: latency-svc-hkfx2
Feb 10 13:15:03.096: INFO: Got endpoints: latency-svc-hkfx2 [1.621386386s]
Feb 10 13:15:03.236: INFO: Created: latency-svc-kkjcx
Feb 10 13:15:03.270: INFO: Got endpoints: latency-svc-kkjcx [1.557060092s]
Feb 10 13:15:03.305: INFO: Created: latency-svc-2stwx
Feb 10 13:15:03.313: INFO: Got endpoints: latency-svc-2stwx [1.59051002s]
Feb 10 13:15:03.416: INFO: Created: latency-svc-qds5q
Feb 10 13:15:03.436: INFO: Got endpoints: latency-svc-qds5q [1.646168528s]
Feb 10 13:15:03.467: INFO: Created: latency-svc-nhmj6
Feb 10 13:15:03.568: INFO: Got endpoints: latency-svc-nhmj6 [1.587002943s]
Feb 10 13:15:03.607: INFO: Created: latency-svc-h2sfc
Feb 10 13:15:03.634: INFO: Got endpoints: latency-svc-h2sfc [1.576331745s]
Feb 10 13:15:03.759: INFO: Created: latency-svc-nw85t
Feb 10 13:15:03.771: INFO: Got endpoints: latency-svc-nw85t [1.57177953s]
Feb 10 13:15:03.858: INFO: Created: latency-svc-gw6tl
Feb 10 13:15:03.929: INFO: Got endpoints: latency-svc-gw6tl [1.620628939s]
Feb 10 13:15:03.984: INFO: Created: latency-svc-mfxp6
Feb 10 13:15:03.998: INFO: Got endpoints: latency-svc-mfxp6 [1.646196752s]
Feb 10 13:15:04.086: INFO: Created: latency-svc-67gvj
Feb 10 13:15:04.103: INFO: Got endpoints: latency-svc-67gvj [1.61484312s]
Feb 10 13:15:04.156: INFO: Created: latency-svc-g2862
Feb 10 13:15:04.281: INFO: Got endpoints: latency-svc-g2862 [1.599008092s]
Feb 10 13:15:04.294: INFO: Created: latency-svc-lh57m
Feb 10 13:15:04.327: INFO: Got endpoints: latency-svc-lh57m [1.581498096s]
Feb 10 13:15:04.373: INFO: Created: latency-svc-z82jk
Feb 10 13:15:04.374: INFO: Got endpoints: latency-svc-z82jk [1.51642941s]
Feb 10 13:15:04.492: INFO: Created: latency-svc-2xmtc
Feb 10 13:15:04.504: INFO: Got endpoints: latency-svc-2xmtc [1.579865718s]
Feb 10 13:15:04.639: INFO: Created: latency-svc-xp8hg
Feb 10 13:15:04.649: INFO: Got endpoints: latency-svc-xp8hg [1.615652187s]
Feb 10 13:15:04.705: INFO: Created: latency-svc-nhcj9
Feb 10 13:15:04.705: INFO: Got endpoints: latency-svc-nhcj9 [1.608082103s]
Feb 10 13:15:04.829: INFO: Created: latency-svc-jh9ll
Feb 10 13:15:04.833: INFO: Got endpoints: latency-svc-jh9ll [1.563203679s]
Feb 10 13:15:04.899: INFO: Created: latency-svc-2xxlr
Feb 10 13:15:04.902: INFO: Got endpoints: latency-svc-2xxlr [1.588813018s]
Feb 10 13:15:05.014: INFO: Created: latency-svc-xtbs9
Feb 10 13:15:05.024: INFO: Got endpoints: latency-svc-xtbs9 [1.588049268s]
Feb 10 13:15:05.069: INFO: Created: latency-svc-wrdm7
Feb 10 13:15:05.082: INFO: Got endpoints: latency-svc-wrdm7 [1.513619459s]
Feb 10 13:15:05.236: INFO: Created: latency-svc-7skdn
Feb 10 13:15:05.247: INFO: Got endpoints: latency-svc-7skdn [1.612599009s]
Feb 10 13:15:05.326: INFO: Created: latency-svc-v7qjs
Feb 10 13:15:05.433: INFO: Got endpoints: latency-svc-v7qjs [1.662013442s]
Feb 10 13:15:05.447: INFO: Created: latency-svc-wvxsw
Feb 10 13:15:05.451: INFO: Got endpoints: latency-svc-wvxsw [1.520691028s]
Feb 10 13:15:05.497: INFO: Created: latency-svc-882pm
Feb 10 13:15:05.514: INFO: Got endpoints: latency-svc-882pm [1.515512376s]
Feb 10 13:15:05.708: INFO: Created: latency-svc-hpjbx
Feb 10 13:15:05.722: INFO: Got endpoints: latency-svc-hpjbx [1.618309699s]
Feb 10 13:15:05.910: INFO: Created: latency-svc-6hh27
Feb 10 13:15:05.918: INFO: Got endpoints: latency-svc-6hh27 [1.637719766s]
Feb 10 13:15:06.061: INFO: Created: latency-svc-wpm4p
Feb 10 13:15:06.076: INFO: Got endpoints: latency-svc-wpm4p [1.74785714s]
Feb 10 13:15:06.135: INFO: Created: latency-svc-k9fsr
Feb 10 13:15:06.135: INFO: Got endpoints: latency-svc-k9fsr [1.760480014s]
Feb 10 13:15:06.229: INFO: Created: latency-svc-xh2k8
Feb 10 13:15:06.232: INFO: Got endpoints: latency-svc-xh2k8 [1.727930643s]
Feb 10 13:15:06.294: INFO: Created: latency-svc-7mz5l
Feb 10 13:15:06.320: INFO: Got endpoints: latency-svc-7mz5l [1.671054584s]
Feb 10 13:15:06.322: INFO: Created: latency-svc-h9dbd
Feb 10 13:15:06.562: INFO: Got endpoints: latency-svc-h9dbd [1.857233454s]
Feb 10 13:15:06.581: INFO: Created: latency-svc-dws2j
Feb 10 13:15:06.609: INFO: Got endpoints: latency-svc-dws2j [1.775541129s]
Feb 10 13:15:06.799: INFO: Created: latency-svc-xmzrt
Feb 10 13:15:06.827: INFO: Got endpoints: latency-svc-xmzrt [1.925122355s]
Feb 10 13:15:06.969: INFO: Created: latency-svc-w4cxq
Feb 10 13:15:07.188: INFO: Got endpoints: latency-svc-w4cxq [2.16391599s]
Feb 10 13:15:07.190: INFO: Created: latency-svc-d8qcp
Feb 10 13:15:07.202: INFO: Got endpoints: latency-svc-d8qcp [2.120101566s]
Feb 10 13:15:07.265: INFO: Created: latency-svc-cgfsz
Feb 10 13:15:07.272: INFO: Got endpoints: latency-svc-cgfsz [2.025295622s]
Feb 10 13:15:07.442: INFO: Created: latency-svc-9wsp7
Feb 10 13:15:07.713: INFO: Got endpoints: latency-svc-9wsp7 [2.279902422s]
Feb 10 13:15:07.724: INFO: Created: latency-svc-wgn4s
Feb 10 13:15:07.733: INFO: Got endpoints: latency-svc-wgn4s [2.281986052s]
Feb 10 13:15:07.801: INFO: Created: latency-svc-tfjjx
Feb 10 13:15:08.018: INFO: Got endpoints: latency-svc-tfjjx [2.504492254s]
Feb 10 13:15:08.092: INFO: Created: latency-svc-gmm26
Feb 10 13:15:08.092: INFO: Got endpoints: latency-svc-gmm26 [2.370397617s]
Feb 10 13:15:08.231: INFO: Created: latency-svc-djfn6
Feb 10 13:15:08.248: INFO: Got endpoints: latency-svc-djfn6 [2.329354672s]
Feb 10 13:15:08.302: INFO: Created: latency-svc-ftpn8
Feb 10 13:15:08.406: INFO: Got endpoints: latency-svc-ftpn8 [2.330307158s]
Feb 10 13:15:08.415: INFO: Created: latency-svc-s9cjw
Feb 10 13:15:08.421: INFO: Got endpoints: latency-svc-s9cjw [2.286072869s]
Feb 10 13:15:08.478: INFO: Created: latency-svc-khclg
Feb 10 13:15:08.498: INFO: Got endpoints: latency-svc-khclg [2.26605264s]
Feb 10 13:15:08.659: INFO: Created: latency-svc-d278q
Feb 10 13:15:08.702: INFO: Got endpoints: latency-svc-d278q [2.381853246s]
Feb 10 13:15:08.877: INFO: Created: latency-svc-5j5nd
Feb 10 13:15:08.886: INFO: Got endpoints: latency-svc-5j5nd [2.32385243s]
Feb 10 13:15:08.991: INFO: Created: latency-svc-zpstr
Feb 10 13:15:09.129: INFO: Got endpoints: latency-svc-zpstr [2.520112794s]
Feb 10 13:15:09.223: INFO: Created: latency-svc-5qcxh
Feb 10 13:15:09.423: INFO: Got endpoints: latency-svc-5qcxh [2.596165902s]
Feb 10 13:15:09.481: INFO: Created: latency-svc-nt6zm
Feb 10 13:15:09.496: INFO: Got endpoints: latency-svc-nt6zm [2.307672092s]
Feb 10 13:15:09.696: INFO: Created: latency-svc-k2n2p
Feb 10 13:15:09.712: INFO: Got endpoints: latency-svc-k2n2p [2.50992013s]
Feb 10 13:15:09.840: INFO: Created: latency-svc-wntmp
Feb 10 13:15:09.840: INFO: Got endpoints: latency-svc-wntmp [2.567122693s]
Feb 10 13:15:09.897: INFO: Created: latency-svc-xg9vf
Feb 10 13:15:10.050: INFO: Got endpoints: latency-svc-xg9vf [2.336546633s]
Feb 10 13:15:10.064: INFO: Created: latency-svc-x25h8
Feb 10 13:15:10.073: INFO: Got endpoints: latency-svc-x25h8 [2.339692395s]
Feb 10 13:15:10.342: INFO: Created: latency-svc-f4c67
Feb 10 13:15:10.352: INFO: Got endpoints: latency-svc-f4c67 [2.333835863s]
Feb 10 13:15:10.601: INFO: Created: latency-svc-648mt
Feb 10 13:15:10.613: INFO: Got endpoints: latency-svc-648mt [2.520532113s]
Feb 10 13:15:10.694: INFO: Created: latency-svc-jp4nw
Feb 10 13:15:10.793: INFO: Got endpoints: latency-svc-jp4nw [2.544673297s]
Feb 10 13:15:10.822: INFO: Created: latency-svc-lkpr4
Feb 10 13:15:10.839: INFO: Got endpoints: latency-svc-lkpr4 [2.43245356s]
Feb 10 13:15:11.099: INFO: Created: latency-svc-955r6
Feb 10 13:15:11.114: INFO: Got endpoints: latency-svc-955r6 [2.693546895s]
Feb 10 13:15:11.117: INFO: Created: latency-svc-zkrsq
Feb 10 13:15:11.248: INFO: Got endpoints: latency-svc-zkrsq [2.749723112s]
Feb 10 13:15:11.252: INFO: Created: latency-svc-rkgtv
Feb 10 13:15:11.340: INFO: Got endpoints: latency-svc-rkgtv [2.638515963s]
Feb 10 13:15:11.341: INFO: Created: latency-svc-5t94m
Feb 10 13:15:11.456: INFO: Got endpoints: latency-svc-5t94m [2.569968603s]
Feb 10 13:15:11.558: INFO: Created: latency-svc-7c56s
Feb 10 13:15:11.558: INFO: Got endpoints: latency-svc-7c56s [2.428666016s]
Feb 10 13:15:11.736: INFO: Created: latency-svc-kww7b
Feb 10 13:15:11.771: INFO: Got endpoints: latency-svc-kww7b [2.347201802s]
Feb 10 13:15:11.927: INFO: Created: latency-svc-74tkc
Feb 10 13:15:11.942: INFO: Got endpoints: latency-svc-74tkc [2.445484803s]
Feb 10 13:15:12.028: INFO: Created: latency-svc-jpwp2
Feb 10 13:15:12.211: INFO: Got endpoints: latency-svc-jpwp2 [2.498494184s]
Feb 10 13:15:12.263: INFO: Created: latency-svc-fln4f
Feb 10 13:15:12.283: INFO: Got endpoints: latency-svc-fln4f [2.443405928s]
Feb 10 13:15:12.449: INFO: Created: latency-svc-lr9n2
Feb 10 13:15:12.462: INFO: Got endpoints: latency-svc-lr9n2 [2.411610577s]
Feb 10 13:15:12.615: INFO: Created: latency-svc-l59k8
Feb 10 13:15:12.624: INFO: Got endpoints: latency-svc-l59k8 [2.550616497s]
Feb 10 13:15:12.693: INFO: Created: latency-svc-sk2qq
Feb 10 13:15:12.767: INFO: Got endpoints: latency-svc-sk2qq [2.414206405s]
Feb 10 13:15:12.808: INFO: Created: latency-svc-n78dz
Feb 10 13:15:12.818: INFO: Got endpoints: latency-svc-n78dz [2.204989532s]
Feb 10 13:15:13.138: INFO: Created: latency-svc-8t2hm
Feb 10 13:15:13.140: INFO: Got endpoints: latency-svc-8t2hm [2.346647988s]
Feb 10 13:15:13.192: INFO: Created: latency-svc-7d9vw
Feb 10 13:15:13.195: INFO: Got endpoints: latency-svc-7d9vw [2.356909206s]
Feb 10 13:15:13.282: INFO: Created: latency-svc-tvm5c
Feb 10 13:15:13.327: INFO: Got endpoints: latency-svc-tvm5c [2.212218126s]
Feb 10 13:15:13.477: INFO: Created: latency-svc-q72vf
Feb 10 13:15:13.514: INFO: Got endpoints: latency-svc-q72vf [2.265637714s]
Feb 10 13:15:13.514: INFO: Created: latency-svc-vsk4z
Feb 10 13:15:13.525: INFO: Got endpoints: latency-svc-vsk4z [2.184478104s]
Feb 10 13:15:13.625: INFO: Created: latency-svc-zjxp5
Feb 10 13:15:13.639: INFO: Got endpoints: latency-svc-zjxp5 [2.183208452s]
Feb 10 13:15:13.695: INFO: Created: latency-svc-9bmvc
Feb 10 13:15:13.697: INFO: Got endpoints: latency-svc-9bmvc [2.138821509s]
Feb 10 13:15:13.834: INFO: Created: latency-svc-lf9sx
Feb 10 13:15:13.849: INFO: Got endpoints: latency-svc-lf9sx [2.0774794s]
Feb 10 13:15:13.891: INFO: Created: latency-svc-trfvk
Feb 10 13:15:14.008: INFO: Got endpoints: latency-svc-trfvk [2.065755513s]
Feb 10 13:15:14.033: INFO: Created: latency-svc-tbxnf
Feb 10 13:15:14.049: INFO: Got endpoints: latency-svc-tbxnf [1.83871501s]
Feb 10 13:15:14.191: INFO: Created: latency-svc-s7dpm
Feb 10 13:15:14.207: INFO: Created: latency-svc-xxg5k
Feb 10 13:15:14.233: INFO: Got endpoints: latency-svc-s7dpm [1.94933128s]
Feb 10 13:15:14.234: INFO: Got endpoints: latency-svc-xxg5k [1.771777127s]
Feb 10 13:15:14.435: INFO: Created: latency-svc-99jkj
Feb 10 13:15:14.449: INFO: Got endpoints: latency-svc-99jkj [1.825481829s]
Feb 10 13:15:14.580: INFO: Created: latency-svc-gdk92
Feb 10 13:15:14.581: INFO: Got endpoints: latency-svc-gdk92 [1.814047059s]
Feb 10 13:15:14.657: INFO: Created: latency-svc-p72kq
Feb 10 13:15:14.747: INFO: Got endpoints: latency-svc-p72kq [1.928911478s]
Feb 10 13:15:14.777: INFO: Created: latency-svc-88pn5
Feb 10 13:15:14.788: INFO: Got endpoints: latency-svc-88pn5 [1.647888912s]
Feb 10 13:15:14.825: INFO: Created: latency-svc-jgwkr
Feb 10 13:15:14.833: INFO: Got endpoints: latency-svc-jgwkr [1.637629005s]
Feb 10 13:15:14.950: INFO: Created: latency-svc-lklwj
Feb 10 13:15:14.950: INFO: Got endpoints: latency-svc-lklwj [1.623634958s]
Feb 10 13:15:14.991: INFO: Created: latency-svc-6kb5c
Feb 10 13:15:15.010: INFO: Got endpoints: latency-svc-6kb5c [1.495766385s]
Feb 10 13:15:15.111: INFO: Created: latency-svc-ct7tg
Feb 10 13:15:15.157: INFO: Created: latency-svc-bnf5w
Feb 10 13:15:15.159: INFO: Got endpoints: latency-svc-ct7tg [1.633520678s]
Feb 10 13:15:15.196: INFO: Got endpoints: latency-svc-bnf5w [1.556152387s]
Feb 10 13:15:15.202: INFO: Created: latency-svc-fqqsh
Feb 10 13:15:15.261: INFO: Got endpoints: latency-svc-fqqsh [1.563798047s]
Feb 10 13:15:15.300: INFO: Created: latency-svc-dtpxq
Feb 10 13:15:15.305: INFO: Got endpoints: latency-svc-dtpxq [1.455908889s]
Feb 10 13:15:15.431: INFO: Created: latency-svc-v9wmg
Feb 10 13:15:15.433: INFO: Got endpoints: latency-svc-v9wmg [1.424639202s]
Feb 10 13:15:15.500: INFO: Created: latency-svc-mws6c
Feb 10 13:15:15.503: INFO: Got endpoints: latency-svc-mws6c [1.452865112s]
Feb 10 13:15:15.672: INFO: Created: latency-svc-28fgj
Feb 10 13:15:15.680: INFO: Got endpoints: latency-svc-28fgj [1.447518722s]
Feb 10 13:15:15.737: INFO: Created: latency-svc-5vqwv
Feb 10 13:15:15.738: INFO: Got endpoints: latency-svc-5vqwv [1.504432161s]
Feb 10 13:15:15.858: INFO: Created: latency-svc-v6db8
Feb 10 13:15:15.883: INFO: Got endpoints: latency-svc-v6db8 [1.43335605s]
Feb 10 13:15:15.919: INFO: Created: latency-svc-b5khw
Feb 10 13:15:15.992: INFO: Got endpoints: latency-svc-b5khw [1.410815557s]
Feb 10 13:15:16.031: INFO: Created: latency-svc-ts5f7
Feb 10 13:15:16.035: INFO: Got endpoints: latency-svc-ts5f7 [1.287913284s]
Feb 10 13:15:16.086: INFO: Created: latency-svc-sncx6
Feb 10 13:15:16.136: INFO: Got endpoints: latency-svc-sncx6 [1.347400042s]
Feb 10 13:15:16.199: INFO: Created: latency-svc-btwdd
Feb 10 13:15:16.202: INFO: Got endpoints: latency-svc-btwdd [1.36907565s]
Feb 10 13:15:16.322: INFO: Created: latency-svc-wk68h
Feb 10 13:15:16.334: INFO: Got endpoints: latency-svc-wk68h [1.383199941s]
Feb 10 13:15:16.407: INFO: Created: latency-svc-m4xwl
Feb 10 13:15:16.410: INFO: Got endpoints: latency-svc-m4xwl [1.400490361s]
Feb 10 13:15:16.609: INFO: Created: latency-svc-78jw4
Feb 10 13:15:16.717: INFO: Got endpoints: latency-svc-78jw4 [1.558431108s]
Feb 10 13:15:16.720: INFO: Created: latency-svc-9xkgw
Feb 10 13:15:16.764: INFO: Got endpoints: latency-svc-9xkgw [1.567592362s]
Feb 10 13:15:16.828: INFO: Created: latency-svc-p7zht
Feb 10 13:15:16.927: INFO: Got endpoints: latency-svc-p7zht [1.665949858s]
Feb 10 13:15:16.939: INFO: Created: latency-svc-pjwfj
Feb 10 13:15:16.979: INFO: Got endpoints: latency-svc-pjwfj [1.67419129s]
Feb 10 13:15:16.992: INFO: Created: latency-svc-jdj92
Feb 10 13:15:17.067: INFO: Got endpoints: latency-svc-jdj92 [1.634167343s]
Feb 10 13:15:17.101: INFO: Created: latency-svc-w2pds
Feb 10 13:15:17.137: INFO: Got endpoints: latency-svc-w2pds [1.63433547s]
Feb 10 13:15:17.225: INFO: Created: latency-svc-7fx2f
Feb 10 13:15:17.274: INFO: Got endpoints: latency-svc-7fx2f [1.593791608s]
Feb 10 13:15:17.285: INFO: Created: latency-svc-phlrh
Feb 10 13:15:17.289: INFO: Got endpoints: latency-svc-phlrh [1.55027157s]
Feb 10 13:15:17.428: INFO: Created: latency-svc-cjltx
Feb 10 13:15:17.432: INFO: Got endpoints: latency-svc-cjltx [1.549376528s]
Feb 10 13:15:17.485: INFO: Created: latency-svc-j7k8z
Feb 10 13:15:17.601: INFO: Created: latency-svc-bj2kv
Feb 10 13:15:17.615: INFO: Got endpoints: latency-svc-j7k8z [1.62296248s]
Feb 10 13:15:17.652: INFO: Got endpoints: latency-svc-bj2kv [1.617126462s]
Feb 10 13:15:17.698: INFO: Created: latency-svc-dh5sb
Feb 10 13:15:17.827: INFO: Got endpoints: latency-svc-dh5sb [1.691401051s]
Feb 10 13:15:17.833: INFO: Created: latency-svc-jgp9s
Feb 10 13:15:17.981: INFO: Created: latency-svc-pkcl7
Feb 10 13:15:17.982: INFO: Got endpoints: latency-svc-jgp9s [1.77943082s]
Feb 10 13:15:18.010: INFO: Got endpoints: latency-svc-pkcl7 [1.67602118s]
Feb 10 13:15:18.079: INFO: Created: latency-svc-fcdsq
Feb 10 13:15:18.704: INFO: Got endpoints: latency-svc-fcdsq [2.293867023s]
Feb 10 13:15:18.757: INFO: Created: latency-svc-8p2kf
Feb 10 13:15:18.761: INFO: Got endpoints: latency-svc-8p2kf [2.043463077s]
Feb 10 13:15:18.876: INFO: Created: latency-svc-kxfz2
Feb 10 13:15:18.883: INFO: Got endpoints: latency-svc-kxfz2 [2.118938175s]
Feb 10 13:15:18.931: INFO: Created: latency-svc-gpc8c
Feb 10 13:15:18.968: INFO: Got endpoints: latency-svc-gpc8c [2.040524308s]
Feb 10 13:15:19.091: INFO: Created: latency-svc-9pr4l
Feb 10 13:15:19.104: INFO: Got endpoints: latency-svc-9pr4l [2.124652912s]
Feb 10 13:15:19.316: INFO: Created: latency-svc-4xbll
Feb 10 13:15:19.370: INFO: Got endpoints: latency-svc-4xbll [2.302012674s]
Feb 10 13:15:19.370: INFO: Created: latency-svc-9hj7n
Feb 10 13:15:19.387: INFO: Got endpoints: latency-svc-9hj7n [2.24943496s]
Feb 10 13:15:19.387: INFO: Latencies: [129.389534ms 211.014383ms 328.371952ms 395.669437ms 546.024409ms 645.669181ms 689.443538ms 844.27896ms 899.310928ms 1.042097348s 1.20031772s 1.255486762s 1.287913284s 1.347400042s 1.36907565s 1.383199941s 1.400490361s 1.410815557s 1.414256591s 1.424639202s 1.43335605s 1.447518722s 1.452865112s 1.455908889s 1.475253371s 1.487916033s 1.495766385s 1.504432161s 1.513619459s 1.515512376s 1.51642941s 1.518977206s 1.520691028s 1.549376528s 1.55027157s 1.556152387s 1.557060092s 1.558431108s 1.559251377s 1.563203679s 1.563798047s 1.566881517s 1.567592362s 1.57177953s 1.576331745s 1.579865718s 1.581498096s 1.587002943s 1.588049268s 1.588813018s 1.59051002s 1.593791608s 1.599008092s 1.608082103s 1.612599009s 1.61484312s 1.615652187s 1.617126462s 1.618309699s 1.620628939s 1.621386386s 1.62296248s 1.623634958s 1.627231993s 1.633520678s 1.634167343s 1.63433547s 1.637629005s 1.637719766s 1.645951506s 1.646168528s 1.646196752s 1.647888912s 1.662013442s 1.665949858s 1.671054584s 1.67419129s 1.67602118s 1.691401051s 1.700444681s 1.702235369s 1.708804173s 1.723643639s 1.727930643s 1.73316061s 1.742757331s 1.74785714s 1.749365015s 1.760480014s 1.771777127s 1.772096133s 1.775541129s 1.77943082s 1.809858248s 1.814047059s 1.82090214s 1.824425032s 1.825481829s 1.827342847s 1.834806489s 1.837066333s 1.83871501s 1.840209572s 1.842640246s 1.845564263s 1.85025766s 1.855939047s 1.857233454s 1.87050441s 1.879417628s 1.879699567s 1.885392616s 1.90155587s 1.90359876s 1.907361284s 1.914645352s 1.925122355s 1.928570948s 1.928911478s 1.93975375s 1.943895211s 1.94933128s 1.957113095s 1.959611472s 1.961911078s 1.967162638s 1.972349709s 1.974095771s 1.987998458s 1.997146877s 2.002263891s 2.025295622s 2.040524308s 2.042761446s 2.043463077s 2.065755513s 2.0774794s 2.118938175s 2.120101566s 2.124652912s 2.138821509s 2.16391599s 2.183208452s 2.184478104s 2.204989532s 2.212218126s 2.24943496s 2.265637714s 2.26605264s 2.279902422s 2.281986052s 2.286072869s 2.293867023s 2.302012674s 2.307672092s 2.32385243s 2.329354672s 2.330307158s 2.333835863s 2.336546633s 2.339692395s 2.346647988s 2.347201802s 2.35502762s 2.356909206s 2.370397617s 2.381853246s 2.411610577s 2.414206405s 2.422778103s 2.428666016s 2.43245356s 2.443405928s 2.445484803s 2.461846344s 2.498494184s 2.504492254s 2.50949276s 2.50992013s 2.520112794s 2.520532113s 2.52468721s 2.544673297s 2.550616497s 2.557137341s 2.558081884s 2.567122693s 2.569968603s 2.587743558s 2.596165902s 2.597066584s 2.609631231s 2.638515963s 2.693546895s 2.713396219s 2.72747419s 2.729917624s 2.734779931s 2.749723112s 2.763532497s]
Feb 10 13:15:19.387: INFO: 50 %ile: 1.837066333s
Feb 10 13:15:19.387: INFO: 90 %ile: 2.520532113s
Feb 10 13:15:19.387: INFO: 99 %ile: 2.749723112s
Feb 10 13:15:19.387: INFO: Total sample count: 200
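The 50/90/99 %ile figures above are read off the sorted list of the 200 endpoint-creation latencies. A minimal sketch of that kind of summary, assuming the simple index rule idx = p * n // 100 (the e2e framework's exact rounding may differ):

```python
def percentile(sorted_samples, p):
    """Return the value at percentile p (0-100) of a pre-sorted list.

    Uses the index rule idx = p * n // 100, clamped to the last element;
    this rule is an assumption, not necessarily the framework's exact one.
    """
    idx = min(p * len(sorted_samples) // 100, len(sorted_samples) - 1)
    return sorted_samples[idx]

# A few of the latencies from the run above, in seconds, sorted ascending.
samples = sorted([0.129, 0.211, 1.288, 1.837, 2.078, 2.520, 2.694, 2.750])
print({p: percentile(samples, p) for p in (50, 90, 99)})
```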
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:15:19.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-5899" for this suite.
Feb 10 13:15:55.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:15:55.653: INFO: namespace svc-latency-5899 deletion completed in 36.259014336s

• [SLOW TEST:70.614 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:15:55.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-b83e23c1-de8c-45ee-9dc1-a72fb133dc20
STEP: Creating a pod to test consume secrets
Feb 10 13:15:55.766: INFO: Waiting up to 5m0s for pod "pod-secrets-6d38819f-41dd-4fb6-a232-bf7cfb4f1501" in namespace "secrets-6004" to be "success or failure"
Feb 10 13:15:55.783: INFO: Pod "pod-secrets-6d38819f-41dd-4fb6-a232-bf7cfb4f1501": Phase="Pending", Reason="", readiness=false. Elapsed: 17.205413ms
Feb 10 13:15:57.795: INFO: Pod "pod-secrets-6d38819f-41dd-4fb6-a232-bf7cfb4f1501": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029487286s
Feb 10 13:15:59.802: INFO: Pod "pod-secrets-6d38819f-41dd-4fb6-a232-bf7cfb4f1501": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036868234s
Feb 10 13:16:01.817: INFO: Pod "pod-secrets-6d38819f-41dd-4fb6-a232-bf7cfb4f1501": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051243504s
Feb 10 13:16:03.839: INFO: Pod "pod-secrets-6d38819f-41dd-4fb6-a232-bf7cfb4f1501": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.073083824s
STEP: Saw pod success
Feb 10 13:16:03.839: INFO: Pod "pod-secrets-6d38819f-41dd-4fb6-a232-bf7cfb4f1501" satisfied condition "success or failure"
Feb 10 13:16:03.853: INFO: Trying to get logs from node iruya-node pod pod-secrets-6d38819f-41dd-4fb6-a232-bf7cfb4f1501 container secret-env-test: 
STEP: delete the pod
Feb 10 13:16:03.944: INFO: Waiting for pod pod-secrets-6d38819f-41dd-4fb6-a232-bf7cfb4f1501 to disappear
Feb 10 13:16:04.082: INFO: Pod pod-secrets-6d38819f-41dd-4fb6-a232-bf7cfb4f1501 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:16:04.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6004" for this suite.
Feb 10 13:16:10.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:16:10.241: INFO: namespace secrets-6004 deletion completed in 6.150849715s

• [SLOW TEST:14.587 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:16:10.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 10 13:16:10.444: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:16:23.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8991" for this suite.
Feb 10 13:16:29.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:16:29.799: INFO: namespace init-container-8991 deletion completed in 6.16716423s

• [SLOW TEST:19.558 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:16:29.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-dd9d966d-bb13-42c8-b439-623c3d7c74f7
STEP: Creating a pod to test consume secrets
Feb 10 13:16:29.946: INFO: Waiting up to 5m0s for pod "pod-secrets-f8cf77b2-0adb-4379-b25c-eb942172f2e5" in namespace "secrets-1343" to be "success or failure"
Feb 10 13:16:30.022: INFO: Pod "pod-secrets-f8cf77b2-0adb-4379-b25c-eb942172f2e5": Phase="Pending", Reason="", readiness=false. Elapsed: 76.713017ms
Feb 10 13:16:32.030: INFO: Pod "pod-secrets-f8cf77b2-0adb-4379-b25c-eb942172f2e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084833365s
Feb 10 13:16:34.045: INFO: Pod "pod-secrets-f8cf77b2-0adb-4379-b25c-eb942172f2e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099636213s
Feb 10 13:16:36.055: INFO: Pod "pod-secrets-f8cf77b2-0adb-4379-b25c-eb942172f2e5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109622637s
Feb 10 13:16:38.065: INFO: Pod "pod-secrets-f8cf77b2-0adb-4379-b25c-eb942172f2e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.119318774s
STEP: Saw pod success
Feb 10 13:16:38.065: INFO: Pod "pod-secrets-f8cf77b2-0adb-4379-b25c-eb942172f2e5" satisfied condition "success or failure"
Feb 10 13:16:38.069: INFO: Trying to get logs from node iruya-node pod pod-secrets-f8cf77b2-0adb-4379-b25c-eb942172f2e5 container secret-volume-test: 
STEP: delete the pod
Feb 10 13:16:38.131: INFO: Waiting for pod pod-secrets-f8cf77b2-0adb-4379-b25c-eb942172f2e5 to disappear
Feb 10 13:16:38.227: INFO: Pod pod-secrets-f8cf77b2-0adb-4379-b25c-eb942172f2e5 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:16:38.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1343" for this suite.
Feb 10 13:16:44.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:16:44.390: INFO: namespace secrets-1343 deletion completed in 6.153872453s

• [SLOW TEST:14.591 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:16:44.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 10 13:16:44.555: INFO: Number of nodes with available pods: 0
Feb 10 13:16:44.555: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:16:45.574: INFO: Number of nodes with available pods: 0
Feb 10 13:16:45.574: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:16:46.592: INFO: Number of nodes with available pods: 0
Feb 10 13:16:46.592: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:16:47.592: INFO: Number of nodes with available pods: 0
Feb 10 13:16:47.593: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:16:48.578: INFO: Number of nodes with available pods: 0
Feb 10 13:16:48.578: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:16:49.567: INFO: Number of nodes with available pods: 0
Feb 10 13:16:49.567: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:16:51.743: INFO: Number of nodes with available pods: 0
Feb 10 13:16:51.743: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:16:53.735: INFO: Number of nodes with available pods: 0
Feb 10 13:16:53.736: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:16:54.585: INFO: Number of nodes with available pods: 1
Feb 10 13:16:54.585: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 10 13:16:55.572: INFO: Number of nodes with available pods: 1
Feb 10 13:16:55.572: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 10 13:16:56.587: INFO: Number of nodes with available pods: 2
Feb 10 13:16:56.587: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb 10 13:16:56.652: INFO: Number of nodes with available pods: 2
Feb 10 13:16:56.652: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7646, will wait for the garbage collector to delete the pods
Feb 10 13:16:57.794: INFO: Deleting DaemonSet.extensions daemon-set took: 26.347787ms
Feb 10 13:16:58.096: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.951449ms
Feb 10 13:17:06.612: INFO: Number of nodes with available pods: 0
Feb 10 13:17:06.612: INFO: Number of running nodes: 0, number of available pods: 0
Feb 10 13:17:06.663: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7646/daemonsets","resourceVersion":"23820903"},"items":null}

Feb 10 13:17:06.667: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7646/pods","resourceVersion":"23820903"},"items":null}

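The DaemonSet checks above poll roughly once per second, for up to a few minutes, until every node reports an available daemon pod (and again until the failed pod is deleted). A minimal sketch of that poll-until-condition loop; the helper name `wait_for` and its defaults are hypothetical, not the framework's API:

```python
import time

def wait_for(condition, timeout=300.0, interval=1.0):
    """Poll condition() every `interval` seconds until it returns True
    or `timeout` seconds elapse. Returns True on success, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Usage: wait_for(lambda: nodes_with_available_pods() == node_count)
# where nodes_with_available_pods/node_count stand in for API queries.
```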
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:17:06.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7646" for this suite.
Feb 10 13:17:14.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:17:14.855: INFO: namespace daemonsets-7646 deletion completed in 8.17409034s

• [SLOW TEST:30.464 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:17:14.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 10 13:17:14.982: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af701b64-dea9-4398-bb3e-2361b917be6f" in namespace "projected-4270" to be "success or failure"
Feb 10 13:17:14.992: INFO: Pod "downwardapi-volume-af701b64-dea9-4398-bb3e-2361b917be6f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.299934ms
Feb 10 13:17:16.998: INFO: Pod "downwardapi-volume-af701b64-dea9-4398-bb3e-2361b917be6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015636158s
Feb 10 13:17:19.011: INFO: Pod "downwardapi-volume-af701b64-dea9-4398-bb3e-2361b917be6f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028714647s
Feb 10 13:17:21.022: INFO: Pod "downwardapi-volume-af701b64-dea9-4398-bb3e-2361b917be6f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040210585s
Feb 10 13:17:23.028: INFO: Pod "downwardapi-volume-af701b64-dea9-4398-bb3e-2361b917be6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.045921032s
STEP: Saw pod success
Feb 10 13:17:23.028: INFO: Pod "downwardapi-volume-af701b64-dea9-4398-bb3e-2361b917be6f" satisfied condition "success or failure"
Feb 10 13:17:23.030: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-af701b64-dea9-4398-bb3e-2361b917be6f container client-container: 
STEP: delete the pod
Feb 10 13:17:23.144: INFO: Waiting for pod downwardapi-volume-af701b64-dea9-4398-bb3e-2361b917be6f to disappear
Feb 10 13:17:23.154: INFO: Pod downwardapi-volume-af701b64-dea9-4398-bb3e-2361b917be6f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:17:23.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4270" for this suite.
Feb 10 13:17:29.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:17:29.314: INFO: namespace projected-4270 deletion completed in 6.153314801s

• [SLOW TEST:14.458 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
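The spec above consumes the container's own memory request through a projected downward API volume. A hedged sketch of the kind of pod the test builds follows; the names, image, and request size are illustrative assumptions, not taken from the log:

```yaml
# Hypothetical sketch: a projected downwardAPI volume exposing the
# container's own memory request as a file the container can read.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  containers:
  - name: client-container
    image: busybox:1.29              # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi                 # assumed request; the test verifies this value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
              divisor: 1Mi           # report the value in mebibytes
  restartPolicy: Never
```

The "success or failure" condition in the log corresponds to such a pod running to completion (Phase=Succeeded) after printing the expected value.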
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:17:29.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 10 13:17:29.442: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb 10 13:17:29.463: INFO: Number of nodes with available pods: 0
Feb 10 13:17:29.463: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb 10 13:17:29.557: INFO: Number of nodes with available pods: 0
Feb 10 13:17:29.557: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:17:30.571: INFO: Number of nodes with available pods: 0
Feb 10 13:17:30.571: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:17:31.567: INFO: Number of nodes with available pods: 0
Feb 10 13:17:31.567: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:17:32.568: INFO: Number of nodes with available pods: 0
Feb 10 13:17:32.568: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:17:33.563: INFO: Number of nodes with available pods: 0
Feb 10 13:17:33.563: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:17:34.567: INFO: Number of nodes with available pods: 0
Feb 10 13:17:34.567: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:17:35.565: INFO: Number of nodes with available pods: 0
Feb 10 13:17:35.565: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:17:36.796: INFO: Number of nodes with available pods: 0
Feb 10 13:17:36.796: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:17:37.569: INFO: Number of nodes with available pods: 1
Feb 10 13:17:37.569: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb 10 13:17:37.623: INFO: Number of nodes with available pods: 1
Feb 10 13:17:37.623: INFO: Number of running nodes: 0, number of available pods: 1
Feb 10 13:17:38.648: INFO: Number of nodes with available pods: 0
Feb 10 13:17:38.648: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb 10 13:17:38.676: INFO: Number of nodes with available pods: 0
Feb 10 13:17:38.676: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:17:39.688: INFO: Number of nodes with available pods: 0
Feb 10 13:17:39.688: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:17:40.734: INFO: Number of nodes with available pods: 0
Feb 10 13:17:40.734: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:17:41.690: INFO: Number of nodes with available pods: 0
Feb 10 13:17:41.690: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:17:42.681: INFO: Number of nodes with available pods: 0
Feb 10 13:17:42.681: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:17:43.682: INFO: Number of nodes with available pods: 0
Feb 10 13:17:43.682: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:17:44.850: INFO: Number of nodes with available pods: 0
Feb 10 13:17:44.850: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:17:45.688: INFO: Number of nodes with available pods: 0
Feb 10 13:17:45.688: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:17:46.685: INFO: Number of nodes with available pods: 0
Feb 10 13:17:46.685: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:17:47.687: INFO: Number of nodes with available pods: 0
Feb 10 13:17:47.687: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:17:48.689: INFO: Number of nodes with available pods: 0
Feb 10 13:17:48.689: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:17:49.684: INFO: Number of nodes with available pods: 0
Feb 10 13:17:49.684: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:17:50.707: INFO: Number of nodes with available pods: 1
Feb 10 13:17:50.707: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8001, will wait for the garbage collector to delete the pods
Feb 10 13:17:50.793: INFO: Deleting DaemonSet.extensions daemon-set took: 20.279423ms
Feb 10 13:17:51.094: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.618943ms
Feb 10 13:18:06.605: INFO: Number of nodes with available pods: 0
Feb 10 13:18:06.605: INFO: Number of running nodes: 0, number of available pods: 0
Feb 10 13:18:06.622: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8001/daemonsets","resourceVersion":"23821079"},"items":null}

Feb 10 13:18:06.629: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8001/pods","resourceVersion":"23821079"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:18:06.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8001" for this suite.
Feb 10 13:18:12.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:18:13.104: INFO: namespace daemonsets-8001 deletion completed in 6.375759101s

• [SLOW TEST:43.790 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
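The DaemonSet spec above drives scheduling by flipping a node label against the DaemonSet's node selector (so pods are launched only once a node matches), then updates the selector and switches the update strategy to RollingUpdate. A hedged sketch of such a DaemonSet; the label key, app label, and image are illustrative, since the log only mentions the values "blue" and "green":

```yaml
# Hypothetical sketch of the "daemon-set" the test creates with a node selector.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set            # illustrative pod label
  updateStrategy:
    type: RollingUpdate          # strategy the test switches to mid-run
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: green             # assumed label key; the test relabels blue -> green
      containers:
      - name: app
        image: nginx:1.15        # assumed image
```

With no node carrying the matching label, the controller schedules zero pods, which is what the "Number of running nodes: 0" polling lines verify before and after each relabel.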
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:18:13.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-de2b0d80-9a23-4faf-ae34-7d53986c3e17
STEP: Creating a pod to test consume secrets
Feb 10 13:18:13.243: INFO: Waiting up to 5m0s for pod "pod-secrets-cd564250-1c15-4f6d-b60e-64413354d725" in namespace "secrets-4827" to be "success or failure"
Feb 10 13:18:13.254: INFO: Pod "pod-secrets-cd564250-1c15-4f6d-b60e-64413354d725": Phase="Pending", Reason="", readiness=false. Elapsed: 11.364084ms
Feb 10 13:18:15.313: INFO: Pod "pod-secrets-cd564250-1c15-4f6d-b60e-64413354d725": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069586648s
Feb 10 13:18:17.366: INFO: Pod "pod-secrets-cd564250-1c15-4f6d-b60e-64413354d725": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123362848s
Feb 10 13:18:19.373: INFO: Pod "pod-secrets-cd564250-1c15-4f6d-b60e-64413354d725": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130084628s
Feb 10 13:18:21.381: INFO: Pod "pod-secrets-cd564250-1c15-4f6d-b60e-64413354d725": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.138291861s
STEP: Saw pod success
Feb 10 13:18:21.381: INFO: Pod "pod-secrets-cd564250-1c15-4f6d-b60e-64413354d725" satisfied condition "success or failure"
Feb 10 13:18:21.385: INFO: Trying to get logs from node iruya-node pod pod-secrets-cd564250-1c15-4f6d-b60e-64413354d725 container secret-volume-test: 
STEP: delete the pod
Feb 10 13:18:21.467: INFO: Waiting for pod pod-secrets-cd564250-1c15-4f6d-b60e-64413354d725 to disappear
Feb 10 13:18:21.479: INFO: Pod pod-secrets-cd564250-1c15-4f6d-b60e-64413354d725 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:18:21.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4827" for this suite.
Feb 10 13:18:27.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:18:27.742: INFO: namespace secrets-4827 deletion completed in 6.257034737s

• [SLOW TEST:14.638 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
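The spec above mounts a Secret as a volume with an explicit defaultMode and checks the resulting file permissions (hence the [LinuxOnly] tag: the mode bits only have full meaning on Linux hosts). A hedged sketch of the kind of pod and mount involved; the names, image, key, and mode are illustrative:

```yaml
# Hypothetical sketch: a Secret volume mounted with a non-default file mode.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example        # illustrative name
spec:
  containers:
  - name: secret-volume-test
    image: busybox:1.29            # assumed image
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-example   # illustrative secret name
      defaultMode: 0400                 # mode under test, applied to each key's file
  restartPolicy: Never
```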
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:18:27.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3613
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-3613
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-3613
Feb 10 13:18:27.885: INFO: Found 0 stateful pods, waiting for 1
Feb 10 13:18:37.899: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
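The ordered-scaling guarantees exercised below come from the StatefulSet's OrderedReady pod management policy (the default), and the test then breaks readiness by moving the probed index.html aside via kubectl exec. A hedged sketch of such a StatefulSet; the image and probe details are assumptions inferred from the commands in the log, while the name, service, and selector labels match the STEP lines above:

```yaml
# Hypothetical sketch of stateful set "ss": ordered scaling halts whenever
# a pod's readiness probe fails (e.g. after index.html is moved to /tmp).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test                  # headless service created in the BeforeEach
  replicas: 1
  podManagementPolicy: OrderedReady  # default; enforces one-at-a-time scale up/down
  selector:
    matchLabels:
      foo: bar
      baz: blah                      # selector from the watcher STEP
  template:
    metadata:
      labels:
        foo: bar
        baz: blah
    spec:
      containers:
      - name: webserver
        image: nginx:1.15            # assumed image serving /usr/share/nginx/html
        readinessProbe:              # fails once index.html is moved away
          httpGet:
            path: /index.html
            port: 80
```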
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb 10 13:18:37.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3613 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 10 13:18:38.432: INFO: stderr: "I0210 13:18:38.091343     215 log.go:172] (0xc000a42420) (0xc0009c05a0) Create stream\nI0210 13:18:38.091395     215 log.go:172] (0xc000a42420) (0xc0009c05a0) Stream added, broadcasting: 1\nI0210 13:18:38.101502     215 log.go:172] (0xc000a42420) Reply frame received for 1\nI0210 13:18:38.101540     215 log.go:172] (0xc000a42420) (0xc0009c06e0) Create stream\nI0210 13:18:38.101551     215 log.go:172] (0xc000a42420) (0xc0009c06e0) Stream added, broadcasting: 3\nI0210 13:18:38.104585     215 log.go:172] (0xc000a42420) Reply frame received for 3\nI0210 13:18:38.104695     215 log.go:172] (0xc000a42420) (0xc0009c0780) Create stream\nI0210 13:18:38.104708     215 log.go:172] (0xc000a42420) (0xc0009c0780) Stream added, broadcasting: 5\nI0210 13:18:38.107288     215 log.go:172] (0xc000a42420) Reply frame received for 5\nI0210 13:18:38.260935     215 log.go:172] (0xc000a42420) Data frame received for 5\nI0210 13:18:38.260980     215 log.go:172] (0xc0009c0780) (5) Data frame handling\nI0210 13:18:38.260994     215 log.go:172] (0xc0009c0780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0210 13:18:38.317041     215 log.go:172] (0xc000a42420) Data frame received for 3\nI0210 13:18:38.317126     215 log.go:172] (0xc0009c06e0) (3) Data frame handling\nI0210 13:18:38.317160     215 log.go:172] (0xc0009c06e0) (3) Data frame sent\nI0210 13:18:38.424066     215 log.go:172] (0xc000a42420) (0xc0009c06e0) Stream removed, broadcasting: 3\nI0210 13:18:38.424272     215 log.go:172] (0xc000a42420) Data frame received for 1\nI0210 13:18:38.424397     215 log.go:172] (0xc000a42420) (0xc0009c0780) Stream removed, broadcasting: 5\nI0210 13:18:38.424476     215 log.go:172] (0xc0009c05a0) (1) Data frame handling\nI0210 13:18:38.424499     215 log.go:172] (0xc0009c05a0) (1) Data frame sent\nI0210 13:18:38.424509     215 log.go:172] (0xc000a42420) (0xc0009c05a0) Stream removed, broadcasting: 1\nI0210 13:18:38.424536     215 log.go:172] 
(0xc000a42420) Go away received\nI0210 13:18:38.425259     215 log.go:172] (0xc000a42420) (0xc0009c05a0) Stream removed, broadcasting: 1\nI0210 13:18:38.425304     215 log.go:172] (0xc000a42420) (0xc0009c06e0) Stream removed, broadcasting: 3\nI0210 13:18:38.425323     215 log.go:172] (0xc000a42420) (0xc0009c0780) Stream removed, broadcasting: 5\n"
Feb 10 13:18:38.432: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 10 13:18:38.432: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 10 13:18:38.443: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 10 13:18:48.458: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 10 13:18:48.458: INFO: Waiting for statefulset status.replicas updated to 0
Feb 10 13:18:48.585: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999687s
Feb 10 13:18:49.597: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.888892798s
Feb 10 13:18:50.615: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.876804238s
Feb 10 13:18:51.629: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.858621926s
Feb 10 13:18:52.640: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.844780733s
Feb 10 13:18:53.691: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.833044363s
Feb 10 13:18:54.898: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.782403315s
Feb 10 13:18:55.976: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.575033469s
Feb 10 13:18:56.986: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.497431657s
Feb 10 13:18:57.999: INFO: Verifying statefulset ss doesn't scale past 1 for another 488.266502ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace statefulset-3613
Feb 10 13:18:59.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3613 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 10 13:18:59.547: INFO: stderr: "I0210 13:18:59.224993     234 log.go:172] (0xc0003aa630) (0xc0007aabe0) Create stream\nI0210 13:18:59.225090     234 log.go:172] (0xc0003aa630) (0xc0007aabe0) Stream added, broadcasting: 1\nI0210 13:18:59.231673     234 log.go:172] (0xc0003aa630) Reply frame received for 1\nI0210 13:18:59.231730     234 log.go:172] (0xc0003aa630) (0xc000972000) Create stream\nI0210 13:18:59.231760     234 log.go:172] (0xc0003aa630) (0xc000972000) Stream added, broadcasting: 3\nI0210 13:18:59.234815     234 log.go:172] (0xc0003aa630) Reply frame received for 3\nI0210 13:18:59.234898     234 log.go:172] (0xc0003aa630) (0xc0008a6000) Create stream\nI0210 13:18:59.234920     234 log.go:172] (0xc0003aa630) (0xc0008a6000) Stream added, broadcasting: 5\nI0210 13:18:59.237612     234 log.go:172] (0xc0003aa630) Reply frame received for 5\nI0210 13:18:59.377548     234 log.go:172] (0xc0003aa630) Data frame received for 5\nI0210 13:18:59.377660     234 log.go:172] (0xc0008a6000) (5) Data frame handling\nI0210 13:18:59.377686     234 log.go:172] (0xc0008a6000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0210 13:18:59.378723     234 log.go:172] (0xc0003aa630) Data frame received for 3\nI0210 13:18:59.378759     234 log.go:172] (0xc000972000) (3) Data frame handling\nI0210 13:18:59.378772     234 log.go:172] (0xc000972000) (3) Data frame sent\nI0210 13:18:59.538583     234 log.go:172] (0xc0003aa630) Data frame received for 1\nI0210 13:18:59.538690     234 log.go:172] (0xc0003aa630) (0xc000972000) Stream removed, broadcasting: 3\nI0210 13:18:59.538751     234 log.go:172] (0xc0007aabe0) (1) Data frame handling\nI0210 13:18:59.538765     234 log.go:172] (0xc0007aabe0) (1) Data frame sent\nI0210 13:18:59.538805     234 log.go:172] (0xc0003aa630) (0xc0008a6000) Stream removed, broadcasting: 5\nI0210 13:18:59.538907     234 log.go:172] (0xc0003aa630) (0xc0007aabe0) Stream removed, broadcasting: 1\nI0210 13:18:59.538926     234 log.go:172] 
(0xc0003aa630) Go away received\nI0210 13:18:59.539385     234 log.go:172] (0xc0003aa630) (0xc0007aabe0) Stream removed, broadcasting: 1\nI0210 13:18:59.539394     234 log.go:172] (0xc0003aa630) (0xc000972000) Stream removed, broadcasting: 3\nI0210 13:18:59.539398     234 log.go:172] (0xc0003aa630) (0xc0008a6000) Stream removed, broadcasting: 5\n"
Feb 10 13:18:59.547: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 10 13:18:59.547: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 10 13:18:59.554: INFO: Found 1 stateful pods, waiting for 3
Feb 10 13:19:09.567: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 10 13:19:09.567: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 10 13:19:09.567: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 10 13:19:19.573: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 10 13:19:19.573: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 10 13:19:19.573: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb 10 13:19:19.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3613 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 10 13:19:20.326: INFO: stderr: "I0210 13:19:19.802531     255 log.go:172] (0xc0009c0420) (0xc000932640) Create stream\nI0210 13:19:19.802801     255 log.go:172] (0xc0009c0420) (0xc000932640) Stream added, broadcasting: 1\nI0210 13:19:19.811956     255 log.go:172] (0xc0009c0420) Reply frame received for 1\nI0210 13:19:19.812021     255 log.go:172] (0xc0009c0420) (0xc00065e140) Create stream\nI0210 13:19:19.812031     255 log.go:172] (0xc0009c0420) (0xc00065e140) Stream added, broadcasting: 3\nI0210 13:19:19.818278     255 log.go:172] (0xc0009c0420) Reply frame received for 3\nI0210 13:19:19.818301     255 log.go:172] (0xc0009c0420) (0xc0007ac000) Create stream\nI0210 13:19:19.818313     255 log.go:172] (0xc0009c0420) (0xc0007ac000) Stream added, broadcasting: 5\nI0210 13:19:19.820464     255 log.go:172] (0xc0009c0420) Reply frame received for 5\nI0210 13:19:20.091880     255 log.go:172] (0xc0009c0420) Data frame received for 3\nI0210 13:19:20.092199     255 log.go:172] (0xc00065e140) (3) Data frame handling\nI0210 13:19:20.092246     255 log.go:172] (0xc00065e140) (3) Data frame sent\nI0210 13:19:20.092299     255 log.go:172] (0xc0009c0420) Data frame received for 5\nI0210 13:19:20.092310     255 log.go:172] (0xc0007ac000) (5) Data frame handling\nI0210 13:19:20.092327     255 log.go:172] (0xc0007ac000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0210 13:19:20.319470     255 log.go:172] (0xc0009c0420) (0xc00065e140) Stream removed, broadcasting: 3\nI0210 13:19:20.319598     255 log.go:172] (0xc0009c0420) Data frame received for 1\nI0210 13:19:20.319708     255 log.go:172] (0xc0009c0420) (0xc0007ac000) Stream removed, broadcasting: 5\nI0210 13:19:20.319751     255 log.go:172] (0xc000932640) (1) Data frame handling\nI0210 13:19:20.319792     255 log.go:172] (0xc000932640) (1) Data frame sent\nI0210 13:19:20.319804     255 log.go:172] (0xc0009c0420) (0xc000932640) Stream removed, broadcasting: 1\nI0210 13:19:20.319816     255 log.go:172] 
(0xc0009c0420) Go away received\nI0210 13:19:20.320532     255 log.go:172] (0xc0009c0420) (0xc000932640) Stream removed, broadcasting: 1\nI0210 13:19:20.320544     255 log.go:172] (0xc0009c0420) (0xc00065e140) Stream removed, broadcasting: 3\nI0210 13:19:20.320550     255 log.go:172] (0xc0009c0420) (0xc0007ac000) Stream removed, broadcasting: 5\n"
Feb 10 13:19:20.326: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 10 13:19:20.326: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 10 13:19:20.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3613 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 10 13:19:20.729: INFO: stderr: "I0210 13:19:20.513056     272 log.go:172] (0xc00090c0b0) (0xc000888640) Create stream\nI0210 13:19:20.513135     272 log.go:172] (0xc00090c0b0) (0xc000888640) Stream added, broadcasting: 1\nI0210 13:19:20.516493     272 log.go:172] (0xc00090c0b0) Reply frame received for 1\nI0210 13:19:20.516510     272 log.go:172] (0xc00090c0b0) (0xc000858000) Create stream\nI0210 13:19:20.516516     272 log.go:172] (0xc00090c0b0) (0xc000858000) Stream added, broadcasting: 3\nI0210 13:19:20.517523     272 log.go:172] (0xc00090c0b0) Reply frame received for 3\nI0210 13:19:20.517547     272 log.go:172] (0xc00090c0b0) (0xc0005b6320) Create stream\nI0210 13:19:20.517560     272 log.go:172] (0xc00090c0b0) (0xc0005b6320) Stream added, broadcasting: 5\nI0210 13:19:20.518676     272 log.go:172] (0xc00090c0b0) Reply frame received for 5\nI0210 13:19:20.601437     272 log.go:172] (0xc00090c0b0) Data frame received for 5\nI0210 13:19:20.601480     272 log.go:172] (0xc0005b6320) (5) Data frame handling\nI0210 13:19:20.601501     272 log.go:172] (0xc0005b6320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0210 13:19:20.633674     272 log.go:172] (0xc00090c0b0) Data frame received for 3\nI0210 13:19:20.633725     272 log.go:172] (0xc000858000) (3) Data frame handling\nI0210 13:19:20.633733     272 log.go:172] (0xc000858000) (3) Data frame sent\nI0210 13:19:20.724299     272 log.go:172] (0xc00090c0b0) Data frame received for 1\nI0210 13:19:20.724535     272 log.go:172] (0xc000888640) (1) Data frame handling\nI0210 13:19:20.724586     272 log.go:172] (0xc000888640) (1) Data frame sent\nI0210 13:19:20.725312     272 log.go:172] (0xc00090c0b0) (0xc0005b6320) Stream removed, broadcasting: 5\nI0210 13:19:20.725361     272 log.go:172] (0xc00090c0b0) (0xc000888640) Stream removed, broadcasting: 1\nI0210 13:19:20.725584     272 log.go:172] (0xc00090c0b0) (0xc000858000) Stream removed, broadcasting: 3\nI0210 13:19:20.725690     272 log.go:172] 
(0xc00090c0b0) Go away received\nI0210 13:19:20.725901     272 log.go:172] (0xc00090c0b0) (0xc000888640) Stream removed, broadcasting: 1\nI0210 13:19:20.726026     272 log.go:172] (0xc00090c0b0) (0xc000858000) Stream removed, broadcasting: 3\nI0210 13:19:20.726043     272 log.go:172] (0xc00090c0b0) (0xc0005b6320) Stream removed, broadcasting: 5\n"
Feb 10 13:19:20.729: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 10 13:19:20.729: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 10 13:19:20.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3613 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 10 13:19:21.282: INFO: stderr: "I0210 13:19:20.983433     289 log.go:172] (0xc000660420) (0xc00076e640) Create stream\nI0210 13:19:20.983491     289 log.go:172] (0xc000660420) (0xc00076e640) Stream added, broadcasting: 1\nI0210 13:19:20.989204     289 log.go:172] (0xc000660420) Reply frame received for 1\nI0210 13:19:20.989245     289 log.go:172] (0xc000660420) (0xc000860000) Create stream\nI0210 13:19:20.989261     289 log.go:172] (0xc000660420) (0xc000860000) Stream added, broadcasting: 3\nI0210 13:19:20.990536     289 log.go:172] (0xc000660420) Reply frame received for 3\nI0210 13:19:20.990587     289 log.go:172] (0xc000660420) (0xc0005d6140) Create stream\nI0210 13:19:20.990602     289 log.go:172] (0xc000660420) (0xc0005d6140) Stream added, broadcasting: 5\nI0210 13:19:20.991671     289 log.go:172] (0xc000660420) Reply frame received for 5\nI0210 13:19:21.103302     289 log.go:172] (0xc000660420) Data frame received for 5\nI0210 13:19:21.103337     289 log.go:172] (0xc0005d6140) (5) Data frame handling\nI0210 13:19:21.103358     289 log.go:172] (0xc0005d6140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0210 13:19:21.152456     289 log.go:172] (0xc000660420) Data frame received for 3\nI0210 13:19:21.152480     289 log.go:172] (0xc000860000) (3) Data frame handling\nI0210 13:19:21.152520     289 log.go:172] (0xc000860000) (3) Data frame sent\nI0210 13:19:21.275801     289 log.go:172] (0xc000660420) (0xc000860000) Stream removed, broadcasting: 3\nI0210 13:19:21.276152     289 log.go:172] (0xc000660420) Data frame received for 1\nI0210 13:19:21.276171     289 log.go:172] (0xc00076e640) (1) Data frame handling\nI0210 13:19:21.276180     289 log.go:172] (0xc00076e640) (1) Data frame sent\nI0210 13:19:21.276366     289 log.go:172] (0xc000660420) (0xc00076e640) Stream removed, broadcasting: 1\nI0210 13:19:21.276749     289 log.go:172] (0xc000660420) (0xc0005d6140) Stream removed, broadcasting: 5\nI0210 13:19:21.276792     289 log.go:172] 
(0xc000660420) Go away received\nI0210 13:19:21.276915     289 log.go:172] (0xc000660420) (0xc00076e640) Stream removed, broadcasting: 1\nI0210 13:19:21.276968     289 log.go:172] (0xc000660420) (0xc000860000) Stream removed, broadcasting: 3\nI0210 13:19:21.276986     289 log.go:172] (0xc000660420) (0xc0005d6140) Stream removed, broadcasting: 5\n"
Feb 10 13:19:21.282: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 10 13:19:21.282: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 10 13:19:21.282: INFO: Waiting for statefulset status.replicas updated to 0
Feb 10 13:19:21.290: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Feb 10 13:19:31.306: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 10 13:19:31.306: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 10 13:19:31.306: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 10 13:19:31.396: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999551s
Feb 10 13:19:32.408: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.923515631s
Feb 10 13:19:33.433: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.911303911s
Feb 10 13:19:34.447: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.886188595s
Feb 10 13:19:35.863: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.872173103s
Feb 10 13:19:36.876: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.455487083s
Feb 10 13:19:37.889: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.442519865s
Feb 10 13:19:38.908: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.429921757s
Feb 10 13:19:39.926: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.411167196s
Feb 10 13:19:40.939: INFO: Verifying statefulset ss doesn't scale past 3 for another 393.341895ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-3613
Feb 10 13:19:41.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3613 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 10 13:19:42.439: INFO: stderr: "I0210 13:19:42.140393     305 log.go:172] (0xc0008b0370) (0xc000814640) Create stream\nI0210 13:19:42.140460     305 log.go:172] (0xc0008b0370) (0xc000814640) Stream added, broadcasting: 1\nI0210 13:19:42.147226     305 log.go:172] (0xc0008b0370) Reply frame received for 1\nI0210 13:19:42.147276     305 log.go:172] (0xc0008b0370) (0xc000966000) Create stream\nI0210 13:19:42.147285     305 log.go:172] (0xc0008b0370) (0xc000966000) Stream added, broadcasting: 3\nI0210 13:19:42.148534     305 log.go:172] (0xc0008b0370) Reply frame received for 3\nI0210 13:19:42.148557     305 log.go:172] (0xc0008b0370) (0xc00051c1e0) Create stream\nI0210 13:19:42.148564     305 log.go:172] (0xc0008b0370) (0xc00051c1e0) Stream added, broadcasting: 5\nI0210 13:19:42.149480     305 log.go:172] (0xc0008b0370) Reply frame received for 5\nI0210 13:19:42.277135     305 log.go:172] (0xc0008b0370) Data frame received for 5\nI0210 13:19:42.277224     305 log.go:172] (0xc00051c1e0) (5) Data frame handling\nI0210 13:19:42.277241     305 log.go:172] (0xc00051c1e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0210 13:19:42.277267     305 log.go:172] (0xc0008b0370) Data frame received for 3\nI0210 13:19:42.277275     305 log.go:172] (0xc000966000) (3) Data frame handling\nI0210 13:19:42.277283     305 log.go:172] (0xc000966000) (3) Data frame sent\nI0210 13:19:42.428015     305 log.go:172] (0xc0008b0370) Data frame received for 1\nI0210 13:19:42.428168     305 log.go:172] (0xc000814640) (1) Data frame handling\nI0210 13:19:42.428190     305 log.go:172] (0xc000814640) (1) Data frame sent\nI0210 13:19:42.429826     305 log.go:172] (0xc0008b0370) (0xc000814640) Stream removed, broadcasting: 1\nI0210 13:19:42.429991     305 log.go:172] (0xc0008b0370) (0xc000966000) Stream removed, broadcasting: 3\nI0210 13:19:42.430384     305 log.go:172] (0xc0008b0370) (0xc00051c1e0) Stream removed, broadcasting: 5\nI0210 13:19:42.430400     305 log.go:172] 
(0xc0008b0370) Go away received\nI0210 13:19:42.430730     305 log.go:172] (0xc0008b0370) (0xc000814640) Stream removed, broadcasting: 1\nI0210 13:19:42.430751     305 log.go:172] (0xc0008b0370) (0xc000966000) Stream removed, broadcasting: 3\nI0210 13:19:42.430755     305 log.go:172] (0xc0008b0370) (0xc00051c1e0) Stream removed, broadcasting: 5\n"
Feb 10 13:19:42.439: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 10 13:19:42.439: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 10 13:19:42.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3613 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 10 13:19:42.874: INFO: stderr: "I0210 13:19:42.644738     323 log.go:172] (0xc00095a370) (0xc000600780) Create stream\nI0210 13:19:42.644972     323 log.go:172] (0xc00095a370) (0xc000600780) Stream added, broadcasting: 1\nI0210 13:19:42.649027     323 log.go:172] (0xc00095a370) Reply frame received for 1\nI0210 13:19:42.649059     323 log.go:172] (0xc00095a370) (0xc00084c000) Create stream\nI0210 13:19:42.649103     323 log.go:172] (0xc00095a370) (0xc00084c000) Stream added, broadcasting: 3\nI0210 13:19:42.650290     323 log.go:172] (0xc00095a370) Reply frame received for 3\nI0210 13:19:42.650319     323 log.go:172] (0xc00095a370) (0xc000600820) Create stream\nI0210 13:19:42.650331     323 log.go:172] (0xc00095a370) (0xc000600820) Stream added, broadcasting: 5\nI0210 13:19:42.651189     323 log.go:172] (0xc00095a370) Reply frame received for 5\nI0210 13:19:42.761474     323 log.go:172] (0xc00095a370) Data frame received for 3\nI0210 13:19:42.761855     323 log.go:172] (0xc00084c000) (3) Data frame handling\nI0210 13:19:42.761910     323 log.go:172] (0xc00084c000) (3) Data frame sent\nI0210 13:19:42.762465     323 log.go:172] (0xc00095a370) Data frame received for 5\nI0210 13:19:42.762511     323 log.go:172] (0xc000600820) (5) Data frame handling\nI0210 13:19:42.762592     323 log.go:172] (0xc000600820) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0210 13:19:42.869110     323 log.go:172] (0xc00095a370) Data frame received for 1\nI0210 13:19:42.869248     323 log.go:172] (0xc00095a370) (0xc00084c000) Stream removed, broadcasting: 3\nI0210 13:19:42.869278     323 log.go:172] (0xc00095a370) (0xc000600820) Stream removed, broadcasting: 5\nI0210 13:19:42.869346     323 log.go:172] (0xc000600780) (1) Data frame handling\nI0210 13:19:42.869368     323 log.go:172] (0xc000600780) (1) Data frame sent\nI0210 13:19:42.869377     323 log.go:172] (0xc00095a370) (0xc000600780) Stream removed, broadcasting: 1\nI0210 13:19:42.869388     323 log.go:172] 
(0xc00095a370) Go away received\nI0210 13:19:42.869673     323 log.go:172] (0xc00095a370) (0xc000600780) Stream removed, broadcasting: 1\nI0210 13:19:42.869689     323 log.go:172] (0xc00095a370) (0xc00084c000) Stream removed, broadcasting: 3\nI0210 13:19:42.869699     323 log.go:172] (0xc00095a370) (0xc000600820) Stream removed, broadcasting: 5\n"
Feb 10 13:19:42.874: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 10 13:19:42.874: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 10 13:19:42.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3613 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 10 13:19:43.609: INFO: stderr: "I0210 13:19:43.099856     341 log.go:172] (0xc0008f60b0) (0xc00085a640) Create stream\nI0210 13:19:43.100071     341 log.go:172] (0xc0008f60b0) (0xc00085a640) Stream added, broadcasting: 1\nI0210 13:19:43.106237     341 log.go:172] (0xc0008f60b0) Reply frame received for 1\nI0210 13:19:43.106425     341 log.go:172] (0xc0008f60b0) (0xc00099e000) Create stream\nI0210 13:19:43.106441     341 log.go:172] (0xc0008f60b0) (0xc00099e000) Stream added, broadcasting: 3\nI0210 13:19:43.108292     341 log.go:172] (0xc0008f60b0) Reply frame received for 3\nI0210 13:19:43.108326     341 log.go:172] (0xc0008f60b0) (0xc0001f4320) Create stream\nI0210 13:19:43.108336     341 log.go:172] (0xc0008f60b0) (0xc0001f4320) Stream added, broadcasting: 5\nI0210 13:19:43.109293     341 log.go:172] (0xc0008f60b0) Reply frame received for 5\nI0210 13:19:43.468757     341 log.go:172] (0xc0008f60b0) Data frame received for 5\nI0210 13:19:43.468826     341 log.go:172] (0xc0001f4320) (5) Data frame handling\nI0210 13:19:43.468856     341 log.go:172] (0xc0001f4320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0210 13:19:43.468885     341 log.go:172] (0xc0008f60b0) Data frame received for 3\nI0210 13:19:43.468894     341 log.go:172] (0xc00099e000) (3) Data frame handling\nI0210 13:19:43.468912     341 log.go:172] (0xc00099e000) (3) Data frame sent\nI0210 13:19:43.602510     341 log.go:172] (0xc0008f60b0) (0xc00099e000) Stream removed, broadcasting: 3\nI0210 13:19:43.602657     341 log.go:172] (0xc0008f60b0) Data frame received for 1\nI0210 13:19:43.602671     341 log.go:172] (0xc00085a640) (1) Data frame handling\nI0210 13:19:43.602683     341 log.go:172] (0xc00085a640) (1) Data frame sent\nI0210 13:19:43.602690     341 log.go:172] (0xc0008f60b0) (0xc00085a640) Stream removed, broadcasting: 1\nI0210 13:19:43.602998     341 log.go:172] (0xc0008f60b0) (0xc0001f4320) Stream removed, broadcasting: 5\nI0210 13:19:43.603036     341 log.go:172] 
(0xc0008f60b0) Go away received\nI0210 13:19:43.603270     341 log.go:172] (0xc0008f60b0) (0xc00085a640) Stream removed, broadcasting: 1\nI0210 13:19:43.603286     341 log.go:172] (0xc0008f60b0) (0xc00099e000) Stream removed, broadcasting: 3\nI0210 13:19:43.603292     341 log.go:172] (0xc0008f60b0) (0xc0001f4320) Stream removed, broadcasting: 5\n"
Feb 10 13:19:43.609: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 10 13:19:43.609: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 10 13:19:43.609: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 10 13:20:13.678: INFO: Deleting all statefulset in ns statefulset-3613
Feb 10 13:20:13.685: INFO: Scaling statefulset ss to 0
Feb 10 13:20:13.705: INFO: Waiting for statefulset status.replicas updated to 0
Feb 10 13:20:13.711: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:20:13.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3613" for this suite.
Feb 10 13:20:19.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:20:19.978: INFO: namespace statefulset-3613 deletion completed in 6.227379453s

• [SLOW TEST:112.236 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:20:19.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Feb 10 13:20:20.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2436'
Feb 10 13:20:22.284: INFO: stderr: ""
Feb 10 13:20:22.285: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 10 13:20:22.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2436'
Feb 10 13:20:22.428: INFO: stderr: ""
Feb 10 13:20:22.428: INFO: stdout: "update-demo-nautilus-gjh5l update-demo-nautilus-ht5j4 "
Feb 10 13:20:22.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gjh5l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2436'
Feb 10 13:20:22.531: INFO: stderr: ""
Feb 10 13:20:22.531: INFO: stdout: ""
Feb 10 13:20:22.531: INFO: update-demo-nautilus-gjh5l is created but not running
Feb 10 13:20:27.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2436'
Feb 10 13:20:27.685: INFO: stderr: ""
Feb 10 13:20:27.685: INFO: stdout: "update-demo-nautilus-gjh5l update-demo-nautilus-ht5j4 "
Feb 10 13:20:27.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gjh5l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2436'
Feb 10 13:20:28.888: INFO: stderr: ""
Feb 10 13:20:28.888: INFO: stdout: ""
Feb 10 13:20:28.888: INFO: update-demo-nautilus-gjh5l is created but not running
Feb 10 13:20:33.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2436'
Feb 10 13:20:34.030: INFO: stderr: ""
Feb 10 13:20:34.030: INFO: stdout: "update-demo-nautilus-gjh5l update-demo-nautilus-ht5j4 "
Feb 10 13:20:34.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gjh5l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2436'
Feb 10 13:20:34.121: INFO: stderr: ""
Feb 10 13:20:34.121: INFO: stdout: "true"
Feb 10 13:20:34.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gjh5l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2436'
Feb 10 13:20:34.197: INFO: stderr: ""
Feb 10 13:20:34.197: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 10 13:20:34.197: INFO: validating pod update-demo-nautilus-gjh5l
Feb 10 13:20:34.206: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 10 13:20:34.206: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 10 13:20:34.206: INFO: update-demo-nautilus-gjh5l is verified up and running
Feb 10 13:20:34.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ht5j4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2436'
Feb 10 13:20:34.296: INFO: stderr: ""
Feb 10 13:20:34.296: INFO: stdout: "true"
Feb 10 13:20:34.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ht5j4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2436'
Feb 10 13:20:34.382: INFO: stderr: ""
Feb 10 13:20:34.382: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 10 13:20:34.382: INFO: validating pod update-demo-nautilus-ht5j4
Feb 10 13:20:34.436: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 10 13:20:34.436: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 10 13:20:34.436: INFO: update-demo-nautilus-ht5j4 is verified up and running
STEP: rolling-update to new replication controller
Feb 10 13:20:34.439: INFO: scanned /root for discovery docs: 
Feb 10 13:20:34.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-2436'
Feb 10 13:21:05.116: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 10 13:21:05.116: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 10 13:21:05.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2436'
Feb 10 13:21:05.257: INFO: stderr: ""
Feb 10 13:21:05.257: INFO: stdout: "update-demo-kitten-9fq7t update-demo-kitten-x54mn "
Feb 10 13:21:05.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-9fq7t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2436'
Feb 10 13:21:05.390: INFO: stderr: ""
Feb 10 13:21:05.390: INFO: stdout: "true"
Feb 10 13:21:05.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-9fq7t -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2436'
Feb 10 13:21:05.489: INFO: stderr: ""
Feb 10 13:21:05.489: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 10 13:21:05.489: INFO: validating pod update-demo-kitten-9fq7t
Feb 10 13:21:05.517: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 10 13:21:05.517: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 10 13:21:05.517: INFO: update-demo-kitten-9fq7t is verified up and running
Feb 10 13:21:05.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-x54mn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2436'
Feb 10 13:21:05.636: INFO: stderr: ""
Feb 10 13:21:05.636: INFO: stdout: "true"
Feb 10 13:21:05.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-x54mn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2436'
Feb 10 13:21:05.753: INFO: stderr: ""
Feb 10 13:21:05.753: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 10 13:21:05.753: INFO: validating pod update-demo-kitten-x54mn
Feb 10 13:21:05.770: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 10 13:21:05.770: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 10 13:21:05.770: INFO: update-demo-kitten-x54mn is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:21:05.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2436" for this suite.
Feb 10 13:21:31.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:21:31.961: INFO: namespace kubectl-2436 deletion completed in 26.145953646s

• [SLOW TEST:71.982 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:21:31.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 10 13:21:32.105: INFO: Waiting up to 5m0s for pod "downwardapi-volume-03a661f9-f78d-4d33-9d1e-4c8e051ee5e8" in namespace "downward-api-6782" to be "success or failure"
Feb 10 13:21:32.127: INFO: Pod "downwardapi-volume-03a661f9-f78d-4d33-9d1e-4c8e051ee5e8": Phase="Pending", Reason="", readiness=false. Elapsed: 21.803398ms
Feb 10 13:21:34.137: INFO: Pod "downwardapi-volume-03a661f9-f78d-4d33-9d1e-4c8e051ee5e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031891561s
Feb 10 13:21:36.144: INFO: Pod "downwardapi-volume-03a661f9-f78d-4d33-9d1e-4c8e051ee5e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038710085s
Feb 10 13:21:38.157: INFO: Pod "downwardapi-volume-03a661f9-f78d-4d33-9d1e-4c8e051ee5e8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052232262s
Feb 10 13:21:40.166: INFO: Pod "downwardapi-volume-03a661f9-f78d-4d33-9d1e-4c8e051ee5e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060585212s
STEP: Saw pod success
Feb 10 13:21:40.166: INFO: Pod "downwardapi-volume-03a661f9-f78d-4d33-9d1e-4c8e051ee5e8" satisfied condition "success or failure"
Feb 10 13:21:40.170: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-03a661f9-f78d-4d33-9d1e-4c8e051ee5e8 container client-container: 
STEP: delete the pod
Feb 10 13:21:40.228: INFO: Waiting for pod downwardapi-volume-03a661f9-f78d-4d33-9d1e-4c8e051ee5e8 to disappear
Feb 10 13:21:40.233: INFO: Pod downwardapi-volume-03a661f9-f78d-4d33-9d1e-4c8e051ee5e8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:21:40.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6782" for this suite.
Feb 10 13:21:46.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:21:47.147: INFO: namespace downward-api-6782 deletion completed in 6.910408065s

• [SLOW TEST:15.186 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:21:47.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-3d7c8ba6-97af-4e90-978a-a477aafc60f9 in namespace container-probe-9113
Feb 10 13:21:55.402: INFO: Started pod busybox-3d7c8ba6-97af-4e90-978a-a477aafc60f9 in namespace container-probe-9113
STEP: checking the pod's current state and verifying that restartCount is present
Feb 10 13:21:55.405: INFO: Initial restart count of pod busybox-3d7c8ba6-97af-4e90-978a-a477aafc60f9 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:25:57.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9113" for this suite.
Feb 10 13:26:03.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:26:03.316: INFO: namespace container-probe-9113 deletion completed in 6.169357422s

• [SLOW TEST:256.168 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:26:03.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 10 13:26:03.431: INFO: Waiting up to 5m0s for pod "pod-c2fb3b49-f035-4960-a8ac-30736bdf5679" in namespace "emptydir-2329" to be "success or failure"
Feb 10 13:26:03.439: INFO: Pod "pod-c2fb3b49-f035-4960-a8ac-30736bdf5679": Phase="Pending", Reason="", readiness=false. Elapsed: 8.179271ms
Feb 10 13:26:05.447: INFO: Pod "pod-c2fb3b49-f035-4960-a8ac-30736bdf5679": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015976262s
Feb 10 13:26:07.455: INFO: Pod "pod-c2fb3b49-f035-4960-a8ac-30736bdf5679": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024177169s
Feb 10 13:26:09.462: INFO: Pod "pod-c2fb3b49-f035-4960-a8ac-30736bdf5679": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030355782s
Feb 10 13:26:11.469: INFO: Pod "pod-c2fb3b49-f035-4960-a8ac-30736bdf5679": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.038175707s
STEP: Saw pod success
Feb 10 13:26:11.469: INFO: Pod "pod-c2fb3b49-f035-4960-a8ac-30736bdf5679" satisfied condition "success or failure"
Feb 10 13:26:11.474: INFO: Trying to get logs from node iruya-node pod pod-c2fb3b49-f035-4960-a8ac-30736bdf5679 container test-container: 
STEP: delete the pod
Feb 10 13:26:11.620: INFO: Waiting for pod pod-c2fb3b49-f035-4960-a8ac-30736bdf5679 to disappear
Feb 10 13:26:11.710: INFO: Pod pod-c2fb3b49-f035-4960-a8ac-30736bdf5679 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:26:11.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2329" for this suite.
Feb 10 13:26:17.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:26:17.951: INFO: namespace emptydir-2329 deletion completed in 6.223913898s

• [SLOW TEST:14.635 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:26:17.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4469.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4469.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4469.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4469.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4469.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4469.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 10 13:26:30.137: INFO: Unable to read wheezy_udp@PodARecord from pod dns-4469/dns-test-aab13d34-4663-4072-9b04-2716853ec5df: the server could not find the requested resource (get pods dns-test-aab13d34-4663-4072-9b04-2716853ec5df)
Feb 10 13:26:30.140: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-4469/dns-test-aab13d34-4663-4072-9b04-2716853ec5df: the server could not find the requested resource (get pods dns-test-aab13d34-4663-4072-9b04-2716853ec5df)
Feb 10 13:26:30.146: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-4469.svc.cluster.local from pod dns-4469/dns-test-aab13d34-4663-4072-9b04-2716853ec5df: the server could not find the requested resource (get pods dns-test-aab13d34-4663-4072-9b04-2716853ec5df)
Feb 10 13:26:30.154: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-4469/dns-test-aab13d34-4663-4072-9b04-2716853ec5df: the server could not find the requested resource (get pods dns-test-aab13d34-4663-4072-9b04-2716853ec5df)
Feb 10 13:26:30.160: INFO: Unable to read jessie_udp@PodARecord from pod dns-4469/dns-test-aab13d34-4663-4072-9b04-2716853ec5df: the server could not find the requested resource (get pods dns-test-aab13d34-4663-4072-9b04-2716853ec5df)
Feb 10 13:26:30.165: INFO: Unable to read jessie_tcp@PodARecord from pod dns-4469/dns-test-aab13d34-4663-4072-9b04-2716853ec5df: the server could not find the requested resource (get pods dns-test-aab13d34-4663-4072-9b04-2716853ec5df)
Feb 10 13:26:30.165: INFO: Lookups using dns-4469/dns-test-aab13d34-4663-4072-9b04-2716853ec5df failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-4469.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 10 13:26:35.263: INFO: DNS probes using dns-4469/dns-test-aab13d34-4663-4072-9b04-2716853ec5df succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:26:35.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4469" for this suite.
Feb 10 13:26:41.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:26:41.727: INFO: namespace dns-4469 deletion completed in 6.393799015s

• [SLOW TEST:23.776 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:26:41.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Feb 10 13:26:41.818: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:26:57.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-940" for this suite.
Feb 10 13:27:03.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:27:03.695: INFO: namespace pods-940 deletion completed in 6.181393792s

• [SLOW TEST:21.967 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
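[Editor's note] The "submitted and removed" spec above creates a pod, sets up a watch, deletes the pod gracefully, and verifies both creation and deletion events were observed. A minimal illustrative sketch of that verification pattern (not part of the e2e framework; the event stream is mocked rather than a live API watch, and all names are hypothetical):

```python
# Illustrative sketch of the watch-based verification used by the
# "should be submitted and removed" spec: observe an ADDED event for the
# pod, then a DELETED event after graceful deletion.
# The stream below is a mocked list; a real test consumes a watch on the
# API server instead.

def observe_pod_lifecycle(events, pod_name):
    """Return (creation_observed, deletion_observed) for pod_name."""
    created = deleted = False
    for event_type, name in events:
        if name != pod_name:
            continue
        if event_type == "ADDED":
            created = True
        elif event_type == "DELETED" and created:
            deleted = True
    return created, deleted

# Mocked watch stream, analogous to the STEP lines in the log above.
stream = [
    ("ADDED", "pod-submit-remove"),
    ("MODIFIED", "pod-submit-remove"),  # graceful-termination notice
    ("DELETED", "pod-submit-remove"),
]
print(observe_pod_lifecycle(stream, "pod-submit-remove"))  # (True, True)
```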
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:27:03.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:28:05.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4856" for this suite.
Feb 10 13:28:27.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:28:27.496: INFO: namespace container-probe-4856 deletion completed in 22.178727601s

• [SLOW TEST:83.801 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:28:27.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Feb 10 13:28:40.440: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:28:41.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-339" for this suite.
Feb 10 13:29:05.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:29:05.792: INFO: namespace replicaset-339 deletion completed in 24.159324739s

• [SLOW TEST:38.295 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
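[Editor's note] The ReplicaSet spec above exercises controller adoption and release: an orphan pod whose labels match the selector gains an owner reference, and a pod whose labels stop matching is released. A hedged sketch with the controller reduced to a pure function (names and the owner string are illustrative, not the real controller API):

```python
# Illustrative sketch of adopt/release: a ReplicaSet adopts an orphan pod
# whose labels match its selector and releases a pod once its labels stop
# matching. Real controllers patch ownerReferences via the API server.

def reconcile_ownership(selector, pods):
    """Set or clear each pod's owner based on whether its labels match selector."""
    for pod in pods:
        matches = all(pod["labels"].get(k) == v for k, v in selector.items())
        if matches and pod["owner"] is None:
            pod["owner"] = "replicaset/pod-adoption-release"  # adopt orphan
        elif not matches and pod["owner"] is not None:
            pod["owner"] = None                               # release pod
    return pods

selector = {"name": "pod-adoption-release"}
pod = {"labels": {"name": "pod-adoption-release"}, "owner": None}
reconcile_ownership(selector, [pod])
print(pod["owner"])   # adopted: replicaset/pod-adoption-release
pod["labels"]["name"] = "pod-adoption-release-changed"
reconcile_ownership(selector, [pod])
print(pod["owner"])   # released: None
```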
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:29:05.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 10 13:29:14.512: INFO: Successfully updated pod "labelsupdate1f826aa7-aef4-4d78-8821-98a61f0b54a4"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:29:16.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7164" for this suite.
Feb 10 13:29:38.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:29:38.747: INFO: namespace downward-api-7164 deletion completed in 22.130550018s

• [SLOW TEST:32.954 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:29:38.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 10 13:29:45.970: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:29:46.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5154" for this suite.
Feb 10 13:29:52.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:29:52.682: INFO: namespace container-runtime-5154 deletion completed in 6.655862564s

• [SLOW TEST:13.936 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:29:52.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-bbf3fe0d-c066-4f52-b94b-0826f98040c4
STEP: Creating a pod to test consume configMaps
Feb 10 13:29:52.866: INFO: Waiting up to 5m0s for pod "pod-configmaps-a9accfd7-35ca-489e-afe3-0e2c40fd1181" in namespace "configmap-636" to be "success or failure"
Feb 10 13:29:52.883: INFO: Pod "pod-configmaps-a9accfd7-35ca-489e-afe3-0e2c40fd1181": Phase="Pending", Reason="", readiness=false. Elapsed: 16.956725ms
Feb 10 13:29:55.373: INFO: Pod "pod-configmaps-a9accfd7-35ca-489e-afe3-0e2c40fd1181": Phase="Pending", Reason="", readiness=false. Elapsed: 2.506658501s
Feb 10 13:29:57.383: INFO: Pod "pod-configmaps-a9accfd7-35ca-489e-afe3-0e2c40fd1181": Phase="Pending", Reason="", readiness=false. Elapsed: 4.516703593s
Feb 10 13:29:59.398: INFO: Pod "pod-configmaps-a9accfd7-35ca-489e-afe3-0e2c40fd1181": Phase="Pending", Reason="", readiness=false. Elapsed: 6.531897291s
Feb 10 13:30:01.406: INFO: Pod "pod-configmaps-a9accfd7-35ca-489e-afe3-0e2c40fd1181": Phase="Pending", Reason="", readiness=false. Elapsed: 8.53995335s
Feb 10 13:30:03.416: INFO: Pod "pod-configmaps-a9accfd7-35ca-489e-afe3-0e2c40fd1181": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.549922057s
STEP: Saw pod success
Feb 10 13:30:03.416: INFO: Pod "pod-configmaps-a9accfd7-35ca-489e-afe3-0e2c40fd1181" satisfied condition "success or failure"
Feb 10 13:30:03.420: INFO: Trying to get logs from node iruya-node pod pod-configmaps-a9accfd7-35ca-489e-afe3-0e2c40fd1181 container configmap-volume-test: 
STEP: delete the pod
Feb 10 13:30:03.547: INFO: Waiting for pod pod-configmaps-a9accfd7-35ca-489e-afe3-0e2c40fd1181 to disappear
Feb 10 13:30:03.557: INFO: Pod pod-configmaps-a9accfd7-35ca-489e-afe3-0e2c40fd1181 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:30:03.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-636" for this suite.
Feb 10 13:30:09.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:30:09.751: INFO: namespace configmap-636 deletion completed in 6.18739717s

• [SLOW TEST:17.068 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
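[Editor's note] The repeated `Waiting up to 5m0s for pod ... to be "success or failure"` / `Phase="Pending" ... Elapsed: ...` lines above come from a poll-until-terminal-phase loop. A minimal sketch of that pattern, with the API call replaced by a stand-in function and a mocked phase sequence:

```python
import itertools
import time

# Sketch of the condition-polling pattern visible in the log: poll the pod
# phase until it reaches Succeeded or Failed, or the timeout elapses.
# get_phase is a stand-in for an API GET on the pod.

def wait_for_success_or_failure(get_phase, timeout=300.0, interval=0.01):
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError("pod never reached a terminal phase")

# Mocked sequence matching the log: several Pending polls, then Succeeded.
phases = itertools.chain(["Pending"] * 5, itertools.repeat("Succeeded"))
print(wait_for_success_or_failure(lambda: next(phases)))  # Succeeded
```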
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:30:09.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-84fc7f53-6d05-4c2a-ae02-8ee94db819cc
STEP: Creating a pod to test consume configMaps
Feb 10 13:30:09.894: INFO: Waiting up to 5m0s for pod "pod-configmaps-e89b414e-3790-45f3-9d9e-75422d519bf7" in namespace "configmap-840" to be "success or failure"
Feb 10 13:30:09.920: INFO: Pod "pod-configmaps-e89b414e-3790-45f3-9d9e-75422d519bf7": Phase="Pending", Reason="", readiness=false. Elapsed: 26.085929ms
Feb 10 13:30:11.964: INFO: Pod "pod-configmaps-e89b414e-3790-45f3-9d9e-75422d519bf7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070774039s
Feb 10 13:30:13.973: INFO: Pod "pod-configmaps-e89b414e-3790-45f3-9d9e-75422d519bf7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079798757s
Feb 10 13:30:15.986: INFO: Pod "pod-configmaps-e89b414e-3790-45f3-9d9e-75422d519bf7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092178434s
Feb 10 13:30:17.998: INFO: Pod "pod-configmaps-e89b414e-3790-45f3-9d9e-75422d519bf7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.104421173s
STEP: Saw pod success
Feb 10 13:30:17.998: INFO: Pod "pod-configmaps-e89b414e-3790-45f3-9d9e-75422d519bf7" satisfied condition "success or failure"
Feb 10 13:30:18.001: INFO: Trying to get logs from node iruya-node pod pod-configmaps-e89b414e-3790-45f3-9d9e-75422d519bf7 container configmap-volume-test: 
STEP: delete the pod
Feb 10 13:30:18.076: INFO: Waiting for pod pod-configmaps-e89b414e-3790-45f3-9d9e-75422d519bf7 to disappear
Feb 10 13:30:18.096: INFO: Pod pod-configmaps-e89b414e-3790-45f3-9d9e-75422d519bf7 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:30:18.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-840" for this suite.
Feb 10 13:30:24.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:30:24.351: INFO: namespace configmap-840 deletion completed in 6.249911657s

• [SLOW TEST:14.599 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:30:24.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 10 13:30:24.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb 10 13:30:24.608: INFO: stderr: ""
Feb 10 13:30:24.608: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:30:24.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2444" for this suite.
Feb 10 13:30:30.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:30:30.750: INFO: namespace kubectl-2444 deletion completed in 6.135186843s

• [SLOW TEST:6.398 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:30:30.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 10 13:30:31.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-51'
Feb 10 13:30:34.222: INFO: stderr: ""
Feb 10 13:30:34.222: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Feb 10 13:30:44.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-51 -o json'
Feb 10 13:30:44.390: INFO: stderr: ""
Feb 10 13:30:44.390: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-10T13:30:34Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-51\",\n        \"resourceVersion\": \"23822768\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-51/pods/e2e-test-nginx-pod\",\n        \"uid\": \"0a01f7a9-56fc-4f63-b707-1a33e4d93731\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-ncltj\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-ncltj\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-ncltj\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-10T13:30:34Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-10T13:30:42Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-10T13:30:42Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-10T13:30:34Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://891bd0977f2001396aca13b1cc34cabbf9fd9d8cd6ccd52d92b0ffd4aec644e7\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-02-10T13:30:41Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-10T13:30:34Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb 10 13:30:44.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-51'
Feb 10 13:30:44.750: INFO: stderr: ""
Feb 10 13:30:44.750: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Feb 10 13:30:44.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-51'
Feb 10 13:30:51.050: INFO: stderr: ""
Feb 10 13:30:51.050: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:30:51.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-51" for this suite.
Feb 10 13:30:57.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:30:57.188: INFO: namespace kubectl-51 deletion completed in 6.124815723s

• [SLOW TEST:26.438 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
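[Editor's note] The Kubectl-replace spec above fetches the pod as JSON (`kubectl get pod ... -o json`), swaps the container image, and pipes the result to `kubectl replace -f - --namespace=kubectl-51`. A sketch of the image swap step on a trimmed stand-in for the pod JSON logged above (the `with_image` helper is hypothetical, for illustration only):

```python
import copy

# Sketch of the edit performed between `kubectl get ... -o json` and
# `kubectl replace -f -`: replace the image of the named container.
# The manifest is a trimmed stand-in for the full pod JSON in the log.

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "e2e-test-nginx-pod", "namespace": "kubectl-51"},
    "spec": {
        "containers": [
            {"name": "e2e-test-nginx-pod",
             "image": "docker.io/library/nginx:1.14-alpine"}
        ]
    },
}

def with_image(manifest, container, image):
    """Return a copy of manifest with the named container's image replaced."""
    out = copy.deepcopy(manifest)
    for c in out["spec"]["containers"]:
        if c["name"] == container:
            c["image"] = image
    return out

replaced = with_image(pod, "e2e-test-nginx-pod",
                      "docker.io/library/busybox:1.29")
print(replaced["spec"]["containers"][0]["image"])
# docker.io/library/busybox:1.29
```

In the log, the edited JSON is then fed back through `kubectl replace -f -`, which the spec verifies by checking the pod's image.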
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:30:57.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 10 13:30:57.386: INFO: Number of nodes with available pods: 0
Feb 10 13:30:57.386: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:30:59.229: INFO: Number of nodes with available pods: 0
Feb 10 13:30:59.229: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:30:59.677: INFO: Number of nodes with available pods: 0
Feb 10 13:30:59.677: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:31:00.407: INFO: Number of nodes with available pods: 0
Feb 10 13:31:00.407: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:31:01.438: INFO: Number of nodes with available pods: 0
Feb 10 13:31:01.438: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:31:02.399: INFO: Number of nodes with available pods: 0
Feb 10 13:31:02.399: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:31:04.562: INFO: Number of nodes with available pods: 0
Feb 10 13:31:04.562: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:31:05.401: INFO: Number of nodes with available pods: 0
Feb 10 13:31:05.401: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:31:06.478: INFO: Number of nodes with available pods: 0
Feb 10 13:31:06.478: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:31:07.401: INFO: Number of nodes with available pods: 0
Feb 10 13:31:07.401: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:31:08.423: INFO: Number of nodes with available pods: 2
Feb 10 13:31:08.423: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb 10 13:31:08.472: INFO: Number of nodes with available pods: 1
Feb 10 13:31:08.472: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:31:09.484: INFO: Number of nodes with available pods: 1
Feb 10 13:31:09.484: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:31:10.492: INFO: Number of nodes with available pods: 1
Feb 10 13:31:10.492: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:31:11.488: INFO: Number of nodes with available pods: 1
Feb 10 13:31:11.488: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:31:12.497: INFO: Number of nodes with available pods: 1
Feb 10 13:31:12.497: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:31:13.490: INFO: Number of nodes with available pods: 1
Feb 10 13:31:13.490: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:31:14.489: INFO: Number of nodes with available pods: 1
Feb 10 13:31:14.489: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:31:15.485: INFO: Number of nodes with available pods: 1
Feb 10 13:31:15.485: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:31:16.542: INFO: Number of nodes with available pods: 1
Feb 10 13:31:16.542: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:31:17.484: INFO: Number of nodes with available pods: 1
Feb 10 13:31:17.484: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:31:18.513: INFO: Number of nodes with available pods: 1
Feb 10 13:31:18.513: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:31:19.488: INFO: Number of nodes with available pods: 1
Feb 10 13:31:19.488: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:31:20.493: INFO: Number of nodes with available pods: 1
Feb 10 13:31:20.493: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:31:21.485: INFO: Number of nodes with available pods: 2
Feb 10 13:31:21.485: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1626, will wait for the garbage collector to delete the pods
Feb 10 13:31:21.548: INFO: Deleting DaemonSet.extensions daemon-set took: 8.340022ms
Feb 10 13:31:21.849: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.629238ms
Feb 10 13:31:38.106: INFO: Number of nodes with available pods: 0
Feb 10 13:31:38.106: INFO: Number of running nodes: 0, number of available pods: 0
Feb 10 13:31:38.110: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1626/daemonsets","resourceVersion":"23822918"},"items":null}

Feb 10 13:31:38.114: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1626/pods","resourceVersion":"23822918"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:31:38.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1626" for this suite.
Feb 10 13:31:44.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:31:44.265: INFO: namespace daemonsets-1626 deletion completed in 6.108205284s

• [SLOW TEST:47.077 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
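The repeated "Number of nodes with available pods" lines above come from a poll loop: after the daemon pod is stopped, the test retries until every running node again reports an available pod. A minimal sketch of that wait-until-condition pattern (the function name and signature are illustrative, not the e2e framework's API):

```python
import time

def wait_for_daemon_pods(get_available_count, desired, timeout=300.0, interval=1.0):
    """Poll until the number of nodes with an available daemon pod
    reaches the desired node count; raise TimeoutError otherwise."""
    deadline = time.monotonic() + timeout
    while True:
        available = get_available_count()
        if available == desired:
            return available
        if time.monotonic() >= deadline:
            raise TimeoutError(
                f"only {available}/{desired} nodes had an available daemon pod")
        time.sleep(interval)

# Simulated cluster: the revived pod becomes available on the third poll,
# matching the 1, 1, ..., 2 progression in the log above.
observations = iter([1, 1, 2])
assert wait_for_daemon_pods(lambda: next(observations), desired=2, interval=0) == 2
```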
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:31:44.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 10 13:31:44.509: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8f205a04-d8c3-4959-a6e1-7b5e7f321ebe" in namespace "downward-api-126" to be "success or failure"
Feb 10 13:31:44.517: INFO: Pod "downwardapi-volume-8f205a04-d8c3-4959-a6e1-7b5e7f321ebe": Phase="Pending", Reason="", readiness=false. Elapsed: 8.547844ms
Feb 10 13:31:46.530: INFO: Pod "downwardapi-volume-8f205a04-d8c3-4959-a6e1-7b5e7f321ebe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021185985s
Feb 10 13:31:48.546: INFO: Pod "downwardapi-volume-8f205a04-d8c3-4959-a6e1-7b5e7f321ebe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037052333s
Feb 10 13:31:50.560: INFO: Pod "downwardapi-volume-8f205a04-d8c3-4959-a6e1-7b5e7f321ebe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051432759s
Feb 10 13:31:52.569: INFO: Pod "downwardapi-volume-8f205a04-d8c3-4959-a6e1-7b5e7f321ebe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060065126s
STEP: Saw pod success
Feb 10 13:31:52.569: INFO: Pod "downwardapi-volume-8f205a04-d8c3-4959-a6e1-7b5e7f321ebe" satisfied condition "success or failure"
Feb 10 13:31:52.573: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8f205a04-d8c3-4959-a6e1-7b5e7f321ebe container client-container: 
STEP: delete the pod
Feb 10 13:31:52.742: INFO: Waiting for pod downwardapi-volume-8f205a04-d8c3-4959-a6e1-7b5e7f321ebe to disappear
Feb 10 13:31:52.751: INFO: Pod downwardapi-volume-8f205a04-d8c3-4959-a6e1-7b5e7f321ebe no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:31:52.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-126" for this suite.
Feb 10 13:31:58.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:31:58.975: INFO: namespace downward-api-126 deletion completed in 6.220188531s

• [SLOW TEST:14.709 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
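The downward-API volume test above mounts the container's CPU request into a file inside the pod. The value written is the resource quantity divided by the fieldRef divisor and rounded up; this sketch mirrors that conversion for CPU in millicpu units (the quantities are illustrative):

```python
import math

def downward_api_cpu_value(request_millicpu, divisor_millicpu):
    """Downward API formatting for CPU: quantity divided by the
    divisor, rounded up to a whole integer."""
    return math.ceil(request_millicpu / divisor_millicpu)

# A 250m CPU request exposed with divisor "1m" reads back as 250;
# with divisor "1" (1000m) it rounds up to 1.
assert downward_api_cpu_value(250, 1) == 250
assert downward_api_cpu_value(250, 1000) == 1
```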
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:31:58.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-852597aa-7c11-4bfd-8c3f-d18067ad18f1
STEP: Creating a pod to test consume secrets
Feb 10 13:31:59.117: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ff16c50c-0026-49ba-8506-e3af53e71e38" in namespace "projected-40" to be "success or failure"
Feb 10 13:31:59.141: INFO: Pod "pod-projected-secrets-ff16c50c-0026-49ba-8506-e3af53e71e38": Phase="Pending", Reason="", readiness=false. Elapsed: 23.914422ms
Feb 10 13:32:01.154: INFO: Pod "pod-projected-secrets-ff16c50c-0026-49ba-8506-e3af53e71e38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037227426s
Feb 10 13:32:03.163: INFO: Pod "pod-projected-secrets-ff16c50c-0026-49ba-8506-e3af53e71e38": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045535404s
Feb 10 13:32:05.173: INFO: Pod "pod-projected-secrets-ff16c50c-0026-49ba-8506-e3af53e71e38": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055773352s
Feb 10 13:32:07.178: INFO: Pod "pod-projected-secrets-ff16c50c-0026-49ba-8506-e3af53e71e38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061381689s
STEP: Saw pod success
Feb 10 13:32:07.179: INFO: Pod "pod-projected-secrets-ff16c50c-0026-49ba-8506-e3af53e71e38" satisfied condition "success or failure"
Feb 10 13:32:07.181: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-ff16c50c-0026-49ba-8506-e3af53e71e38 container projected-secret-volume-test: 
STEP: delete the pod
Feb 10 13:32:07.260: INFO: Waiting for pod pod-projected-secrets-ff16c50c-0026-49ba-8506-e3af53e71e38 to disappear
Feb 10 13:32:07.268: INFO: Pod pod-projected-secrets-ff16c50c-0026-49ba-8506-e3af53e71e38 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:32:07.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-40" for this suite.
Feb 10 13:32:13.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:32:13.577: INFO: namespace projected-40 deletion completed in 6.297457487s

• [SLOW TEST:14.601 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
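The non-root defaultMode/fsGroup variant above checks the file mode the kubelet applies to the projected secret file. A sketch of the permission-bit comparison the test container effectively performs (the 0o440 mode is illustrative of a typical defaultMode value):

```python
def secret_mode_matches(stat_mode, default_mode=0o440):
    """Compare only the permission bits of a projected secret file
    against the volume's defaultMode, ignoring file-type bits."""
    return (stat_mode & 0o777) == default_mode

# e.g. a stat() st_mode of 0o100440 (regular file, mode 440) passes
assert secret_mode_matches(0o100440)
assert not secret_mode_matches(0o100644)
```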
S
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:32:13.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-a17ca1fe-696e-48f7-9ade-86723496c8d1 in namespace container-probe-8446
Feb 10 13:32:21.829: INFO: Started pod liveness-a17ca1fe-696e-48f7-9ade-86723496c8d1 in namespace container-probe-8446
STEP: checking the pod's current state and verifying that restartCount is present
Feb 10 13:32:21.835: INFO: Initial restart count of pod liveness-a17ca1fe-696e-48f7-9ade-86723496c8d1 is 0
Feb 10 13:32:36.298: INFO: Restart count of pod container-probe-8446/liveness-a17ca1fe-696e-48f7-9ade-86723496c8d1 is now 1 (14.462616458s elapsed)
Feb 10 13:33:00.568: INFO: Restart count of pod container-probe-8446/liveness-a17ca1fe-696e-48f7-9ade-86723496c8d1 is now 2 (38.732504108s elapsed)
Feb 10 13:33:18.695: INFO: Restart count of pod container-probe-8446/liveness-a17ca1fe-696e-48f7-9ade-86723496c8d1 is now 3 (56.859985878s elapsed)
Feb 10 13:33:36.805: INFO: Restart count of pod container-probe-8446/liveness-a17ca1fe-696e-48f7-9ade-86723496c8d1 is now 4 (1m14.969670134s elapsed)
Feb 10 13:34:49.167: INFO: Restart count of pod container-probe-8446/liveness-a17ca1fe-696e-48f7-9ade-86723496c8d1 is now 5 (2m27.331422194s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:34:49.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8446" for this suite.
Feb 10 13:34:55.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:34:55.351: INFO: namespace container-probe-8446 deletion completed in 6.143866303s

• [SLOW TEST:161.773 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
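The restart-count observations above (0 through 5) are checked for monotonicity: each sample must be greater than or equal to the previous one, since the kubelet never decreases a container's restartCount. A minimal check over the sampled counts:

```python
def restarts_monotonic(counts):
    """True if the observed restartCount samples never decrease."""
    return all(a <= b for a, b in zip(counts, counts[1:]))

# The samples recorded in the log above
assert restarts_monotonic([0, 1, 2, 3, 4, 5])
# A decrease would fail the conformance check
assert not restarts_monotonic([0, 2, 1])
```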
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:34:55.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 10 13:34:55.475: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2245b1a0-c07e-4e00-ad2a-43eccb1b253e" in namespace "projected-6784" to be "success or failure"
Feb 10 13:34:55.479: INFO: Pod "downwardapi-volume-2245b1a0-c07e-4e00-ad2a-43eccb1b253e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.457561ms
Feb 10 13:34:57.488: INFO: Pod "downwardapi-volume-2245b1a0-c07e-4e00-ad2a-43eccb1b253e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013271629s
Feb 10 13:34:59.607: INFO: Pod "downwardapi-volume-2245b1a0-c07e-4e00-ad2a-43eccb1b253e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13229092s
Feb 10 13:35:01.615: INFO: Pod "downwardapi-volume-2245b1a0-c07e-4e00-ad2a-43eccb1b253e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.14047043s
Feb 10 13:35:03.629: INFO: Pod "downwardapi-volume-2245b1a0-c07e-4e00-ad2a-43eccb1b253e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.154833029s
STEP: Saw pod success
Feb 10 13:35:03.629: INFO: Pod "downwardapi-volume-2245b1a0-c07e-4e00-ad2a-43eccb1b253e" satisfied condition "success or failure"
Feb 10 13:35:03.635: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-2245b1a0-c07e-4e00-ad2a-43eccb1b253e container client-container: 
STEP: delete the pod
Feb 10 13:35:03.739: INFO: Waiting for pod downwardapi-volume-2245b1a0-c07e-4e00-ad2a-43eccb1b253e to disappear
Feb 10 13:35:03.811: INFO: Pod downwardapi-volume-2245b1a0-c07e-4e00-ad2a-43eccb1b253e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:35:03.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6784" for this suite.
Feb 10 13:35:09.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:35:12.041: INFO: namespace projected-6784 deletion completed in 8.221988001s

• [SLOW TEST:16.690 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
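The memory-limit variant works like the CPU-request test above: the limit in bytes is divided by the divisor and rounded up before being written into the volume. Sketch with illustrative quantities:

```python
import math

def downward_api_memory_value(limit_bytes, divisor_bytes=1):
    """Downward API formatting for memory: resource limit divided
    by the divisor, rounded up."""
    return math.ceil(limit_bytes / divisor_bytes)

MI = 1024 * 1024
# a 64Mi limit with divisor "1Mi" reads back as 64
assert downward_api_memory_value(64 * MI, MI) == 64
# with the default divisor "1" it is the raw byte count
assert downward_api_memory_value(64 * MI) == 67108864
```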
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:35:12.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb 10 13:35:32.239: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2201 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 10 13:35:32.239: INFO: >>> kubeConfig: /root/.kube/config
I0210 13:35:32.330208       8 log.go:172] (0xc000d20630) (0xc0029e5360) Create stream
I0210 13:35:32.330249       8 log.go:172] (0xc000d20630) (0xc0029e5360) Stream added, broadcasting: 1
I0210 13:35:32.339534       8 log.go:172] (0xc000d20630) Reply frame received for 1
I0210 13:35:32.339612       8 log.go:172] (0xc000d20630) (0xc0029e54a0) Create stream
I0210 13:35:32.339630       8 log.go:172] (0xc000d20630) (0xc0029e54a0) Stream added, broadcasting: 3
I0210 13:35:32.341527       8 log.go:172] (0xc000d20630) Reply frame received for 3
I0210 13:35:32.341562       8 log.go:172] (0xc000d20630) (0xc0029e5540) Create stream
I0210 13:35:32.341575       8 log.go:172] (0xc000d20630) (0xc0029e5540) Stream added, broadcasting: 5
I0210 13:35:32.344858       8 log.go:172] (0xc000d20630) Reply frame received for 5
I0210 13:35:32.489689       8 log.go:172] (0xc000d20630) Data frame received for 3
I0210 13:35:32.489746       8 log.go:172] (0xc0029e54a0) (3) Data frame handling
I0210 13:35:32.489781       8 log.go:172] (0xc0029e54a0) (3) Data frame sent
I0210 13:35:32.873208       8 log.go:172] (0xc000d20630) (0xc0029e54a0) Stream removed, broadcasting: 3
I0210 13:35:32.873329       8 log.go:172] (0xc000d20630) Data frame received for 1
I0210 13:35:32.873343       8 log.go:172] (0xc0029e5360) (1) Data frame handling
I0210 13:35:32.873389       8 log.go:172] (0xc0029e5360) (1) Data frame sent
I0210 13:35:32.873414       8 log.go:172] (0xc000d20630) (0xc0029e5360) Stream removed, broadcasting: 1
I0210 13:35:32.873460       8 log.go:172] (0xc000d20630) (0xc0029e5540) Stream removed, broadcasting: 5
I0210 13:35:32.873519       8 log.go:172] (0xc000d20630) Go away received
I0210 13:35:32.873600       8 log.go:172] (0xc000d20630) (0xc0029e5360) Stream removed, broadcasting: 1
I0210 13:35:32.873617       8 log.go:172] (0xc000d20630) (0xc0029e54a0) Stream removed, broadcasting: 3
I0210 13:35:32.873627       8 log.go:172] (0xc000d20630) (0xc0029e5540) Stream removed, broadcasting: 5
Feb 10 13:35:32.873: INFO: Exec stderr: ""
Feb 10 13:35:32.873: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2201 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 10 13:35:32.873: INFO: >>> kubeConfig: /root/.kube/config
I0210 13:35:32.960626       8 log.go:172] (0xc00209b8c0) (0xc0028ae500) Create stream
I0210 13:35:32.960877       8 log.go:172] (0xc00209b8c0) (0xc0028ae500) Stream added, broadcasting: 1
I0210 13:35:32.974771       8 log.go:172] (0xc00209b8c0) Reply frame received for 1
I0210 13:35:32.974830       8 log.go:172] (0xc00209b8c0) (0xc00152ea00) Create stream
I0210 13:35:32.974843       8 log.go:172] (0xc00209b8c0) (0xc00152ea00) Stream added, broadcasting: 3
I0210 13:35:32.977757       8 log.go:172] (0xc00209b8c0) Reply frame received for 3
I0210 13:35:32.977854       8 log.go:172] (0xc00209b8c0) (0xc0029e55e0) Create stream
I0210 13:35:32.977868       8 log.go:172] (0xc00209b8c0) (0xc0029e55e0) Stream added, broadcasting: 5
I0210 13:35:32.981227       8 log.go:172] (0xc00209b8c0) Reply frame received for 5
I0210 13:35:33.085307       8 log.go:172] (0xc00209b8c0) Data frame received for 3
I0210 13:35:33.085369       8 log.go:172] (0xc00152ea00) (3) Data frame handling
I0210 13:35:33.085389       8 log.go:172] (0xc00152ea00) (3) Data frame sent
I0210 13:35:33.204678       8 log.go:172] (0xc00209b8c0) Data frame received for 1
I0210 13:35:33.204704       8 log.go:172] (0xc0028ae500) (1) Data frame handling
I0210 13:35:33.204723       8 log.go:172] (0xc0028ae500) (1) Data frame sent
I0210 13:35:33.204738       8 log.go:172] (0xc00209b8c0) (0xc0028ae500) Stream removed, broadcasting: 1
I0210 13:35:33.204976       8 log.go:172] (0xc00209b8c0) (0xc0029e55e0) Stream removed, broadcasting: 5
I0210 13:35:33.205168       8 log.go:172] (0xc00209b8c0) (0xc00152ea00) Stream removed, broadcasting: 3
I0210 13:35:33.205320       8 log.go:172] (0xc00209b8c0) Go away received
I0210 13:35:33.205389       8 log.go:172] (0xc00209b8c0) (0xc0028ae500) Stream removed, broadcasting: 1
I0210 13:35:33.205405       8 log.go:172] (0xc00209b8c0) (0xc00152ea00) Stream removed, broadcasting: 3
I0210 13:35:33.205409       8 log.go:172] (0xc00209b8c0) (0xc0029e55e0) Stream removed, broadcasting: 5
Feb 10 13:35:33.205: INFO: Exec stderr: ""
Feb 10 13:35:33.205: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2201 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 10 13:35:33.205: INFO: >>> kubeConfig: /root/.kube/config
I0210 13:35:33.267320       8 log.go:172] (0xc00202cb00) (0xc001a9a3c0) Create stream
I0210 13:35:33.267554       8 log.go:172] (0xc00202cb00) (0xc001a9a3c0) Stream added, broadcasting: 1
I0210 13:35:33.276486       8 log.go:172] (0xc00202cb00) Reply frame received for 1
I0210 13:35:33.276519       8 log.go:172] (0xc00202cb00) (0xc0028ae5a0) Create stream
I0210 13:35:33.276529       8 log.go:172] (0xc00202cb00) (0xc0028ae5a0) Stream added, broadcasting: 3
I0210 13:35:33.278010       8 log.go:172] (0xc00202cb00) Reply frame received for 3
I0210 13:35:33.278042       8 log.go:172] (0xc00202cb00) (0xc00152eb40) Create stream
I0210 13:35:33.278055       8 log.go:172] (0xc00202cb00) (0xc00152eb40) Stream added, broadcasting: 5
I0210 13:35:33.279982       8 log.go:172] (0xc00202cb00) Reply frame received for 5
I0210 13:35:33.377371       8 log.go:172] (0xc00202cb00) Data frame received for 3
I0210 13:35:33.377400       8 log.go:172] (0xc0028ae5a0) (3) Data frame handling
I0210 13:35:33.377423       8 log.go:172] (0xc0028ae5a0) (3) Data frame sent
I0210 13:35:33.501886       8 log.go:172] (0xc00202cb00) (0xc0028ae5a0) Stream removed, broadcasting: 3
I0210 13:35:33.501999       8 log.go:172] (0xc00202cb00) Data frame received for 1
I0210 13:35:33.502034       8 log.go:172] (0xc00202cb00) (0xc00152eb40) Stream removed, broadcasting: 5
I0210 13:35:33.502087       8 log.go:172] (0xc001a9a3c0) (1) Data frame handling
I0210 13:35:33.502112       8 log.go:172] (0xc001a9a3c0) (1) Data frame sent
I0210 13:35:33.502128       8 log.go:172] (0xc00202cb00) (0xc001a9a3c0) Stream removed, broadcasting: 1
I0210 13:35:33.502150       8 log.go:172] (0xc00202cb00) Go away received
I0210 13:35:33.502345       8 log.go:172] (0xc00202cb00) (0xc001a9a3c0) Stream removed, broadcasting: 1
I0210 13:35:33.502381       8 log.go:172] (0xc00202cb00) (0xc0028ae5a0) Stream removed, broadcasting: 3
I0210 13:35:33.502394       8 log.go:172] (0xc00202cb00) (0xc00152eb40) Stream removed, broadcasting: 5
Feb 10 13:35:33.502: INFO: Exec stderr: ""
Feb 10 13:35:33.502: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2201 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 10 13:35:33.502: INFO: >>> kubeConfig: /root/.kube/config
I0210 13:35:33.591406       8 log.go:172] (0xc000d21ce0) (0xc0029e5a40) Create stream
I0210 13:35:33.591457       8 log.go:172] (0xc000d21ce0) (0xc0029e5a40) Stream added, broadcasting: 1
I0210 13:35:33.603516       8 log.go:172] (0xc000d21ce0) Reply frame received for 1
I0210 13:35:33.603581       8 log.go:172] (0xc000d21ce0) (0xc001a9a460) Create stream
I0210 13:35:33.603598       8 log.go:172] (0xc000d21ce0) (0xc001a9a460) Stream added, broadcasting: 3
I0210 13:35:33.608065       8 log.go:172] (0xc000d21ce0) Reply frame received for 3
I0210 13:35:33.608196       8 log.go:172] (0xc000d21ce0) (0xc0029e5ae0) Create stream
I0210 13:35:33.608207       8 log.go:172] (0xc000d21ce0) (0xc0029e5ae0) Stream added, broadcasting: 5
I0210 13:35:33.612293       8 log.go:172] (0xc000d21ce0) Reply frame received for 5
I0210 13:35:33.714388       8 log.go:172] (0xc000d21ce0) Data frame received for 3
I0210 13:35:33.714428       8 log.go:172] (0xc001a9a460) (3) Data frame handling
I0210 13:35:33.714447       8 log.go:172] (0xc001a9a460) (3) Data frame sent
I0210 13:35:33.990357       8 log.go:172] (0xc000d21ce0) (0xc001a9a460) Stream removed, broadcasting: 3
I0210 13:35:33.990565       8 log.go:172] (0xc000d21ce0) Data frame received for 1
I0210 13:35:33.990596       8 log.go:172] (0xc0029e5a40) (1) Data frame handling
I0210 13:35:33.990653       8 log.go:172] (0xc0029e5a40) (1) Data frame sent
I0210 13:35:33.990710       8 log.go:172] (0xc000d21ce0) (0xc0029e5ae0) Stream removed, broadcasting: 5
I0210 13:35:33.990764       8 log.go:172] (0xc000d21ce0) (0xc0029e5a40) Stream removed, broadcasting: 1
I0210 13:35:33.990883       8 log.go:172] (0xc000d21ce0) Go away received
I0210 13:35:33.991040       8 log.go:172] (0xc000d21ce0) (0xc0029e5a40) Stream removed, broadcasting: 1
I0210 13:35:33.991052       8 log.go:172] (0xc000d21ce0) (0xc001a9a460) Stream removed, broadcasting: 3
I0210 13:35:33.991058       8 log.go:172] (0xc000d21ce0) (0xc0029e5ae0) Stream removed, broadcasting: 5
Feb 10 13:35:33.991: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb 10 13:35:33.991: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2201 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 10 13:35:33.991: INFO: >>> kubeConfig: /root/.kube/config
I0210 13:35:34.075866       8 log.go:172] (0xc0020c0840) (0xc0028aeb40) Create stream
I0210 13:35:34.075941       8 log.go:172] (0xc0020c0840) (0xc0028aeb40) Stream added, broadcasting: 1
I0210 13:35:34.088001       8 log.go:172] (0xc0020c0840) Reply frame received for 1
I0210 13:35:34.088062       8 log.go:172] (0xc0020c0840) (0xc00152ec80) Create stream
I0210 13:35:34.088086       8 log.go:172] (0xc0020c0840) (0xc00152ec80) Stream added, broadcasting: 3
I0210 13:35:34.093060       8 log.go:172] (0xc0020c0840) Reply frame received for 3
I0210 13:35:34.093112       8 log.go:172] (0xc0020c0840) (0xc001a9a500) Create stream
I0210 13:35:34.093123       8 log.go:172] (0xc0020c0840) (0xc001a9a500) Stream added, broadcasting: 5
I0210 13:35:34.099453       8 log.go:172] (0xc0020c0840) Reply frame received for 5
I0210 13:35:34.256531       8 log.go:172] (0xc0020c0840) Data frame received for 3
I0210 13:35:34.256558       8 log.go:172] (0xc00152ec80) (3) Data frame handling
I0210 13:35:34.256589       8 log.go:172] (0xc00152ec80) (3) Data frame sent
I0210 13:35:34.403150       8 log.go:172] (0xc0020c0840) (0xc00152ec80) Stream removed, broadcasting: 3
I0210 13:35:34.403208       8 log.go:172] (0xc0020c0840) Data frame received for 1
I0210 13:35:34.403243       8 log.go:172] (0xc0028aeb40) (1) Data frame handling
I0210 13:35:34.403272       8 log.go:172] (0xc0020c0840) (0xc001a9a500) Stream removed, broadcasting: 5
I0210 13:35:34.403305       8 log.go:172] (0xc0028aeb40) (1) Data frame sent
I0210 13:35:34.403319       8 log.go:172] (0xc0020c0840) (0xc0028aeb40) Stream removed, broadcasting: 1
I0210 13:35:34.403334       8 log.go:172] (0xc0020c0840) Go away received
I0210 13:35:34.403464       8 log.go:172] (0xc0020c0840) (0xc0028aeb40) Stream removed, broadcasting: 1
I0210 13:35:34.403475       8 log.go:172] (0xc0020c0840) (0xc00152ec80) Stream removed, broadcasting: 3
I0210 13:35:34.403479       8 log.go:172] (0xc0020c0840) (0xc001a9a500) Stream removed, broadcasting: 5
Feb 10 13:35:34.403: INFO: Exec stderr: ""
Feb 10 13:35:34.403: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2201 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 10 13:35:34.403: INFO: >>> kubeConfig: /root/.kube/config
I0210 13:35:34.469274       8 log.go:172] (0xc001d2b290) (0xc00152f180) Create stream
I0210 13:35:34.469315       8 log.go:172] (0xc001d2b290) (0xc00152f180) Stream added, broadcasting: 1
I0210 13:35:34.474869       8 log.go:172] (0xc001d2b290) Reply frame received for 1
I0210 13:35:34.474904       8 log.go:172] (0xc001d2b290) (0xc001a9a6e0) Create stream
I0210 13:35:34.474935       8 log.go:172] (0xc001d2b290) (0xc001a9a6e0) Stream added, broadcasting: 3
I0210 13:35:34.477335       8 log.go:172] (0xc001d2b290) Reply frame received for 3
I0210 13:35:34.477443       8 log.go:172] (0xc001d2b290) (0xc0028aebe0) Create stream
I0210 13:35:34.477455       8 log.go:172] (0xc001d2b290) (0xc0028aebe0) Stream added, broadcasting: 5
I0210 13:35:34.479408       8 log.go:172] (0xc001d2b290) Reply frame received for 5
I0210 13:35:34.653164       8 log.go:172] (0xc001d2b290) Data frame received for 3
I0210 13:35:34.653222       8 log.go:172] (0xc001a9a6e0) (3) Data frame handling
I0210 13:35:34.653257       8 log.go:172] (0xc001a9a6e0) (3) Data frame sent
I0210 13:35:34.744213       8 log.go:172] (0xc001d2b290) Data frame received for 1
I0210 13:35:34.744266       8 log.go:172] (0xc00152f180) (1) Data frame handling
I0210 13:35:34.744304       8 log.go:172] (0xc00152f180) (1) Data frame sent
I0210 13:35:34.744340       8 log.go:172] (0xc001d2b290) (0xc00152f180) Stream removed, broadcasting: 1
I0210 13:35:34.745221       8 log.go:172] (0xc001d2b290) (0xc001a9a6e0) Stream removed, broadcasting: 3
I0210 13:35:34.745949       8 log.go:172] (0xc001d2b290) (0xc0028aebe0) Stream removed, broadcasting: 5
I0210 13:35:34.746274       8 log.go:172] (0xc001d2b290) Go away received
I0210 13:35:34.746475       8 log.go:172] (0xc001d2b290) (0xc00152f180) Stream removed, broadcasting: 1
I0210 13:35:34.746517       8 log.go:172] (0xc001d2b290) (0xc001a9a6e0) Stream removed, broadcasting: 3
I0210 13:35:34.746528       8 log.go:172] (0xc001d2b290) (0xc0028aebe0) Stream removed, broadcasting: 5
Feb 10 13:35:34.746: INFO: Exec stderr: ""
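The `cat /etc/hosts` checks above key on a marker comment the kubelet prepends to hosts files it manages; its absence (as for busybox-3, which mounts its own /etc/hosts, or for hostNetwork pods) means the file came from the host or a volume. A sketch of that detection, assuming the marker text matches what the kubelet emits:

```python
def is_kubelet_managed(etc_hosts_text):
    """A kubelet-managed /etc/hosts starts with a marker comment;
    host-sourced or volume-mounted files lack it."""
    return etc_hosts_text.startswith("# Kubernetes-managed hosts file")

managed = "# Kubernetes-managed hosts file.\n127.0.0.1\tlocalhost\n"
host_file = "127.0.0.1\tlocalhost\n"
assert is_kubelet_managed(managed)
assert not is_kubelet_managed(host_file)
```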
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb 10 13:35:34.746: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2201 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 10 13:35:34.746: INFO: >>> kubeConfig: /root/.kube/config
I0210 13:35:34.827733       8 log.go:172] (0xc001d2bb80) (0xc00152f360) Create stream
I0210 13:35:34.827771       8 log.go:172] (0xc001d2bb80) (0xc00152f360) Stream added, broadcasting: 1
I0210 13:35:34.833923       8 log.go:172] (0xc001d2bb80) Reply frame received for 1
I0210 13:35:34.833984       8 log.go:172] (0xc001d2bb80) (0xc001238000) Create stream
I0210 13:35:34.833996       8 log.go:172] (0xc001d2bb80) (0xc001238000) Stream added, broadcasting: 3
I0210 13:35:34.835714       8 log.go:172] (0xc001d2bb80) Reply frame received for 3
I0210 13:35:34.835749       8 log.go:172] (0xc001d2bb80) (0xc00152f540) Create stream
I0210 13:35:34.835764       8 log.go:172] (0xc001d2bb80) (0xc00152f540) Stream added, broadcasting: 5
I0210 13:35:34.837802       8 log.go:172] (0xc001d2bb80) Reply frame received for 5
I0210 13:35:34.924919       8 log.go:172] (0xc001d2bb80) Data frame received for 3
I0210 13:35:34.924988       8 log.go:172] (0xc001238000) (3) Data frame handling
I0210 13:35:34.925004       8 log.go:172] (0xc001238000) (3) Data frame sent
I0210 13:35:35.021651       8 log.go:172] (0xc001d2bb80) (0xc001238000) Stream removed, broadcasting: 3
I0210 13:35:35.021719       8 log.go:172] (0xc001d2bb80) Data frame received for 1
I0210 13:35:35.021744       8 log.go:172] (0xc00152f360) (1) Data frame handling
I0210 13:35:35.021756       8 log.go:172] (0xc001d2bb80) (0xc00152f540) Stream removed, broadcasting: 5
I0210 13:35:35.021770       8 log.go:172] (0xc00152f360) (1) Data frame sent
I0210 13:35:35.021779       8 log.go:172] (0xc001d2bb80) (0xc00152f360) Stream removed, broadcasting: 1
I0210 13:35:35.021792       8 log.go:172] (0xc001d2bb80) Go away received
I0210 13:35:35.021861       8 log.go:172] (0xc001d2bb80) (0xc00152f360) Stream removed, broadcasting: 1
I0210 13:35:35.021871       8 log.go:172] (0xc001d2bb80) (0xc001238000) Stream removed, broadcasting: 3
I0210 13:35:35.021879       8 log.go:172] (0xc001d2bb80) (0xc00152f540) Stream removed, broadcasting: 5
Feb 10 13:35:35.021: INFO: Exec stderr: ""
Feb 10 13:35:35.021: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2201 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 10 13:35:35.021: INFO: >>> kubeConfig: /root/.kube/config
I0210 13:35:35.079123       8 log.go:172] (0xc001217b80) (0xc001238460) Create stream
I0210 13:35:35.079201       8 log.go:172] (0xc001217b80) (0xc001238460) Stream added, broadcasting: 1
I0210 13:35:35.085007       8 log.go:172] (0xc001217b80) Reply frame received for 1
I0210 13:35:35.085047       8 log.go:172] (0xc001217b80) (0xc0028aec80) Create stream
I0210 13:35:35.085064       8 log.go:172] (0xc001217b80) (0xc0028aec80) Stream added, broadcasting: 3
I0210 13:35:35.086440       8 log.go:172] (0xc001217b80) Reply frame received for 3
I0210 13:35:35.086462       8 log.go:172] (0xc001217b80) (0xc00152f5e0) Create stream
I0210 13:35:35.086469       8 log.go:172] (0xc001217b80) (0xc00152f5e0) Stream added, broadcasting: 5
I0210 13:35:35.090586       8 log.go:172] (0xc001217b80) Reply frame received for 5
I0210 13:35:35.168904       8 log.go:172] (0xc001217b80) Data frame received for 3
I0210 13:35:35.168940       8 log.go:172] (0xc0028aec80) (3) Data frame handling
I0210 13:35:35.168969       8 log.go:172] (0xc0028aec80) (3) Data frame sent
I0210 13:35:35.268124       8 log.go:172] (0xc001217b80) (0xc0028aec80) Stream removed, broadcasting: 3
I0210 13:35:35.268235       8 log.go:172] (0xc001217b80) Data frame received for 1
I0210 13:35:35.268245       8 log.go:172] (0xc001238460) (1) Data frame handling
I0210 13:35:35.268259       8 log.go:172] (0xc001238460) (1) Data frame sent
I0210 13:35:35.268274       8 log.go:172] (0xc001217b80) (0xc001238460) Stream removed, broadcasting: 1
I0210 13:35:35.268396       8 log.go:172] (0xc001217b80) (0xc00152f5e0) Stream removed, broadcasting: 5
I0210 13:35:35.268410       8 log.go:172] (0xc001217b80) Go away received
I0210 13:35:35.268484       8 log.go:172] (0xc001217b80) (0xc001238460) Stream removed, broadcasting: 1
I0210 13:35:35.268575       8 log.go:172] (0xc001217b80) (0xc0028aec80) Stream removed, broadcasting: 3
I0210 13:35:35.268594       8 log.go:172] (0xc001217b80) (0xc00152f5e0) Stream removed, broadcasting: 5
Feb 10 13:35:35.268: INFO: Exec stderr: ""
Feb 10 13:35:35.268: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2201 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 10 13:35:35.268: INFO: >>> kubeConfig: /root/.kube/config
I0210 13:35:35.319396       8 log.go:172] (0xc0020c1a20) (0xc0028aee60) Create stream
I0210 13:35:35.319490       8 log.go:172] (0xc0020c1a20) (0xc0028aee60) Stream added, broadcasting: 1
I0210 13:35:35.327867       8 log.go:172] (0xc0020c1a20) Reply frame received for 1
I0210 13:35:35.327910       8 log.go:172] (0xc0020c1a20) (0xc001238640) Create stream
I0210 13:35:35.327917       8 log.go:172] (0xc0020c1a20) (0xc001238640) Stream added, broadcasting: 3
I0210 13:35:35.328904       8 log.go:172] (0xc0020c1a20) Reply frame received for 3
I0210 13:35:35.328926       8 log.go:172] (0xc0020c1a20) (0xc0028aef00) Create stream
I0210 13:35:35.328953       8 log.go:172] (0xc0020c1a20) (0xc0028aef00) Stream added, broadcasting: 5
I0210 13:35:35.330040       8 log.go:172] (0xc0020c1a20) Reply frame received for 5
I0210 13:35:35.402320       8 log.go:172] (0xc0020c1a20) Data frame received for 3
I0210 13:35:35.402364       8 log.go:172] (0xc001238640) (3) Data frame handling
I0210 13:35:35.402384       8 log.go:172] (0xc001238640) (3) Data frame sent
I0210 13:35:35.505385       8 log.go:172] (0xc0020c1a20) (0xc001238640) Stream removed, broadcasting: 3
I0210 13:35:35.505512       8 log.go:172] (0xc0020c1a20) Data frame received for 1
I0210 13:35:35.505541       8 log.go:172] (0xc0020c1a20) (0xc0028aef00) Stream removed, broadcasting: 5
I0210 13:35:35.505580       8 log.go:172] (0xc0028aee60) (1) Data frame handling
I0210 13:35:35.505596       8 log.go:172] (0xc0028aee60) (1) Data frame sent
I0210 13:35:35.505701       8 log.go:172] (0xc0020c1a20) (0xc0028aee60) Stream removed, broadcasting: 1
I0210 13:35:35.506037       8 log.go:172] (0xc0020c1a20) (0xc0028aee60) Stream removed, broadcasting: 1
I0210 13:35:35.506185       8 log.go:172] (0xc0020c1a20) (0xc001238640) Stream removed, broadcasting: 3
I0210 13:35:35.506204       8 log.go:172] (0xc0020c1a20) (0xc0028aef00) Stream removed, broadcasting: 5
I0210 13:35:35.506258       8 log.go:172] (0xc0020c1a20) Go away received
Feb 10 13:35:35.506: INFO: Exec stderr: ""
Feb 10 13:35:35.506: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2201 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 10 13:35:35.506: INFO: >>> kubeConfig: /root/.kube/config
I0210 13:35:35.564623       8 log.go:172] (0xc002164f20) (0xc0029e5e00) Create stream
I0210 13:35:35.564721       8 log.go:172] (0xc002164f20) (0xc0029e5e00) Stream added, broadcasting: 1
I0210 13:35:35.572480       8 log.go:172] (0xc002164f20) Reply frame received for 1
I0210 13:35:35.572515       8 log.go:172] (0xc002164f20) (0xc001a9a8c0) Create stream
I0210 13:35:35.572526       8 log.go:172] (0xc002164f20) (0xc001a9a8c0) Stream added, broadcasting: 3
I0210 13:35:35.574112       8 log.go:172] (0xc002164f20) Reply frame received for 3
I0210 13:35:35.574155       8 log.go:172] (0xc002164f20) (0xc0028aefa0) Create stream
I0210 13:35:35.574173       8 log.go:172] (0xc002164f20) (0xc0028aefa0) Stream added, broadcasting: 5
I0210 13:35:35.576453       8 log.go:172] (0xc002164f20) Reply frame received for 5
I0210 13:35:35.696732       8 log.go:172] (0xc002164f20) Data frame received for 3
I0210 13:35:35.696853       8 log.go:172] (0xc001a9a8c0) (3) Data frame handling
I0210 13:35:35.696921       8 log.go:172] (0xc001a9a8c0) (3) Data frame sent
I0210 13:35:35.797579       8 log.go:172] (0xc002164f20) Data frame received for 1
I0210 13:35:35.797679       8 log.go:172] (0xc002164f20) (0xc001a9a8c0) Stream removed, broadcasting: 3
I0210 13:35:35.797774       8 log.go:172] (0xc0029e5e00) (1) Data frame handling
I0210 13:35:35.797832       8 log.go:172] (0xc0029e5e00) (1) Data frame sent
I0210 13:35:35.797851       8 log.go:172] (0xc002164f20) (0xc0029e5e00) Stream removed, broadcasting: 1
I0210 13:35:35.797950       8 log.go:172] (0xc002164f20) (0xc0028aefa0) Stream removed, broadcasting: 5
I0210 13:35:35.798048       8 log.go:172] (0xc002164f20) Go away received
I0210 13:35:35.798118       8 log.go:172] (0xc002164f20) (0xc0029e5e00) Stream removed, broadcasting: 1
I0210 13:35:35.798166       8 log.go:172] (0xc002164f20) (0xc001a9a8c0) Stream removed, broadcasting: 3
I0210 13:35:35.798183       8 log.go:172] (0xc002164f20) (0xc0028aefa0) Stream removed, broadcasting: 5
Feb 10 13:35:35.798: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:35:35.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-2201" for this suite.
Feb 10 13:36:19.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:36:19.951: INFO: namespace e2e-kubelet-etc-hosts-2201 deletion completed in 44.144433706s

• [SLOW TEST:67.910 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
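The exec sequences above read /etc/hosts and /etc/hosts-original in both containers (busybox-1, busybox-2) of test-host-network-pod. As a hedged sketch of the shape of pod this test exercises (only the pod and container names come from this log; images, commands, and other fields are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-host-network-pod
spec:
  hostNetwork: true          # kubelet does NOT inject its managed /etc/hosts
  containers:
  - name: busybox-1
    image: busybox           # assumed image
    command: ["sleep", "3600"]
  - name: busybox-2
    image: busybox
    command: ["sleep", "3600"]
```

With hostNetwork: true the container sees the node's own /etc/hosts, which is why the STEP above verifies the file is *not* kubelet-managed for this pod.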
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:36:19.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 10 13:36:20.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:36:28.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8367" for this suite.
Feb 10 13:37:10.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:37:10.983: INFO: namespace pods-8367 deletion completed in 42.243069484s

• [SLOW TEST:51.032 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
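The websocket exec test above dials the pod's exec subresource on the API server. A hedged sketch of how that URL is shaped (host, pod, and container names here are illustrative placeholders; only the namespace pods-8367 appears in this run):

```python
# Sketch: constructing the websocket URL for the pods/exec subresource.
# Each argv element of the command is repeated as its own "command" query
# parameter, per the Kubernetes exec subresource API.
from urllib.parse import urlencode

def pod_exec_url(host, namespace, pod, container, command):
    params = [("container", container), ("stdout", "true"), ("stderr", "true")]
    params += [("command", c) for c in command]
    return (f"wss://{host}/api/v1/namespaces/{namespace}"
            f"/pods/{pod}/exec?{urlencode(params)}")

url = pod_exec_url("apiserver:6443", "pods-8367",
                   "pod-exec-websocket", "main", ["echo", "remote execution"])
print(url)
```

The real e2e framework negotiates the channeled streams (stdin/stdout/stderr multiplexing seen in the log.go lines above) on top of this endpoint.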
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:37:10.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-68e67fb9-8571-4f96-98f1-714b1ed09515 in namespace container-probe-6841
Feb 10 13:37:19.124: INFO: Started pod busybox-68e67fb9-8571-4f96-98f1-714b1ed09515 in namespace container-probe-6841
STEP: checking the pod's current state and verifying that restartCount is present
Feb 10 13:37:19.128: INFO: Initial restart count of pod busybox-68e67fb9-8571-4f96-98f1-714b1ed09515 is 0
Feb 10 13:38:09.363: INFO: Restart count of pod container-probe-6841/busybox-68e67fb9-8571-4f96-98f1-714b1ed09515 is now 1 (50.235203536s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:38:09.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6841" for this suite.
Feb 10 13:38:19.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:38:19.624: INFO: namespace container-probe-6841 deletion completed in 10.181818903s

• [SLOW TEST:68.641 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
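The restart observed above (restartCount 0 -> 1 after ~50s) is driven by an exec liveness probe running `cat /tmp/health`. A hedged sketch of a pod that behaves this way (the probe command matches the spec name; the image, timings, and shell script are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-sketch   # illustrative; the run used a generated name
spec:
  containers:
  - name: busybox
    image: busybox
    # Create the health file, then delete it so the probe starts failing
    # and the kubelet restarts the container.
    args:
    - /bin/sh
    - -c
    - touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
```

Once `cat /tmp/health` exits non-zero for failureThreshold consecutive probes, the kubelet kills and restarts the container, which is exactly the restart-count transition the test asserts.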
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:38:19.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb 10 13:38:19.811: INFO: Waiting up to 5m0s for pod "pod-ef3fb1cf-c3c3-48c4-bd05-7b29f63a7832" in namespace "emptydir-8835" to be "success or failure"
Feb 10 13:38:19.829: INFO: Pod "pod-ef3fb1cf-c3c3-48c4-bd05-7b29f63a7832": Phase="Pending", Reason="", readiness=false. Elapsed: 18.069001ms
Feb 10 13:38:21.868: INFO: Pod "pod-ef3fb1cf-c3c3-48c4-bd05-7b29f63a7832": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056835208s
Feb 10 13:38:23.907: INFO: Pod "pod-ef3fb1cf-c3c3-48c4-bd05-7b29f63a7832": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096109124s
Feb 10 13:38:25.924: INFO: Pod "pod-ef3fb1cf-c3c3-48c4-bd05-7b29f63a7832": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112947123s
Feb 10 13:38:27.931: INFO: Pod "pod-ef3fb1cf-c3c3-48c4-bd05-7b29f63a7832": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.120131847s
STEP: Saw pod success
Feb 10 13:38:27.931: INFO: Pod "pod-ef3fb1cf-c3c3-48c4-bd05-7b29f63a7832" satisfied condition "success or failure"
Feb 10 13:38:27.937: INFO: Trying to get logs from node iruya-node pod pod-ef3fb1cf-c3c3-48c4-bd05-7b29f63a7832 container test-container: 
STEP: delete the pod
Feb 10 13:38:28.087: INFO: Waiting for pod pod-ef3fb1cf-c3c3-48c4-bd05-7b29f63a7832 to disappear
Feb 10 13:38:28.106: INFO: Pod pod-ef3fb1cf-c3c3-48c4-bd05-7b29f63a7832 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:38:28.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8835" for this suite.
Feb 10 13:38:34.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:38:34.292: INFO: namespace emptydir-8835 deletion completed in 6.167258541s

• [SLOW TEST:14.667 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
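The "volume on tmpfs" test above mounts an emptyDir backed by memory and checks its mode. A hedged sketch of that pod shape (the mount path, image, and command are assumptions; `medium: Memory` is what makes the volume tmpfs):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-sketch  # illustrative name
spec:
  containers:
  - name: test-container
    image: busybox             # assumed image
    # Print the mount type and the permission bits of the mount point.
    command: ["/bin/sh", "-c", "mount | grep /mnt/test && stat -c %a /mnt/test"]
    volumeMounts:
    - name: test-volume
      mountPath: /mnt/test
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory           # back the volume with tmpfs, not node disk
```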
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:38:34.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 10 13:38:34.415: INFO: Waiting up to 5m0s for pod "pod-a275d180-6fe3-48b0-9582-6d64a29dcc58" in namespace "emptydir-7063" to be "success or failure"
Feb 10 13:38:34.418: INFO: Pod "pod-a275d180-6fe3-48b0-9582-6d64a29dcc58": Phase="Pending", Reason="", readiness=false. Elapsed: 3.122203ms
Feb 10 13:38:36.429: INFO: Pod "pod-a275d180-6fe3-48b0-9582-6d64a29dcc58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013447431s
Feb 10 13:38:38.439: INFO: Pod "pod-a275d180-6fe3-48b0-9582-6d64a29dcc58": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02419997s
Feb 10 13:38:40.449: INFO: Pod "pod-a275d180-6fe3-48b0-9582-6d64a29dcc58": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033789919s
Feb 10 13:38:42.489: INFO: Pod "pod-a275d180-6fe3-48b0-9582-6d64a29dcc58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074109973s
STEP: Saw pod success
Feb 10 13:38:42.489: INFO: Pod "pod-a275d180-6fe3-48b0-9582-6d64a29dcc58" satisfied condition "success or failure"
Feb 10 13:38:42.494: INFO: Trying to get logs from node iruya-node pod pod-a275d180-6fe3-48b0-9582-6d64a29dcc58 container test-container: 
STEP: delete the pod
Feb 10 13:38:42.572: INFO: Waiting for pod pod-a275d180-6fe3-48b0-9582-6d64a29dcc58 to disappear
Feb 10 13:38:42.578: INFO: Pod pod-a275d180-6fe3-48b0-9582-6d64a29dcc58 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:38:42.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7063" for this suite.
Feb 10 13:38:48.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:38:48.805: INFO: namespace emptydir-7063 deletion completed in 6.222159647s

• [SLOW TEST:14.513 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
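The "(non-root,0644,default)" spec name above encodes the variant under test: a non-root user, a file created with mode 0644, on the default (node-disk) emptyDir medium. A hedged sketch of that combination (image, paths, and commands are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-sketch    # illustrative name
spec:
  securityContext:
    runAsUser: 1001             # the "non-root" part of the spec name
  containers:
  - name: test-container
    image: busybox              # assumed image
    # Write a file, set it to 0644, and echo the resulting mode bits.
    command: ["/bin/sh", "-c",
      "echo content > /mnt/test/file && chmod 0644 /mnt/test/file && stat -c %a /mnt/test/file"]
    volumeMounts:
    - name: test-volume
      mountPath: /mnt/test
  volumes:
  - name: test-volume
    emptyDir: {}                # "default" medium: the node's filesystem
```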
S
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:38:48.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 10 13:38:48.924: INFO: Creating deployment "test-recreate-deployment"
Feb 10 13:38:48.931: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Feb 10 13:38:48.936: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Feb 10 13:38:50.963: INFO: Waiting deployment "test-recreate-deployment" to complete
Feb 10 13:38:50.973: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716938729, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716938729, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716938729, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716938728, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 10 13:38:52.982: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716938729, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716938729, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716938729, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716938728, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 10 13:38:54.979: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716938729, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716938729, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716938729, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716938728, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 10 13:38:56.979: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716938729, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716938729, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716938729, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716938728, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 10 13:38:58.980: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb 10 13:38:58.991: INFO: Updating deployment test-recreate-deployment
Feb 10 13:38:58.991: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
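The object dumps below show the Deployment driving this test: strategy type Recreate, one replica, pods labeled name=sample-pod-3, with revision 1 running gcr.io/kubernetes-e2e-test-images/redis:1.0 and the triggered rollout swapping in docker.io/library/nginx:1.14-alpine. A hedged reconstruction of the manifest from those dumps (any field not present in the dumps is an assumption):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate              # old pods are fully deleted before new ones start
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: redis             # revision 1; the rollout updates the image
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```

Because the strategy is Recreate rather than RollingUpdate, the test can watch the pod list and assert that pods from the two revisions never run concurrently.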
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 10 13:38:59.311: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-2763,SelfLink:/apis/apps/v1/namespaces/deployment-2763/deployments/test-recreate-deployment,UID:8aed085e-f47f-4d48-8fc9-c05f009d6ef4,ResourceVersion:23823852,Generation:2,CreationTimestamp:2020-02-10 13:38:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-10 13:38:59 +0000 UTC 2020-02-10 13:38:59 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-10 13:38:59 +0000 UTC 2020-02-10 13:38:48 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Feb 10 13:38:59.317: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-2763,SelfLink:/apis/apps/v1/namespaces/deployment-2763/replicasets/test-recreate-deployment-5c8c9cc69d,UID:d4472ca9-93ba-48df-a155-c1c70d6ffb2f,ResourceVersion:23823850,Generation:1,CreationTimestamp:2020-02-10 13:38:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 8aed085e-f47f-4d48-8fc9-c05f009d6ef4 0xc002c93287 0xc002c93288}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 10 13:38:59.317: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb 10 13:38:59.317: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-2763,SelfLink:/apis/apps/v1/namespaces/deployment-2763/replicasets/test-recreate-deployment-6df85df6b9,UID:6f2cdb5a-2702-49e7-b64d-6e1bfda7e62f,ResourceVersion:23823841,Generation:2,CreationTimestamp:2020-02-10 13:38:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 8aed085e-f47f-4d48-8fc9-c05f009d6ef4 0xc002c93357 0xc002c93358}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 10 13:38:59.327: INFO: Pod "test-recreate-deployment-5c8c9cc69d-dq2ts" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-dq2ts,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-2763,SelfLink:/api/v1/namespaces/deployment-2763/pods/test-recreate-deployment-5c8c9cc69d-dq2ts,UID:33219850-b3d4-4f61-8686-c2fdfff734b8,ResourceVersion:23823853,Generation:0,CreationTimestamp:2020-02-10 13:38:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d d4472ca9-93ba-48df-a155-c1c70d6ffb2f 0xc002c93c77 0xc002c93c78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xq2j6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xq2j6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xq2j6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002c93cf0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c93d10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 13:38:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 13:38:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 13:38:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 13:38:59 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-10 13:38:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:38:59.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2763" for this suite.
Feb 10 13:39:05.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:39:05.502: INFO: namespace deployment-2763 deletion completed in 6.171055833s

• [SLOW TEST:16.698 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:39:05.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 10 13:39:17.316: INFO: Successfully updated pod "pod-update-d382824c-5d73-4a23-ac44-e7d91fc552d4"
STEP: verifying the updated pod is in kubernetes
Feb 10 13:39:17.435: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:39:17.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8108" for this suite.
Feb 10 13:39:39.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:39:39.573: INFO: namespace pods-8108 deletion completed in 22.131131811s

• [SLOW TEST:34.071 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:39:39.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Feb 10 13:39:47.750: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Feb 10 13:39:57.993: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:39:57.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4109" for this suite.
Feb 10 13:40:04.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:40:04.122: INFO: namespace pods-4109 deletion completed in 6.11740235s

• [SLOW TEST:24.548 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:40:04.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 10 13:40:22.325: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 10 13:40:22.336: INFO: Pod pod-with-poststart-http-hook still exists
Feb 10 13:40:24.336: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 10 13:40:24.346: INFO: Pod pod-with-poststart-http-hook still exists
Feb 10 13:40:26.336: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 10 13:40:26.345: INFO: Pod pod-with-poststart-http-hook still exists
Feb 10 13:40:28.336: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 10 13:40:28.350: INFO: Pod pod-with-poststart-http-hook still exists
Feb 10 13:40:30.336: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 10 13:40:30.346: INFO: Pod pod-with-poststart-http-hook still exists
Feb 10 13:40:32.336: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 10 13:40:32.344: INFO: Pod pod-with-poststart-http-hook still exists
Feb 10 13:40:34.336: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 10 13:40:34.352: INFO: Pod pod-with-poststart-http-hook still exists
Feb 10 13:40:36.336: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 10 13:40:36.345: INFO: Pod pod-with-poststart-http-hook still exists
Feb 10 13:40:38.336: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 10 13:40:38.346: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:40:38.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3982" for this suite.
Feb 10 13:41:08.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:41:08.549: INFO: namespace container-lifecycle-hook-3982 deletion completed in 30.195938901s

• [SLOW TEST:64.427 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:41:08.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 10 13:41:08.710: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb 10 13:41:08.737: INFO: Number of nodes with available pods: 0
Feb 10 13:41:08.737: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:41:09.753: INFO: Number of nodes with available pods: 0
Feb 10 13:41:09.753: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:41:11.295: INFO: Number of nodes with available pods: 0
Feb 10 13:41:11.295: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:41:11.753: INFO: Number of nodes with available pods: 0
Feb 10 13:41:11.753: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:41:12.771: INFO: Number of nodes with available pods: 0
Feb 10 13:41:12.771: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:41:15.323: INFO: Number of nodes with available pods: 0
Feb 10 13:41:15.323: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:41:15.756: INFO: Number of nodes with available pods: 0
Feb 10 13:41:15.756: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:41:16.749: INFO: Number of nodes with available pods: 0
Feb 10 13:41:16.750: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:41:17.778: INFO: Number of nodes with available pods: 0
Feb 10 13:41:17.778: INFO: Node iruya-node is running more than one daemon pod
Feb 10 13:41:18.825: INFO: Number of nodes with available pods: 2
Feb 10 13:41:18.825: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb 10 13:41:18.894: INFO: Wrong image for pod: daemon-set-hgmz5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:18.894: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:19.933: INFO: Wrong image for pod: daemon-set-hgmz5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:19.933: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:20.931: INFO: Wrong image for pod: daemon-set-hgmz5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:20.931: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:21.925: INFO: Wrong image for pod: daemon-set-hgmz5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:21.925: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:22.926: INFO: Wrong image for pod: daemon-set-hgmz5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:22.926: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:23.951: INFO: Wrong image for pod: daemon-set-hgmz5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:23.951: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:24.925: INFO: Wrong image for pod: daemon-set-hgmz5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:24.925: INFO: Pod daemon-set-hgmz5 is not available
Feb 10 13:41:24.925: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:25.979: INFO: Wrong image for pod: daemon-set-hgmz5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:25.979: INFO: Pod daemon-set-hgmz5 is not available
Feb 10 13:41:25.979: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:26.926: INFO: Wrong image for pod: daemon-set-hgmz5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:26.926: INFO: Pod daemon-set-hgmz5 is not available
Feb 10 13:41:26.926: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:27.946: INFO: Wrong image for pod: daemon-set-hgmz5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:27.946: INFO: Pod daemon-set-hgmz5 is not available
Feb 10 13:41:27.946: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:28.924: INFO: Wrong image for pod: daemon-set-hgmz5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:28.924: INFO: Pod daemon-set-hgmz5 is not available
Feb 10 13:41:28.924: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:29.955: INFO: Wrong image for pod: daemon-set-hgmz5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:29.955: INFO: Pod daemon-set-hgmz5 is not available
Feb 10 13:41:29.956: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:30.926: INFO: Wrong image for pod: daemon-set-hgmz5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:30.926: INFO: Pod daemon-set-hgmz5 is not available
Feb 10 13:41:30.926: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:31.924: INFO: Wrong image for pod: daemon-set-hgmz5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:31.924: INFO: Pod daemon-set-hgmz5 is not available
Feb 10 13:41:31.924: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:32.925: INFO: Wrong image for pod: daemon-set-hgmz5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:32.925: INFO: Pod daemon-set-hgmz5 is not available
Feb 10 13:41:32.925: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:33.925: INFO: Wrong image for pod: daemon-set-hgmz5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:33.925: INFO: Pod daemon-set-hgmz5 is not available
Feb 10 13:41:33.925: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:34.923: INFO: Wrong image for pod: daemon-set-hgmz5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:34.923: INFO: Pod daemon-set-hgmz5 is not available
Feb 10 13:41:34.923: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:35.929: INFO: Wrong image for pod: daemon-set-hgmz5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:35.929: INFO: Pod daemon-set-hgmz5 is not available
Feb 10 13:41:35.929: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:36.935: INFO: Pod daemon-set-hzfsh is not available
Feb 10 13:41:36.936: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:37.931: INFO: Pod daemon-set-hzfsh is not available
Feb 10 13:41:37.931: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:38.934: INFO: Pod daemon-set-hzfsh is not available
Feb 10 13:41:38.934: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:39.929: INFO: Pod daemon-set-hzfsh is not available
Feb 10 13:41:39.929: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:40.926: INFO: Pod daemon-set-hzfsh is not available
Feb 10 13:41:40.926: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:41.930: INFO: Pod daemon-set-hzfsh is not available
Feb 10 13:41:41.930: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:43.575: INFO: Pod daemon-set-hzfsh is not available
Feb 10 13:41:43.575: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:43.926: INFO: Pod daemon-set-hzfsh is not available
Feb 10 13:41:43.926: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:44.932: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:45.943: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:46.932: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:47.936: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:48.924: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:48.924: INFO: Pod daemon-set-jvz6z is not available
Feb 10 13:41:49.930: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:49.931: INFO: Pod daemon-set-jvz6z is not available
Feb 10 13:41:50.926: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:50.926: INFO: Pod daemon-set-jvz6z is not available
Feb 10 13:41:51.932: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:51.932: INFO: Pod daemon-set-jvz6z is not available
Feb 10 13:41:52.926: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:52.926: INFO: Pod daemon-set-jvz6z is not available
Feb 10 13:41:53.927: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:53.927: INFO: Pod daemon-set-jvz6z is not available
Feb 10 13:41:54.931: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:54.931: INFO: Pod daemon-set-jvz6z is not available
Feb 10 13:41:55.925: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:55.925: INFO: Pod daemon-set-jvz6z is not available
Feb 10 13:41:56.932: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:56.933: INFO: Pod daemon-set-jvz6z is not available
Feb 10 13:41:57.954: INFO: Wrong image for pod: daemon-set-jvz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 10 13:41:57.954: INFO: Pod daemon-set-jvz6z is not available
Feb 10 13:41:59.361: INFO: Pod daemon-set-85bj9 is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb 10 13:41:59.384: INFO: Number of nodes with available pods: 1
Feb 10 13:41:59.384: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 10 13:42:00.398: INFO: Number of nodes with available pods: 1
Feb 10 13:42:00.398: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 10 13:42:01.398: INFO: Number of nodes with available pods: 1
Feb 10 13:42:01.398: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 10 13:42:02.708: INFO: Number of nodes with available pods: 1
Feb 10 13:42:02.708: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 10 13:42:03.400: INFO: Number of nodes with available pods: 1
Feb 10 13:42:03.400: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 10 13:42:04.493: INFO: Number of nodes with available pods: 1
Feb 10 13:42:04.493: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 10 13:42:05.402: INFO: Number of nodes with available pods: 1
Feb 10 13:42:05.402: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 10 13:42:06.400: INFO: Number of nodes with available pods: 2
Feb 10 13:42:06.400: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3379, will wait for the garbage collector to delete the pods
Feb 10 13:42:06.489: INFO: Deleting DaemonSet.extensions daemon-set took: 13.446958ms
Feb 10 13:42:06.890: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.694261ms
Feb 10 13:42:17.897: INFO: Number of nodes with available pods: 0
Feb 10 13:42:17.897: INFO: Number of running nodes: 0, number of available pods: 0
Feb 10 13:42:17.900: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3379/daemonsets","resourceVersion":"23824315"},"items":null}

Feb 10 13:42:17.903: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3379/pods","resourceVersion":"23824315"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:42:17.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3379" for this suite.
Feb 10 13:42:23.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:42:24.082: INFO: namespace daemonsets-3379 deletion completed in 6.158669104s

• [SLOW TEST:75.532 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:42:24.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 10 13:42:24.182: INFO: Waiting up to 5m0s for pod "downwardapi-volume-41d741c3-ec9a-4db2-8fe1-e168198d5a43" in namespace "projected-5315" to be "success or failure"
Feb 10 13:42:24.210: INFO: Pod "downwardapi-volume-41d741c3-ec9a-4db2-8fe1-e168198d5a43": Phase="Pending", Reason="", readiness=false. Elapsed: 27.744565ms
Feb 10 13:42:26.220: INFO: Pod "downwardapi-volume-41d741c3-ec9a-4db2-8fe1-e168198d5a43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038121985s
Feb 10 13:42:28.226: INFO: Pod "downwardapi-volume-41d741c3-ec9a-4db2-8fe1-e168198d5a43": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044385138s
Feb 10 13:42:30.237: INFO: Pod "downwardapi-volume-41d741c3-ec9a-4db2-8fe1-e168198d5a43": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054868199s
Feb 10 13:42:32.246: INFO: Pod "downwardapi-volume-41d741c3-ec9a-4db2-8fe1-e168198d5a43": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064339069s
Feb 10 13:42:34.253: INFO: Pod "downwardapi-volume-41d741c3-ec9a-4db2-8fe1-e168198d5a43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.070885989s
STEP: Saw pod success
Feb 10 13:42:34.253: INFO: Pod "downwardapi-volume-41d741c3-ec9a-4db2-8fe1-e168198d5a43" satisfied condition "success or failure"
Feb 10 13:42:34.257: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-41d741c3-ec9a-4db2-8fe1-e168198d5a43 container client-container: 
STEP: delete the pod
Feb 10 13:42:34.309: INFO: Waiting for pod downwardapi-volume-41d741c3-ec9a-4db2-8fe1-e168198d5a43 to disappear
Feb 10 13:42:34.313: INFO: Pod downwardapi-volume-41d741c3-ec9a-4db2-8fe1-e168198d5a43 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:42:34.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5315" for this suite.
Feb 10 13:42:40.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:42:40.494: INFO: namespace projected-5315 deletion completed in 6.17347982s

• [SLOW TEST:16.411 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:42:40.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 10 13:42:40.592: INFO: Waiting up to 5m0s for pod "downward-api-1ac001fa-eb5b-4d86-81b3-c828435d4a45" in namespace "downward-api-3576" to be "success or failure"
Feb 10 13:42:40.647: INFO: Pod "downward-api-1ac001fa-eb5b-4d86-81b3-c828435d4a45": Phase="Pending", Reason="", readiness=false. Elapsed: 55.245974ms
Feb 10 13:42:42.662: INFO: Pod "downward-api-1ac001fa-eb5b-4d86-81b3-c828435d4a45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070799209s
Feb 10 13:42:44.672: INFO: Pod "downward-api-1ac001fa-eb5b-4d86-81b3-c828435d4a45": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080416463s
Feb 10 13:42:46.681: INFO: Pod "downward-api-1ac001fa-eb5b-4d86-81b3-c828435d4a45": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089519849s
Feb 10 13:42:48.694: INFO: Pod "downward-api-1ac001fa-eb5b-4d86-81b3-c828435d4a45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.102826571s
STEP: Saw pod success
Feb 10 13:42:48.695: INFO: Pod "downward-api-1ac001fa-eb5b-4d86-81b3-c828435d4a45" satisfied condition "success or failure"
Feb 10 13:42:48.699: INFO: Trying to get logs from node iruya-node pod downward-api-1ac001fa-eb5b-4d86-81b3-c828435d4a45 container dapi-container: 
STEP: delete the pod
Feb 10 13:42:48.799: INFO: Waiting for pod downward-api-1ac001fa-eb5b-4d86-81b3-c828435d4a45 to disappear
Feb 10 13:42:48.805: INFO: Pod downward-api-1ac001fa-eb5b-4d86-81b3-c828435d4a45 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:42:48.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3576" for this suite.
Feb 10 13:42:54.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:42:54.995: INFO: namespace downward-api-3576 deletion completed in 6.183512249s

• [SLOW TEST:14.501 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:42:54.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-234e5947-7a0c-4dbf-973d-7b67fbb76a55
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-234e5947-7a0c-4dbf-973d-7b67fbb76a55
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:44:33.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1934" for this suite.
Feb 10 13:44:55.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:44:55.406: INFO: namespace projected-1934 deletion completed in 22.16230987s

• [SLOW TEST:120.410 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:44:55.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-956a35cf-3ec9-4656-ba62-a5500f2ab352
STEP: Creating a pod to test consume configMaps
Feb 10 13:44:55.532: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fe6580e9-36d1-4e5b-bf33-95d6bf9f91ee" in namespace "projected-9917" to be "success or failure"
Feb 10 13:44:55.558: INFO: Pod "pod-projected-configmaps-fe6580e9-36d1-4e5b-bf33-95d6bf9f91ee": Phase="Pending", Reason="", readiness=false. Elapsed: 25.936301ms
Feb 10 13:44:57.567: INFO: Pod "pod-projected-configmaps-fe6580e9-36d1-4e5b-bf33-95d6bf9f91ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035201308s
Feb 10 13:44:59.616: INFO: Pod "pod-projected-configmaps-fe6580e9-36d1-4e5b-bf33-95d6bf9f91ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084009852s
Feb 10 13:45:01.892: INFO: Pod "pod-projected-configmaps-fe6580e9-36d1-4e5b-bf33-95d6bf9f91ee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.359842469s
Feb 10 13:45:03.907: INFO: Pod "pod-projected-configmaps-fe6580e9-36d1-4e5b-bf33-95d6bf9f91ee": Phase="Running", Reason="", readiness=true. Elapsed: 8.374957987s
Feb 10 13:45:05.917: INFO: Pod "pod-projected-configmaps-fe6580e9-36d1-4e5b-bf33-95d6bf9f91ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.384746345s
STEP: Saw pod success
Feb 10 13:45:05.917: INFO: Pod "pod-projected-configmaps-fe6580e9-36d1-4e5b-bf33-95d6bf9f91ee" satisfied condition "success or failure"
Feb 10 13:45:05.923: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-fe6580e9-36d1-4e5b-bf33-95d6bf9f91ee container projected-configmap-volume-test: 
STEP: delete the pod
Feb 10 13:45:06.074: INFO: Waiting for pod pod-projected-configmaps-fe6580e9-36d1-4e5b-bf33-95d6bf9f91ee to disappear
Feb 10 13:45:06.085: INFO: Pod pod-projected-configmaps-fe6580e9-36d1-4e5b-bf33-95d6bf9f91ee no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:45:06.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9917" for this suite.
Feb 10 13:45:12.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:45:12.243: INFO: namespace projected-9917 deletion completed in 6.15452199s

• [SLOW TEST:16.836 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:45:12.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-704.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-704.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-704.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-704.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-704.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-704.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-704.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-704.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-704.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-704.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-704.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-704.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-704.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 40.73.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.73.40_udp@PTR;check="$$(dig +tcp +noall +answer +search 40.73.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.73.40_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-704.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-704.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-704.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-704.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-704.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-704.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-704.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-704.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-704.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-704.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-704.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-704.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-704.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 40.73.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.73.40_udp@PTR;check="$$(dig +tcp +noall +answer +search 40.73.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.73.40_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 10 13:45:28.507: INFO: Unable to read wheezy_udp@dns-test-service.dns-704.svc.cluster.local from pod dns-704/dns-test-1f274283-6b39-4a01-a15d-41191f04da84: the server could not find the requested resource (get pods dns-test-1f274283-6b39-4a01-a15d-41191f04da84)
Feb 10 13:45:28.514: INFO: Unable to read wheezy_tcp@dns-test-service.dns-704.svc.cluster.local from pod dns-704/dns-test-1f274283-6b39-4a01-a15d-41191f04da84: the server could not find the requested resource (get pods dns-test-1f274283-6b39-4a01-a15d-41191f04da84)
Feb 10 13:45:28.519: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-704.svc.cluster.local from pod dns-704/dns-test-1f274283-6b39-4a01-a15d-41191f04da84: the server could not find the requested resource (get pods dns-test-1f274283-6b39-4a01-a15d-41191f04da84)
Feb 10 13:45:28.524: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-704.svc.cluster.local from pod dns-704/dns-test-1f274283-6b39-4a01-a15d-41191f04da84: the server could not find the requested resource (get pods dns-test-1f274283-6b39-4a01-a15d-41191f04da84)
Feb 10 13:45:28.529: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-704.svc.cluster.local from pod dns-704/dns-test-1f274283-6b39-4a01-a15d-41191f04da84: the server could not find the requested resource (get pods dns-test-1f274283-6b39-4a01-a15d-41191f04da84)
Feb 10 13:45:28.534: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-704.svc.cluster.local from pod dns-704/dns-test-1f274283-6b39-4a01-a15d-41191f04da84: the server could not find the requested resource (get pods dns-test-1f274283-6b39-4a01-a15d-41191f04da84)
Feb 10 13:45:28.539: INFO: Unable to read wheezy_udp@PodARecord from pod dns-704/dns-test-1f274283-6b39-4a01-a15d-41191f04da84: the server could not find the requested resource (get pods dns-test-1f274283-6b39-4a01-a15d-41191f04da84)
Feb 10 13:45:28.545: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-704/dns-test-1f274283-6b39-4a01-a15d-41191f04da84: the server could not find the requested resource (get pods dns-test-1f274283-6b39-4a01-a15d-41191f04da84)
Feb 10 13:45:28.549: INFO: Unable to read 10.102.73.40_udp@PTR from pod dns-704/dns-test-1f274283-6b39-4a01-a15d-41191f04da84: the server could not find the requested resource (get pods dns-test-1f274283-6b39-4a01-a15d-41191f04da84)
Feb 10 13:45:28.553: INFO: Unable to read 10.102.73.40_tcp@PTR from pod dns-704/dns-test-1f274283-6b39-4a01-a15d-41191f04da84: the server could not find the requested resource (get pods dns-test-1f274283-6b39-4a01-a15d-41191f04da84)
Feb 10 13:45:28.559: INFO: Unable to read jessie_udp@dns-test-service.dns-704.svc.cluster.local from pod dns-704/dns-test-1f274283-6b39-4a01-a15d-41191f04da84: the server could not find the requested resource (get pods dns-test-1f274283-6b39-4a01-a15d-41191f04da84)
Feb 10 13:45:28.565: INFO: Unable to read jessie_tcp@dns-test-service.dns-704.svc.cluster.local from pod dns-704/dns-test-1f274283-6b39-4a01-a15d-41191f04da84: the server could not find the requested resource (get pods dns-test-1f274283-6b39-4a01-a15d-41191f04da84)
Feb 10 13:45:28.575: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-704.svc.cluster.local from pod dns-704/dns-test-1f274283-6b39-4a01-a15d-41191f04da84: the server could not find the requested resource (get pods dns-test-1f274283-6b39-4a01-a15d-41191f04da84)
Feb 10 13:45:28.587: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-704.svc.cluster.local from pod dns-704/dns-test-1f274283-6b39-4a01-a15d-41191f04da84: the server could not find the requested resource (get pods dns-test-1f274283-6b39-4a01-a15d-41191f04da84)
Feb 10 13:45:28.594: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-704.svc.cluster.local from pod dns-704/dns-test-1f274283-6b39-4a01-a15d-41191f04da84: the server could not find the requested resource (get pods dns-test-1f274283-6b39-4a01-a15d-41191f04da84)
Feb 10 13:45:28.600: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-704.svc.cluster.local from pod dns-704/dns-test-1f274283-6b39-4a01-a15d-41191f04da84: the server could not find the requested resource (get pods dns-test-1f274283-6b39-4a01-a15d-41191f04da84)
Feb 10 13:45:28.605: INFO: Unable to read jessie_udp@PodARecord from pod dns-704/dns-test-1f274283-6b39-4a01-a15d-41191f04da84: the server could not find the requested resource (get pods dns-test-1f274283-6b39-4a01-a15d-41191f04da84)
Feb 10 13:45:28.609: INFO: Unable to read jessie_tcp@PodARecord from pod dns-704/dns-test-1f274283-6b39-4a01-a15d-41191f04da84: the server could not find the requested resource (get pods dns-test-1f274283-6b39-4a01-a15d-41191f04da84)
Feb 10 13:45:28.613: INFO: Unable to read 10.102.73.40_udp@PTR from pod dns-704/dns-test-1f274283-6b39-4a01-a15d-41191f04da84: the server could not find the requested resource (get pods dns-test-1f274283-6b39-4a01-a15d-41191f04da84)
Feb 10 13:45:28.619: INFO: Unable to read 10.102.73.40_tcp@PTR from pod dns-704/dns-test-1f274283-6b39-4a01-a15d-41191f04da84: the server could not find the requested resource (get pods dns-test-1f274283-6b39-4a01-a15d-41191f04da84)
Feb 10 13:45:28.619: INFO: Lookups using dns-704/dns-test-1f274283-6b39-4a01-a15d-41191f04da84 failed for: [wheezy_udp@dns-test-service.dns-704.svc.cluster.local wheezy_tcp@dns-test-service.dns-704.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-704.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-704.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-704.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-704.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.102.73.40_udp@PTR 10.102.73.40_tcp@PTR jessie_udp@dns-test-service.dns-704.svc.cluster.local jessie_tcp@dns-test-service.dns-704.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-704.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-704.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-704.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-704.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.102.73.40_udp@PTR 10.102.73.40_tcp@PTR]

Feb 10 13:45:33.923: INFO: DNS probes using dns-704/dns-test-1f274283-6b39-4a01-a15d-41191f04da84 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:45:34.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-704" for this suite.
Feb 10 13:45:40.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:45:40.624: INFO: namespace dns-704 deletion completed in 6.179121008s

• [SLOW TEST:28.381 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:45:40.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 10 13:46:12.759: INFO: Container started at 2020-02-10 13:45:47 +0000 UTC, pod became ready at 2020-02-10 13:46:11 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:46:12.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2122" for this suite.
Feb 10 13:46:52.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:46:53.043: INFO: namespace container-probe-2122 deletion completed in 40.280940401s

• [SLOW TEST:72.419 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:46:53.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 10 13:46:53.161: INFO: Waiting up to 5m0s for pod "downwardapi-volume-10418d20-e362-4793-a43b-7509066d9fd3" in namespace "projected-5949" to be "success or failure"
Feb 10 13:46:53.168: INFO: Pod "downwardapi-volume-10418d20-e362-4793-a43b-7509066d9fd3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.722541ms
Feb 10 13:46:55.177: INFO: Pod "downwardapi-volume-10418d20-e362-4793-a43b-7509066d9fd3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01598524s
Feb 10 13:46:57.202: INFO: Pod "downwardapi-volume-10418d20-e362-4793-a43b-7509066d9fd3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040194433s
Feb 10 13:46:59.212: INFO: Pod "downwardapi-volume-10418d20-e362-4793-a43b-7509066d9fd3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050978327s
Feb 10 13:47:02.348: INFO: Pod "downwardapi-volume-10418d20-e362-4793-a43b-7509066d9fd3": Phase="Pending", Reason="", readiness=false. Elapsed: 9.186642761s
Feb 10 13:47:04.355: INFO: Pod "downwardapi-volume-10418d20-e362-4793-a43b-7509066d9fd3": Phase="Pending", Reason="", readiness=false. Elapsed: 11.193353941s
Feb 10 13:47:06.365: INFO: Pod "downwardapi-volume-10418d20-e362-4793-a43b-7509066d9fd3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.203934173s
STEP: Saw pod success
Feb 10 13:47:06.365: INFO: Pod "downwardapi-volume-10418d20-e362-4793-a43b-7509066d9fd3" satisfied condition "success or failure"
Feb 10 13:47:06.369: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-10418d20-e362-4793-a43b-7509066d9fd3 container client-container: 
STEP: delete the pod
Feb 10 13:47:06.492: INFO: Waiting for pod downwardapi-volume-10418d20-e362-4793-a43b-7509066d9fd3 to disappear
Feb 10 13:47:06.503: INFO: Pod downwardapi-volume-10418d20-e362-4793-a43b-7509066d9fd3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:47:06.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5949" for this suite.
Feb 10 13:47:12.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:47:12.749: INFO: namespace projected-5949 deletion completed in 6.236697656s

• [SLOW TEST:19.706 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:47:12.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Feb 10 13:47:12.875: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Feb 10 13:47:13.623: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Feb 10 13:47:15.924: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716939233, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716939233, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716939233, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716939233, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
(the same deployment status was logged unchanged at 13:47:17.962, 13:47:19.931, 13:47:21.937, and 13:47:23.934)
Feb 10 13:47:29.715: INFO: Waited 3.776508274s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:47:30.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-3152" for this suite.
Feb 10 13:47:36.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:47:36.614: INFO: namespace aggregator-3152 deletion completed in 6.143453829s

• [SLOW TEST:23.864 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
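The Aggregator test above polls the sample-apiserver Deployment status every couple of seconds until it reports available. Below is a minimal sketch of that poll-with-timeout pattern (Python for brevity — the e2e suite itself is Go, and the helper names here are hypothetical, not the framework's actual functions):

```python
import time

def wait_for(condition, timeout=30.0, interval=2.0):
    """Poll `condition` every `interval` seconds until it returns True or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Simulated deployment status, mirroring the fields dumped in the log above.
status = {"replicas": 1, "ready_replicas": 0, "available_replicas": 0}

def deployment_available():
    return status["available_replicas"] >= status["replicas"]

# In the real test the status is refreshed from the API server on each
# iteration; here we flip it after a few polls just to exercise the loop.
polls = {"n": 0}
def refresh_then_check():
    polls["n"] += 1
    if polls["n"] >= 3:
        status["ready_replicas"] = status["available_replicas"] = 1
    return deployment_available()

ready = wait_for(refresh_then_check, timeout=10.0, interval=0.01)
```

The real framework additionally gives up early when the Deployment reports a terminal failure condition; this sketch only models the happy-path timeout loop.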
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:47:36.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 10 13:47:36.786: INFO: Waiting up to 5m0s for pod "pod-08a13ba1-8c5d-4c39-b2f1-f464d435a1b2" in namespace "emptydir-1321" to be "success or failure"
Feb 10 13:47:36.796: INFO: Pod "pod-08a13ba1-8c5d-4c39-b2f1-f464d435a1b2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.095101ms
Feb 10 13:47:38.805: INFO: Pod "pod-08a13ba1-8c5d-4c39-b2f1-f464d435a1b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018614667s
Feb 10 13:47:40.813: INFO: Pod "pod-08a13ba1-8c5d-4c39-b2f1-f464d435a1b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026729125s
Feb 10 13:47:42.820: INFO: Pod "pod-08a13ba1-8c5d-4c39-b2f1-f464d435a1b2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033209704s
Feb 10 13:47:44.827: INFO: Pod "pod-08a13ba1-8c5d-4c39-b2f1-f464d435a1b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041133542s
STEP: Saw pod success
Feb 10 13:47:44.828: INFO: Pod "pod-08a13ba1-8c5d-4c39-b2f1-f464d435a1b2" satisfied condition "success or failure"
Feb 10 13:47:44.831: INFO: Trying to get logs from node iruya-node pod pod-08a13ba1-8c5d-4c39-b2f1-f464d435a1b2 container test-container: 
STEP: delete the pod
Feb 10 13:47:44.956: INFO: Waiting for pod pod-08a13ba1-8c5d-4c39-b2f1-f464d435a1b2 to disappear
Feb 10 13:47:44.967: INFO: Pod pod-08a13ba1-8c5d-4c39-b2f1-f464d435a1b2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:47:44.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1321" for this suite.
Feb 10 13:47:50.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:47:51.116: INFO: namespace emptydir-1321 deletion completed in 6.142088914s

• [SLOW TEST:14.502 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
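The emptyDir test above runs a mount-test container that creates a file with mode 0644 on the tmpfs-backed volume, then verifies the permissions and content. A stand-alone sketch of the same check, with a temporary directory standing in for the emptyDir mount (Python for brevity; the content string is illustrative, not the container's exact output):

```python
import os
import stat
import tempfile

mount = tempfile.mkdtemp(prefix="emptydir-")   # stands in for the emptyDir mount
path = os.path.join(mount, "test-file")

# Create the file as the test container does, then force the 0644 mode the
# test asserts on (the process umask may mask bits at creation time).
fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
os.write(fd, b"mount-tester new file\n")
os.close(fd)
os.chmod(path, 0o644)

mode = stat.S_IMODE(os.stat(path).st_mode)     # permission bits only
content = open(path, "rb").read()
```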
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:47:51.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-db6k
STEP: Creating a pod to test atomic-volume-subpath
Feb 10 13:47:51.233: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-db6k" in namespace "subpath-6260" to be "success or failure"
Feb 10 13:47:51.245: INFO: Pod "pod-subpath-test-configmap-db6k": Phase="Pending", Reason="", readiness=false. Elapsed: 11.987498ms
Feb 10 13:47:53.250: INFO: Pod "pod-subpath-test-configmap-db6k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017086275s
Feb 10 13:47:55.279: INFO: Pod "pod-subpath-test-configmap-db6k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04638573s
Feb 10 13:47:57.288: INFO: Pod "pod-subpath-test-configmap-db6k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054595299s
Feb 10 13:47:59.295: INFO: Pod "pod-subpath-test-configmap-db6k": Phase="Running", Reason="", readiness=true. Elapsed: 8.06201744s
Feb 10 13:48:01.304: INFO: Pod "pod-subpath-test-configmap-db6k": Phase="Running", Reason="", readiness=true. Elapsed: 10.071282918s
Feb 10 13:48:03.313: INFO: Pod "pod-subpath-test-configmap-db6k": Phase="Running", Reason="", readiness=true. Elapsed: 12.079526348s
Feb 10 13:48:05.320: INFO: Pod "pod-subpath-test-configmap-db6k": Phase="Running", Reason="", readiness=true. Elapsed: 14.086844035s
Feb 10 13:48:07.331: INFO: Pod "pod-subpath-test-configmap-db6k": Phase="Running", Reason="", readiness=true. Elapsed: 16.097732359s
Feb 10 13:48:09.340: INFO: Pod "pod-subpath-test-configmap-db6k": Phase="Running", Reason="", readiness=true. Elapsed: 18.106539937s
Feb 10 13:48:11.350: INFO: Pod "pod-subpath-test-configmap-db6k": Phase="Running", Reason="", readiness=true. Elapsed: 20.11712596s
Feb 10 13:48:13.359: INFO: Pod "pod-subpath-test-configmap-db6k": Phase="Running", Reason="", readiness=true. Elapsed: 22.125926821s
Feb 10 13:48:15.371: INFO: Pod "pod-subpath-test-configmap-db6k": Phase="Running", Reason="", readiness=true. Elapsed: 24.137950874s
Feb 10 13:48:17.385: INFO: Pod "pod-subpath-test-configmap-db6k": Phase="Running", Reason="", readiness=true. Elapsed: 26.152007699s
Feb 10 13:48:19.394: INFO: Pod "pod-subpath-test-configmap-db6k": Phase="Running", Reason="", readiness=true. Elapsed: 28.161157729s
Feb 10 13:48:21.406: INFO: Pod "pod-subpath-test-configmap-db6k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.172721986s
STEP: Saw pod success
Feb 10 13:48:21.406: INFO: Pod "pod-subpath-test-configmap-db6k" satisfied condition "success or failure"
Feb 10 13:48:21.411: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-db6k container test-container-subpath-configmap-db6k: 
STEP: delete the pod
Feb 10 13:48:21.476: INFO: Waiting for pod pod-subpath-test-configmap-db6k to disappear
Feb 10 13:48:21.482: INFO: Pod pod-subpath-test-configmap-db6k no longer exists
STEP: Deleting pod pod-subpath-test-configmap-db6k
Feb 10 13:48:21.482: INFO: Deleting pod "pod-subpath-test-configmap-db6k" in namespace "subpath-6260"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:48:21.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6260" for this suite.
Feb 10 13:48:27.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:48:27.738: INFO: namespace subpath-6260 deletion completed in 6.235642743s

• [SLOW TEST:36.622 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
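ConfigMap and secret volumes are maintained by the kubelet's atomic writer, which is what the "Atomic writer volumes" test above exercises: updated content is written to a new file and swapped into place with a rename, so a consumer never observes a half-written file. A minimal sketch of that write-then-rename pattern (Python for brevity; the file name and contents are hypothetical):

```python
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    """Readers of `path` see the old or the new bytes in full, never a mix."""
    directory = os.path.dirname(os.path.abspath(path))
    # Temp file in the same directory, so the rename stays on one filesystem.
    fd, tmp = tempfile.mkstemp(dir=directory)
    with os.fdopen(fd, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, path)  # atomic swap on POSIX

target = os.path.join(tempfile.mkdtemp(), "mount-volume1")
atomic_write(target, b"configmap-value-0")
atomic_write(target, b"configmap-value-1")  # update, as the subpath test rotates content
result = open(target, "rb").read()
```

The kubelet's real implementation swaps a whole timestamped directory behind a symlink rather than a single file, but the atomicity argument is the same rename trick.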
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:48:27.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-526a51f6-3ba2-40d8-a0fc-9327ba7af90d
STEP: Creating a pod to test consume secrets
Feb 10 13:48:27.879: INFO: Waiting up to 5m0s for pod "pod-secrets-74bc313b-22c0-4b2f-bf91-f435d24cac99" in namespace "secrets-3550" to be "success or failure"
Feb 10 13:48:27.886: INFO: Pod "pod-secrets-74bc313b-22c0-4b2f-bf91-f435d24cac99": Phase="Pending", Reason="", readiness=false. Elapsed: 6.361551ms
Feb 10 13:48:29.896: INFO: Pod "pod-secrets-74bc313b-22c0-4b2f-bf91-f435d24cac99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016770278s
Feb 10 13:48:31.909: INFO: Pod "pod-secrets-74bc313b-22c0-4b2f-bf91-f435d24cac99": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029799431s
Feb 10 13:48:33.928: INFO: Pod "pod-secrets-74bc313b-22c0-4b2f-bf91-f435d24cac99": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048320329s
Feb 10 13:48:35.937: INFO: Pod "pod-secrets-74bc313b-22c0-4b2f-bf91-f435d24cac99": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05771041s
Feb 10 13:48:37.948: INFO: Pod "pod-secrets-74bc313b-22c0-4b2f-bf91-f435d24cac99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068440952s
STEP: Saw pod success
Feb 10 13:48:37.948: INFO: Pod "pod-secrets-74bc313b-22c0-4b2f-bf91-f435d24cac99" satisfied condition "success or failure"
Feb 10 13:48:37.952: INFO: Trying to get logs from node iruya-node pod pod-secrets-74bc313b-22c0-4b2f-bf91-f435d24cac99 container secret-volume-test: 
STEP: delete the pod
Feb 10 13:48:38.008: INFO: Waiting for pod pod-secrets-74bc313b-22c0-4b2f-bf91-f435d24cac99 to disappear
Feb 10 13:48:38.016: INFO: Pod pod-secrets-74bc313b-22c0-4b2f-bf91-f435d24cac99 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:48:38.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3550" for this suite.
Feb 10 13:48:44.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:48:44.222: INFO: namespace secrets-3550 deletion completed in 6.193356445s

• [SLOW TEST:16.483 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
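Secret `data` values are stored base64-encoded in the API object and materialized as decoded files in the volume, which is what the secret-volume-test container above reads back. A sketch of that decode step (Python for brevity; the key and value are hypothetical stand-ins):

```python
import base64

# Hypothetical Secret manifest fragment: `data` values are base64-encoded.
secret_data = {"data-1": base64.b64encode(b"value-1").decode()}

# What the kubelet materializes at e.g. /etc/secret-volume/data-1:
files = {name: base64.b64decode(b64) for name, b64 in secret_data.items()}
```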
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:48:44.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6873
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-6873
STEP: Creating statefulset with conflicting port in namespace statefulset-6873
STEP: Waiting until pod test-pod starts running in namespace statefulset-6873
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-6873
Feb 10 13:48:56.377: INFO: Observed stateful pod in namespace: statefulset-6873, name: ss-0, uid: cc1f9857-4808-4c7a-9783-b634d0b869fa, status phase: Pending. Waiting for statefulset controller to delete.
Feb 10 13:48:56.496: INFO: Observed stateful pod in namespace: statefulset-6873, name: ss-0, uid: cc1f9857-4808-4c7a-9783-b634d0b869fa, status phase: Failed. Waiting for statefulset controller to delete.
Feb 10 13:48:56.518: INFO: Observed stateful pod in namespace: statefulset-6873, name: ss-0, uid: cc1f9857-4808-4c7a-9783-b634d0b869fa, status phase: Failed. Waiting for statefulset controller to delete.
Feb 10 13:48:56.533: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6873
STEP: Removing pod with conflicting port in namespace statefulset-6873
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-6873 and is in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 10 13:49:04.678: INFO: Deleting all statefulset in ns statefulset-6873
Feb 10 13:49:04.683: INFO: Scaling statefulset ss to 0
Feb 10 13:49:24.713: INFO: Waiting for statefulset status.replicas updated to 0
Feb 10 13:49:24.720: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:49:24.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6873" for this suite.
Feb 10 13:49:32.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:49:32.956: INFO: namespace statefulset-6873 deletion completed in 8.141201998s

• [SLOW TEST:48.733 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:49:32.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 10 13:49:33.085: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f78bbb77-2441-46b2-b069-3cfe7759cc0d" in namespace "projected-547" to be "success or failure"
Feb 10 13:49:33.135: INFO: Pod "downwardapi-volume-f78bbb77-2441-46b2-b069-3cfe7759cc0d": Phase="Pending", Reason="", readiness=false. Elapsed: 49.773786ms
Feb 10 13:49:35.142: INFO: Pod "downwardapi-volume-f78bbb77-2441-46b2-b069-3cfe7759cc0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056427503s
Feb 10 13:49:37.157: INFO: Pod "downwardapi-volume-f78bbb77-2441-46b2-b069-3cfe7759cc0d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071835422s
Feb 10 13:49:39.165: INFO: Pod "downwardapi-volume-f78bbb77-2441-46b2-b069-3cfe7759cc0d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080089812s
Feb 10 13:49:41.172: INFO: Pod "downwardapi-volume-f78bbb77-2441-46b2-b069-3cfe7759cc0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.086461151s
STEP: Saw pod success
Feb 10 13:49:41.172: INFO: Pod "downwardapi-volume-f78bbb77-2441-46b2-b069-3cfe7759cc0d" satisfied condition "success or failure"
Feb 10 13:49:41.176: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f78bbb77-2441-46b2-b069-3cfe7759cc0d container client-container: 
STEP: delete the pod
Feb 10 13:49:41.320: INFO: Waiting for pod downwardapi-volume-f78bbb77-2441-46b2-b069-3cfe7759cc0d to disappear
Feb 10 13:49:41.333: INFO: Pod downwardapi-volume-f78bbb77-2441-46b2-b069-3cfe7759cc0d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:49:41.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-547" for this suite.
Feb 10 13:49:47.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:49:47.496: INFO: namespace projected-547 deletion completed in 6.158910408s

• [SLOW TEST:14.540 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:49:47.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 10 13:49:47.617: INFO: Waiting up to 5m0s for pod "downward-api-c425615d-c785-42dc-ae4d-03e2f3fd1e0b" in namespace "downward-api-524" to be "success or failure"
Feb 10 13:49:47.639: INFO: Pod "downward-api-c425615d-c785-42dc-ae4d-03e2f3fd1e0b": Phase="Pending", Reason="", readiness=false. Elapsed: 21.660848ms
Feb 10 13:49:49.657: INFO: Pod "downward-api-c425615d-c785-42dc-ae4d-03e2f3fd1e0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039820888s
Feb 10 13:49:51.766: INFO: Pod "downward-api-c425615d-c785-42dc-ae4d-03e2f3fd1e0b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149075431s
Feb 10 13:49:53.782: INFO: Pod "downward-api-c425615d-c785-42dc-ae4d-03e2f3fd1e0b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.164845954s
Feb 10 13:49:55.796: INFO: Pod "downward-api-c425615d-c785-42dc-ae4d-03e2f3fd1e0b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.179292989s
Feb 10 13:49:57.811: INFO: Pod "downward-api-c425615d-c785-42dc-ae4d-03e2f3fd1e0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.193749903s
STEP: Saw pod success
Feb 10 13:49:57.811: INFO: Pod "downward-api-c425615d-c785-42dc-ae4d-03e2f3fd1e0b" satisfied condition "success or failure"
Feb 10 13:49:57.815: INFO: Trying to get logs from node iruya-node pod downward-api-c425615d-c785-42dc-ae4d-03e2f3fd1e0b container dapi-container: 
STEP: delete the pod
Feb 10 13:49:58.062: INFO: Waiting for pod downward-api-c425615d-c785-42dc-ae4d-03e2f3fd1e0b to disappear
Feb 10 13:49:58.082: INFO: Pod downward-api-c425615d-c785-42dc-ae4d-03e2f3fd1e0b no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:49:58.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-524" for this suite.
Feb 10 13:50:04.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:50:04.221: INFO: namespace downward-api-524 deletion completed in 6.133389695s

• [SLOW TEST:16.726 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
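The Downward API test above injects the pod's name, namespace, and IP into the container as environment variables via `fieldRef` (`metadata.name`, `metadata.namespace`, `status.podIP`); the dapi-container then simply echoes them. A sketch of the consuming side, with the kubelet's injection simulated by setting the variables by hand (Python for brevity; the IP value is hypothetical):

```python
import os

# In the cluster these are injected via fieldRef; here we set them ourselves
# to stand in for the kubelet. Pod name and namespace are taken from the log
# above; the IP is a made-up example.
os.environ.update({
    "POD_NAME": "downward-api-c425615d-c785-42dc-ae4d-03e2f3fd1e0b",
    "POD_NAMESPACE": "downward-api-524",
    "POD_IP": "10.44.0.1",
})

report = {k: os.environ[k] for k in ("POD_NAME", "POD_NAMESPACE", "POD_IP")}
```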
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:50:04.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb 10 13:50:04.353: INFO: Pod name pod-release: Found 0 pods out of 1
Feb 10 13:50:09.366: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:50:09.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2825" for this suite.
Feb 10 13:50:17.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:50:17.674: INFO: namespace replication-controller-2825 deletion completed in 8.221435509s

• [SLOW TEST:13.452 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
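The ReplicationController test above relies on equality-based label selection: a pod belongs to the controller only while every selector key/value pair appears in the pod's labels, so patching the label releases the pod. A sketch of that matching rule (Python for brevity; the replacement label value is hypothetical):

```python
def selector_matches(selector: dict, labels: dict) -> bool:
    """An equality-based selector matches iff every selector pair is present in the labels."""
    return all(labels.get(k) == v for k, v in selector.items())

selector = {"name": "pod-release"}          # the RC's selector, as in the test above
pod_labels = {"name": "pod-release"}
owned_before = selector_matches(selector, pod_labels)

pod_labels["name"] = "not-matching"         # the test patches the pod's label
owned_after = selector_matches(selector, pod_labels)
```

Once the selector no longer matches, the controller stops counting the pod toward its replicas and orphans it, which is the "release" the spec name refers to.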
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:50:17.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-e8a384b7-afec-4ae1-9eaf-c970989c7f58
STEP: Creating a pod to test consume configMaps
Feb 10 13:50:17.931: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3685cc82-6e86-4147-b244-4aa51c0317a2" in namespace "projected-1314" to be "success or failure"
Feb 10 13:50:17.940: INFO: Pod "pod-projected-configmaps-3685cc82-6e86-4147-b244-4aa51c0317a2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.968962ms
Feb 10 13:50:19.960: INFO: Pod "pod-projected-configmaps-3685cc82-6e86-4147-b244-4aa51c0317a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029033407s
Feb 10 13:50:22.020: INFO: Pod "pod-projected-configmaps-3685cc82-6e86-4147-b244-4aa51c0317a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089303638s
Feb 10 13:50:24.039: INFO: Pod "pod-projected-configmaps-3685cc82-6e86-4147-b244-4aa51c0317a2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107768834s
Feb 10 13:50:26.049: INFO: Pod "pod-projected-configmaps-3685cc82-6e86-4147-b244-4aa51c0317a2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117583365s
Feb 10 13:50:28.071: INFO: Pod "pod-projected-configmaps-3685cc82-6e86-4147-b244-4aa51c0317a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.139767512s
STEP: Saw pod success
Feb 10 13:50:28.071: INFO: Pod "pod-projected-configmaps-3685cc82-6e86-4147-b244-4aa51c0317a2" satisfied condition "success or failure"
Feb 10 13:50:28.075: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-3685cc82-6e86-4147-b244-4aa51c0317a2 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 10 13:50:28.321: INFO: Waiting for pod pod-projected-configmaps-3685cc82-6e86-4147-b244-4aa51c0317a2 to disappear
Feb 10 13:50:28.335: INFO: Pod pod-projected-configmaps-3685cc82-6e86-4147-b244-4aa51c0317a2 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:50:28.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1314" for this suite.
Feb 10 13:50:34.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:50:34.669: INFO: namespace projected-1314 deletion completed in 6.326461201s

• [SLOW TEST:16.995 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:50:34.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:50:42.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6183" for this suite.
Feb 10 13:51:25.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:51:25.113: INFO: namespace kubelet-test-6183 deletion completed in 42.134856997s

• [SLOW TEST:50.444 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
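
The kubelet logging check above boils down to running a pod whose container writes a line to stdout and then reading it back with `kubectl logs`. A minimal sketch of such a pod (the names here are illustrative, not the ones the framework generates):

```yaml
# Hypothetical reproduction of the "print the output to logs" check.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo 'Hello from the busybox pod'"]
```

Once the pod completes, `kubectl logs busybox-logs-demo` should return the echoed line, which is essentially what the test asserts.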
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:51:25.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Feb 10 13:51:25.194: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix693986505/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:51:25.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9259" for this suite.
Feb 10 13:51:31.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:51:31.480: INFO: namespace kubectl-9259 deletion completed in 6.194560965s

• [SLOW TEST:6.367 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:51:31.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-bf97ce14-6365-451a-881c-4ec700ed9e77
STEP: Creating a pod to test consume configMaps
Feb 10 13:51:31.664: INFO: Waiting up to 5m0s for pod "pod-configmaps-25c1e990-7ad0-46a3-abe0-6367bef60b04" in namespace "configmap-4497" to be "success or failure"
Feb 10 13:51:31.688: INFO: Pod "pod-configmaps-25c1e990-7ad0-46a3-abe0-6367bef60b04": Phase="Pending", Reason="", readiness=false. Elapsed: 24.154529ms
Feb 10 13:51:33.701: INFO: Pod "pod-configmaps-25c1e990-7ad0-46a3-abe0-6367bef60b04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036847483s
Feb 10 13:51:35.750: INFO: Pod "pod-configmaps-25c1e990-7ad0-46a3-abe0-6367bef60b04": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085984976s
Feb 10 13:51:37.760: INFO: Pod "pod-configmaps-25c1e990-7ad0-46a3-abe0-6367bef60b04": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095577022s
Feb 10 13:51:39.773: INFO: Pod "pod-configmaps-25c1e990-7ad0-46a3-abe0-6367bef60b04": Phase="Pending", Reason="", readiness=false. Elapsed: 8.108523344s
Feb 10 13:51:41.784: INFO: Pod "pod-configmaps-25c1e990-7ad0-46a3-abe0-6367bef60b04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.119902666s
STEP: Saw pod success
Feb 10 13:51:41.784: INFO: Pod "pod-configmaps-25c1e990-7ad0-46a3-abe0-6367bef60b04" satisfied condition "success or failure"
Feb 10 13:51:41.793: INFO: Trying to get logs from node iruya-node pod pod-configmaps-25c1e990-7ad0-46a3-abe0-6367bef60b04 container configmap-volume-test: 
STEP: delete the pod
Feb 10 13:51:41.917: INFO: Waiting for pod pod-configmaps-25c1e990-7ad0-46a3-abe0-6367bef60b04 to disappear
Feb 10 13:51:41.933: INFO: Pod pod-configmaps-25c1e990-7ad0-46a3-abe0-6367bef60b04 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:51:41.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4497" for this suite.
Feb 10 13:51:48.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:51:48.140: INFO: namespace configmap-4497 deletion completed in 6.196191371s

• [SLOW TEST:16.659 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
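
The "volume with mappings" wording refers to the `items` field of a configMap volume source, which remaps individual keys to chosen file paths instead of projecting every key under its own name. A hedged sketch of the scenario (resource names and paths are invented):

```yaml
# Illustrative only: maps key "data-1" to the file "path/to/data-2"
# inside the mounted volume, mirroring what the e2e test exercises.
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-map
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mapping-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["cat", "/etc/cfg/path/to/data-2"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    configMap:
      name: demo-map
      items:
      - key: data-1
        path: path/to/data-2
```

The container reads the remapped path and exits, matching the test's "success or failure" pattern of a short-lived pod that succeeds when the file content is correct.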
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:51:48.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-1226
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 10 13:51:48.216: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 10 13:52:18.417: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-1226 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 10 13:52:18.417: INFO: >>> kubeConfig: /root/.kube/config
I0210 13:52:18.529879       8 log.go:172] (0xc00216b550) (0xc002b87360) Create stream
I0210 13:52:18.530008       8 log.go:172] (0xc00216b550) (0xc002b87360) Stream added, broadcasting: 1
I0210 13:52:18.545132       8 log.go:172] (0xc00216b550) Reply frame received for 1
I0210 13:52:18.545244       8 log.go:172] (0xc00216b550) (0xc0003a7860) Create stream
I0210 13:52:18.545271       8 log.go:172] (0xc00216b550) (0xc0003a7860) Stream added, broadcasting: 3
I0210 13:52:18.547450       8 log.go:172] (0xc00216b550) Reply frame received for 3
I0210 13:52:18.547481       8 log.go:172] (0xc00216b550) (0xc0003a7a40) Create stream
I0210 13:52:18.547492       8 log.go:172] (0xc00216b550) (0xc0003a7a40) Stream added, broadcasting: 5
I0210 13:52:18.549602       8 log.go:172] (0xc00216b550) Reply frame received for 5
I0210 13:52:18.848137       8 log.go:172] (0xc00216b550) Data frame received for 3
I0210 13:52:18.848164       8 log.go:172] (0xc0003a7860) (3) Data frame handling
I0210 13:52:18.848185       8 log.go:172] (0xc0003a7860) (3) Data frame sent
I0210 13:52:19.016813       8 log.go:172] (0xc00216b550) (0xc0003a7860) Stream removed, broadcasting: 3
I0210 13:52:19.016960       8 log.go:172] (0xc00216b550) Data frame received for 1
I0210 13:52:19.016977       8 log.go:172] (0xc002b87360) (1) Data frame handling
I0210 13:52:19.016988       8 log.go:172] (0xc00216b550) (0xc0003a7a40) Stream removed, broadcasting: 5
I0210 13:52:19.017061       8 log.go:172] (0xc002b87360) (1) Data frame sent
I0210 13:52:19.017077       8 log.go:172] (0xc00216b550) (0xc002b87360) Stream removed, broadcasting: 1
I0210 13:52:19.017115       8 log.go:172] (0xc00216b550) Go away received
I0210 13:52:19.017211       8 log.go:172] (0xc00216b550) (0xc002b87360) Stream removed, broadcasting: 1
I0210 13:52:19.017238       8 log.go:172] (0xc00216b550) (0xc0003a7860) Stream removed, broadcasting: 3
I0210 13:52:19.017244       8 log.go:172] (0xc00216b550) (0xc0003a7a40) Stream removed, broadcasting: 5
Feb 10 13:52:19.017: INFO: Waiting for endpoints: map[]
Feb 10 13:52:19.027: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-1226 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 10 13:52:19.027: INFO: >>> kubeConfig: /root/.kube/config
I0210 13:52:19.079472       8 log.go:172] (0xc000d21ad0) (0xc001f4e0a0) Create stream
I0210 13:52:19.079529       8 log.go:172] (0xc000d21ad0) (0xc001f4e0a0) Stream added, broadcasting: 1
I0210 13:52:19.084267       8 log.go:172] (0xc000d21ad0) Reply frame received for 1
I0210 13:52:19.084303       8 log.go:172] (0xc000d21ad0) (0xc002996c80) Create stream
I0210 13:52:19.084313       8 log.go:172] (0xc000d21ad0) (0xc002996c80) Stream added, broadcasting: 3
I0210 13:52:19.086757       8 log.go:172] (0xc000d21ad0) Reply frame received for 3
I0210 13:52:19.086923       8 log.go:172] (0xc000d21ad0) (0xc002996d20) Create stream
I0210 13:52:19.086949       8 log.go:172] (0xc000d21ad0) (0xc002996d20) Stream added, broadcasting: 5
I0210 13:52:19.088355       8 log.go:172] (0xc000d21ad0) Reply frame received for 5
I0210 13:52:19.191976       8 log.go:172] (0xc000d21ad0) Data frame received for 3
I0210 13:52:19.192131       8 log.go:172] (0xc002996c80) (3) Data frame handling
I0210 13:52:19.192161       8 log.go:172] (0xc002996c80) (3) Data frame sent
I0210 13:52:19.306699       8 log.go:172] (0xc000d21ad0) Data frame received for 1
I0210 13:52:19.307040       8 log.go:172] (0xc000d21ad0) (0xc002996c80) Stream removed, broadcasting: 3
I0210 13:52:19.307180       8 log.go:172] (0xc001f4e0a0) (1) Data frame handling
I0210 13:52:19.307419       8 log.go:172] (0xc001f4e0a0) (1) Data frame sent
I0210 13:52:19.307506       8 log.go:172] (0xc000d21ad0) (0xc002996d20) Stream removed, broadcasting: 5
I0210 13:52:19.307671       8 log.go:172] (0xc000d21ad0) (0xc001f4e0a0) Stream removed, broadcasting: 1
I0210 13:52:19.307925       8 log.go:172] (0xc000d21ad0) Go away received
I0210 13:52:19.307980       8 log.go:172] (0xc000d21ad0) (0xc001f4e0a0) Stream removed, broadcasting: 1
I0210 13:52:19.308071       8 log.go:172] (0xc000d21ad0) (0xc002996c80) Stream removed, broadcasting: 3
I0210 13:52:19.308086       8 log.go:172] (0xc000d21ad0) (0xc002996d20) Stream removed, broadcasting: 5
Feb 10 13:52:19.308: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:52:19.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1226" for this suite.
Feb 10 13:52:43.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:52:43.537: INFO: namespace pod-network-test-1226 deletion completed in 24.219564528s

• [SLOW TEST:55.397 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:52:43.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 10 13:52:43.810: INFO: Waiting up to 5m0s for pod "pod-da82c02b-6544-4ad8-ad5d-693dee915831" in namespace "emptydir-3165" to be "success or failure"
Feb 10 13:52:43.820: INFO: Pod "pod-da82c02b-6544-4ad8-ad5d-693dee915831": Phase="Pending", Reason="", readiness=false. Elapsed: 9.891322ms
Feb 10 13:52:45.830: INFO: Pod "pod-da82c02b-6544-4ad8-ad5d-693dee915831": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020416287s
Feb 10 13:52:47.844: INFO: Pod "pod-da82c02b-6544-4ad8-ad5d-693dee915831": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034708572s
Feb 10 13:52:49.863: INFO: Pod "pod-da82c02b-6544-4ad8-ad5d-693dee915831": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053556824s
Feb 10 13:52:51.873: INFO: Pod "pod-da82c02b-6544-4ad8-ad5d-693dee915831": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062999182s
Feb 10 13:52:53.883: INFO: Pod "pod-da82c02b-6544-4ad8-ad5d-693dee915831": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073212121s
STEP: Saw pod success
Feb 10 13:52:53.883: INFO: Pod "pod-da82c02b-6544-4ad8-ad5d-693dee915831" satisfied condition "success or failure"
Feb 10 13:52:53.890: INFO: Trying to get logs from node iruya-node pod pod-da82c02b-6544-4ad8-ad5d-693dee915831 container test-container: 
STEP: delete the pod
Feb 10 13:52:53.990: INFO: Waiting for pod pod-da82c02b-6544-4ad8-ad5d-693dee915831 to disappear
Feb 10 13:52:54.002: INFO: Pod pod-da82c02b-6544-4ad8-ad5d-693dee915831 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:52:54.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3165" for this suite.
Feb 10 13:53:00.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:53:00.232: INFO: namespace emptydir-3165 deletion completed in 6.159780111s

• [SLOW TEST:16.694 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
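
The `(non-root,0666,tmpfs)` triple in the test name encodes three knobs: a non-root security context, a 0666 mode on the file written into the volume, and `medium: Memory` (tmpfs) backing for the emptyDir. Roughly, as a manifest (image, UID, and names are placeholders, not the framework's exact pod):

```yaml
# Sketch of the emptydir tmpfs scenario; not the exact pod the framework builds.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001        # the "non-root" part of the test name
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "touch /mnt/f && chmod 0666 /mnt/f && ls -l /mnt/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory       # tmpfs-backed; omit for the node's default medium
```

The `(root,0644,default)` variant that follows in the log is the same shape with the security context dropped, mode 0644, and no `medium` field.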
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:53:00.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 10 13:53:00.303: INFO: Waiting up to 5m0s for pod "pod-b15d8941-148e-4bec-abc9-7e360a9e7c73" in namespace "emptydir-1207" to be "success or failure"
Feb 10 13:53:00.310: INFO: Pod "pod-b15d8941-148e-4bec-abc9-7e360a9e7c73": Phase="Pending", Reason="", readiness=false. Elapsed: 7.203597ms
Feb 10 13:53:02.318: INFO: Pod "pod-b15d8941-148e-4bec-abc9-7e360a9e7c73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01474264s
Feb 10 13:53:04.326: INFO: Pod "pod-b15d8941-148e-4bec-abc9-7e360a9e7c73": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023165817s
Feb 10 13:53:06.334: INFO: Pod "pod-b15d8941-148e-4bec-abc9-7e360a9e7c73": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031106329s
Feb 10 13:53:08.352: INFO: Pod "pod-b15d8941-148e-4bec-abc9-7e360a9e7c73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048910715s
STEP: Saw pod success
Feb 10 13:53:08.352: INFO: Pod "pod-b15d8941-148e-4bec-abc9-7e360a9e7c73" satisfied condition "success or failure"
Feb 10 13:53:08.367: INFO: Trying to get logs from node iruya-node pod pod-b15d8941-148e-4bec-abc9-7e360a9e7c73 container test-container: 
STEP: delete the pod
Feb 10 13:53:08.505: INFO: Waiting for pod pod-b15d8941-148e-4bec-abc9-7e360a9e7c73 to disappear
Feb 10 13:53:08.511: INFO: Pod pod-b15d8941-148e-4bec-abc9-7e360a9e7c73 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:53:08.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1207" for this suite.
Feb 10 13:53:14.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:53:14.688: INFO: namespace emptydir-1207 deletion completed in 6.169049461s

• [SLOW TEST:14.456 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:53:14.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 10 13:53:14.761: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Feb 10 13:53:18.550: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:53:19.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8289" for this suite.
Feb 10 13:53:29.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:53:30.069: INFO: namespace replication-controller-8289 deletion completed in 10.247731387s

• [SLOW TEST:15.380 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
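
The quota scenario above can be reproduced by creating a ResourceQuota capping the namespace at two pods and a ReplicationController asking for three replicas; the controller then surfaces a failure condition on the RC until it is scaled down. Sketch (names mirror the log's `condition-test`, everything else is illustrative):

```yaml
# Quota allows 2 pods; the RC wants 3, so the third pod create is rejected
# and the RC reports a ReplicaFailure condition in its status.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3
  selector:
    app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: main
        image: nginx
```

Scaling the RC down to two replicas, as the test does at 13:53:18, clears the failure condition.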
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:53:30.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-3a6463c7-c293-4b7c-a679-fad2530841ac
STEP: Creating a pod to test consume configMaps
Feb 10 13:53:30.235: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5c73ecf6-f8b0-4771-859f-ccdb08d7feab" in namespace "projected-822" to be "success or failure"
Feb 10 13:53:30.261: INFO: Pod "pod-projected-configmaps-5c73ecf6-f8b0-4771-859f-ccdb08d7feab": Phase="Pending", Reason="", readiness=false. Elapsed: 25.357543ms
Feb 10 13:53:32.269: INFO: Pod "pod-projected-configmaps-5c73ecf6-f8b0-4771-859f-ccdb08d7feab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033810222s
Feb 10 13:53:34.280: INFO: Pod "pod-projected-configmaps-5c73ecf6-f8b0-4771-859f-ccdb08d7feab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044350071s
Feb 10 13:53:36.287: INFO: Pod "pod-projected-configmaps-5c73ecf6-f8b0-4771-859f-ccdb08d7feab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052098744s
Feb 10 13:53:38.299: INFO: Pod "pod-projected-configmaps-5c73ecf6-f8b0-4771-859f-ccdb08d7feab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.063903274s
STEP: Saw pod success
Feb 10 13:53:38.299: INFO: Pod "pod-projected-configmaps-5c73ecf6-f8b0-4771-859f-ccdb08d7feab" satisfied condition "success or failure"
Feb 10 13:53:38.303: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-5c73ecf6-f8b0-4771-859f-ccdb08d7feab container projected-configmap-volume-test: 
STEP: delete the pod
Feb 10 13:53:38.529: INFO: Waiting for pod pod-projected-configmaps-5c73ecf6-f8b0-4771-859f-ccdb08d7feab to disappear
Feb 10 13:53:38.571: INFO: Pod pod-projected-configmaps-5c73ecf6-f8b0-4771-859f-ccdb08d7feab no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:53:38.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-822" for this suite.
Feb 10 13:53:44.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:53:44.775: INFO: namespace projected-822 deletion completed in 6.192318988s

• [SLOW TEST:14.706 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
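
A projected configMap differs from the plain configMap volume earlier in the log only in that the configMap sits under `projected.sources`, which lets several sources (configMaps, secrets, downward API) share one mount; the "as non-root" part is again a `runAsUser` security context. Hedged sketch with invented names and UID:

```yaml
# Illustrative projected-volume pod; not the framework's generated pod.
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001
  containers:
  - name: main
    image: busybox
    command: ["cat", "/etc/proj/renamed-key"]
    volumeMounts:
    - name: proj
      mountPath: /etc/proj
  volumes:
  - name: proj
    projected:
      sources:
      - configMap:
          name: demo-map
          items:
          - key: data-1
            path: renamed-key
```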
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:53:44.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 10 13:53:44.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5202'
Feb 10 13:53:47.068: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 10 13:53:47.068: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Feb 10 13:53:47.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-5202'
Feb 10 13:53:47.302: INFO: stderr: ""
Feb 10 13:53:47.302: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:53:47.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5202" for this suite.
Feb 10 13:54:09.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:54:09.452: INFO: namespace kubectl-5202 deletion completed in 22.143771827s

• [SLOW TEST:24.676 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:54:09.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-719
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 10 13:54:09.507: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 10 13:54:43.799: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-719 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 10 13:54:43.800: INFO: >>> kubeConfig: /root/.kube/config
I0210 13:54:43.908042       8 log.go:172] (0xc0008c0210) (0xc00194c780) Create stream
I0210 13:54:43.908138       8 log.go:172] (0xc0008c0210) (0xc00194c780) Stream added, broadcasting: 1
I0210 13:54:43.936965       8 log.go:172] (0xc0008c0210) Reply frame received for 1
I0210 13:54:43.937075       8 log.go:172] (0xc0008c0210) (0xc00194c820) Create stream
I0210 13:54:43.937087       8 log.go:172] (0xc0008c0210) (0xc00194c820) Stream added, broadcasting: 3
I0210 13:54:43.942883       8 log.go:172] (0xc0008c0210) Reply frame received for 3
I0210 13:54:43.942922       8 log.go:172] (0xc0008c0210) (0xc0013c0000) Create stream
I0210 13:54:43.942935       8 log.go:172] (0xc0008c0210) (0xc0013c0000) Stream added, broadcasting: 5
I0210 13:54:43.946769       8 log.go:172] (0xc0008c0210) Reply frame received for 5
I0210 13:54:44.192784       8 log.go:172] (0xc0008c0210) Data frame received for 3
I0210 13:54:44.192853       8 log.go:172] (0xc00194c820) (3) Data frame handling
I0210 13:54:44.192890       8 log.go:172] (0xc00194c820) (3) Data frame sent
I0210 13:54:44.459179       8 log.go:172] (0xc0008c0210) Data frame received for 1
I0210 13:54:44.459248       8 log.go:172] (0xc00194c780) (1) Data frame handling
I0210 13:54:44.459275       8 log.go:172] (0xc00194c780) (1) Data frame sent
I0210 13:54:44.459300       8 log.go:172] (0xc0008c0210) (0xc00194c780) Stream removed, broadcasting: 1
I0210 13:54:44.461214       8 log.go:172] (0xc0008c0210) (0xc00194c820) Stream removed, broadcasting: 3
I0210 13:54:44.461257       8 log.go:172] (0xc0008c0210) (0xc0013c0000) Stream removed, broadcasting: 5
I0210 13:54:44.461343       8 log.go:172] (0xc0008c0210) (0xc00194c780) Stream removed, broadcasting: 1
I0210 13:54:44.461376       8 log.go:172] (0xc0008c0210) (0xc00194c820) Stream removed, broadcasting: 3
I0210 13:54:44.461405       8 log.go:172] (0xc0008c0210) Go away received
I0210 13:54:44.461455       8 log.go:172] (0xc0008c0210) (0xc0013c0000) Stream removed, broadcasting: 5
Feb 10 13:54:44.462: INFO: Waiting for endpoints: map[]
Feb 10 13:54:44.470: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-719 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 10 13:54:44.470: INFO: >>> kubeConfig: /root/.kube/config
I0210 13:54:44.543424       8 log.go:172] (0xc0008c1130) (0xc00194cf00) Create stream
I0210 13:54:44.543505       8 log.go:172] (0xc0008c1130) (0xc00194cf00) Stream added, broadcasting: 1
I0210 13:54:44.556410       8 log.go:172] (0xc0008c1130) Reply frame received for 1
I0210 13:54:44.556457       8 log.go:172] (0xc0008c1130) (0xc0013c0140) Create stream
I0210 13:54:44.556472       8 log.go:172] (0xc0008c1130) (0xc0013c0140) Stream added, broadcasting: 3
I0210 13:54:44.558497       8 log.go:172] (0xc0008c1130) Reply frame received for 3
I0210 13:54:44.558523       8 log.go:172] (0xc0008c1130) (0xc0029966e0) Create stream
I0210 13:54:44.558533       8 log.go:172] (0xc0008c1130) (0xc0029966e0) Stream added, broadcasting: 5
I0210 13:54:44.559914       8 log.go:172] (0xc0008c1130) Reply frame received for 5
I0210 13:54:44.782251       8 log.go:172] (0xc0008c1130) Data frame received for 3
I0210 13:54:44.782281       8 log.go:172] (0xc0013c0140) (3) Data frame handling
I0210 13:54:44.782305       8 log.go:172] (0xc0013c0140) (3) Data frame sent
I0210 13:54:44.963917       8 log.go:172] (0xc0008c1130) Data frame received for 1
I0210 13:54:44.964009       8 log.go:172] (0xc00194cf00) (1) Data frame handling
I0210 13:54:44.964093       8 log.go:172] (0xc00194cf00) (1) Data frame sent
I0210 13:54:44.964121       8 log.go:172] (0xc0008c1130) (0xc00194cf00) Stream removed, broadcasting: 1
I0210 13:54:44.964509       8 log.go:172] (0xc0008c1130) (0xc0029966e0) Stream removed, broadcasting: 5
I0210 13:54:44.964563       8 log.go:172] (0xc0008c1130) (0xc0013c0140) Stream removed, broadcasting: 3
I0210 13:54:44.964625       8 log.go:172] (0xc0008c1130) (0xc00194cf00) Stream removed, broadcasting: 1
I0210 13:54:44.964691       8 log.go:172] (0xc0008c1130) (0xc0013c0140) Stream removed, broadcasting: 3
I0210 13:54:44.964809       8 log.go:172] (0xc0008c1130) (0xc0029966e0) Stream removed, broadcasting: 5
Feb 10 13:54:44.965: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:54:44.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0210 13:54:44.966034       8 log.go:172] (0xc0008c1130) Go away received
STEP: Destroying namespace "pod-network-test-719" for this suite.
Feb 10 13:55:09.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:55:09.105: INFO: namespace pod-network-test-719 deletion completed in 24.131325268s

• [SLOW TEST:59.654 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
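The connectivity probe in the test above (the `ExecWithOptions` curl at 13:54:44.470) queries the test container's `/dial` endpoint with the target pod's address. A minimal sketch of how that probe URL is assembled from the parameters visible in the log line (`request`, `protocol`, `host`, `port`, `tries`) — an illustration, not the e2e framework's actual code:

```python
from urllib.parse import urlencode

def dial_url(tester_ip, tester_port, target_ip, target_port,
             protocol="udp", request="hostName", tries=1):
    """Build a /dial probe URL like the one curled from the
    host-test-container in the log above (names are illustrative)."""
    query = urlencode({
        "request": request,    # what to ask the target for (its hostname)
        "protocol": protocol,  # udp in this test variant
        "host": target_ip,     # endpoint pod IP being dialed
        "port": target_port,
        "tries": tries,
    })
    return f"http://{tester_ip}:{tester_port}/dial?{query}"
```

With the values from the log (`10.44.0.2:8080` dialing `10.44.0.1:8081`), this reproduces the URL shown in the `ExecWithOptions` command.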
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:55:09.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Feb 10 13:55:09.243: INFO: Waiting up to 5m0s for pod "pod-020c3d4a-9b3a-4e52-9d56-b03da3d3e9d6" in namespace "emptydir-2531" to be "success or failure"
Feb 10 13:55:09.249: INFO: Pod "pod-020c3d4a-9b3a-4e52-9d56-b03da3d3e9d6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.240387ms
Feb 10 13:55:11.257: INFO: Pod "pod-020c3d4a-9b3a-4e52-9d56-b03da3d3e9d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014365835s
Feb 10 13:55:13.269: INFO: Pod "pod-020c3d4a-9b3a-4e52-9d56-b03da3d3e9d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025790185s
Feb 10 13:55:15.352: INFO: Pod "pod-020c3d4a-9b3a-4e52-9d56-b03da3d3e9d6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109356803s
Feb 10 13:55:17.361: INFO: Pod "pod-020c3d4a-9b3a-4e52-9d56-b03da3d3e9d6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117809088s
Feb 10 13:55:19.377: INFO: Pod "pod-020c3d4a-9b3a-4e52-9d56-b03da3d3e9d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.133703937s
STEP: Saw pod success
Feb 10 13:55:19.377: INFO: Pod "pod-020c3d4a-9b3a-4e52-9d56-b03da3d3e9d6" satisfied condition "success or failure"
Feb 10 13:55:19.383: INFO: Trying to get logs from node iruya-node pod pod-020c3d4a-9b3a-4e52-9d56-b03da3d3e9d6 container test-container: 
STEP: delete the pod
Feb 10 13:55:19.465: INFO: Waiting for pod pod-020c3d4a-9b3a-4e52-9d56-b03da3d3e9d6 to disappear
Feb 10 13:55:19.470: INFO: Pod pod-020c3d4a-9b3a-4e52-9d56-b03da3d3e9d6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:55:19.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2531" for this suite.
Feb 10 13:55:25.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:55:25.632: INFO: namespace emptydir-2531 deletion completed in 6.155322915s

• [SLOW TEST:16.527 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
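The repeated `Waiting up to 5m0s for pod … to be "success or failure"` / `Phase="Pending" … Elapsed: …` lines above are produced by a poll-until-condition loop: check the pod phase, sleep roughly two seconds, retry until the deadline. A generic sketch of that polling pattern, assuming a plain callable condition rather than the framework's real pod getter:

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `condition` every `interval` seconds until it returns truthy
    or `timeout` seconds elapse; raises TimeoutError on expiry. Mirrors
    the Pending -> Succeeded polling visible in the log (illustrative,
    not the e2e framework's implementation)."""
    deadline = clock() + timeout
    while True:
        if condition():
            return True
        if clock() >= deadline:
            raise TimeoutError(f"condition not met within {timeout:.0f}s")
        sleep(interval)
```

The `clock` and `sleep` parameters are injected only so the loop can be exercised without real delays; the default arguments give the wall-clock behavior seen in the log.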
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:55:25.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:55:52.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1807" for this suite.
Feb 10 13:55:58.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:55:58.599: INFO: namespace namespaces-1807 deletion completed in 6.219908401s
STEP: Destroying namespace "nsdeletetest-1544" for this suite.
Feb 10 13:55:58.603: INFO: Namespace nsdeletetest-1544 was already deleted
STEP: Destroying namespace "nsdeletetest-551" for this suite.
Feb 10 13:56:04.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:56:04.768: INFO: namespace nsdeletetest-551 deletion completed in 6.164450445s

• [SLOW TEST:39.135 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:56:04.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 10 13:56:13.094: INFO: Waiting up to 5m0s for pod "client-envvars-fd156dbf-297b-453a-b041-b52391cbcad2" in namespace "pods-909" to be "success or failure"
Feb 10 13:56:13.118: INFO: Pod "client-envvars-fd156dbf-297b-453a-b041-b52391cbcad2": Phase="Pending", Reason="", readiness=false. Elapsed: 23.085487ms
Feb 10 13:56:15.124: INFO: Pod "client-envvars-fd156dbf-297b-453a-b041-b52391cbcad2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029956972s
Feb 10 13:56:17.139: INFO: Pod "client-envvars-fd156dbf-297b-453a-b041-b52391cbcad2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044576726s
Feb 10 13:56:19.148: INFO: Pod "client-envvars-fd156dbf-297b-453a-b041-b52391cbcad2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053064829s
Feb 10 13:56:21.160: INFO: Pod "client-envvars-fd156dbf-297b-453a-b041-b52391cbcad2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06526475s
STEP: Saw pod success
Feb 10 13:56:21.160: INFO: Pod "client-envvars-fd156dbf-297b-453a-b041-b52391cbcad2" satisfied condition "success or failure"
Feb 10 13:56:21.166: INFO: Trying to get logs from node iruya-node pod client-envvars-fd156dbf-297b-453a-b041-b52391cbcad2 container env3cont: 
STEP: delete the pod
Feb 10 13:56:21.359: INFO: Waiting for pod client-envvars-fd156dbf-297b-453a-b041-b52391cbcad2 to disappear
Feb 10 13:56:21.375: INFO: Pod client-envvars-fd156dbf-297b-453a-b041-b52391cbcad2 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:56:21.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-909" for this suite.
Feb 10 13:57:07.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:57:07.533: INFO: namespace pods-909 deletion completed in 46.150130579s

• [SLOW TEST:62.764 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:57:07.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-44c3b06f-c046-4757-b350-51896a51e53d
STEP: Creating configMap with name cm-test-opt-upd-cc211653-9303-490a-b5de-582c53b5f979
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-44c3b06f-c046-4757-b350-51896a51e53d
STEP: Updating configmap cm-test-opt-upd-cc211653-9303-490a-b5de-582c53b5f979
STEP: Creating configMap with name cm-test-opt-create-e74107ad-ab03-40ae-b018-65891537712e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:58:40.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8366" for this suite.
Feb 10 13:59:02.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:59:02.807: INFO: namespace projected-8366 deletion completed in 22.109376289s

• [SLOW TEST:115.274 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:59:02.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Feb 10 13:59:02.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7528'
Feb 10 13:59:03.372: INFO: stderr: ""
Feb 10 13:59:03.372: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 10 13:59:03.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7528'
Feb 10 13:59:03.583: INFO: stderr: ""
Feb 10 13:59:03.583: INFO: stdout: "update-demo-nautilus-42jxj update-demo-nautilus-7slws "
Feb 10 13:59:03.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-42jxj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7528'
Feb 10 13:59:03.712: INFO: stderr: ""
Feb 10 13:59:03.712: INFO: stdout: ""
Feb 10 13:59:03.712: INFO: update-demo-nautilus-42jxj is created but not running
Feb 10 13:59:08.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7528'
Feb 10 13:59:09.048: INFO: stderr: ""
Feb 10 13:59:09.048: INFO: stdout: "update-demo-nautilus-42jxj update-demo-nautilus-7slws "
Feb 10 13:59:09.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-42jxj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7528'
Feb 10 13:59:09.672: INFO: stderr: ""
Feb 10 13:59:09.672: INFO: stdout: ""
Feb 10 13:59:09.672: INFO: update-demo-nautilus-42jxj is created but not running
Feb 10 13:59:14.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7528'
Feb 10 13:59:14.784: INFO: stderr: ""
Feb 10 13:59:14.784: INFO: stdout: "update-demo-nautilus-42jxj update-demo-nautilus-7slws "
Feb 10 13:59:14.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-42jxj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7528'
Feb 10 13:59:14.874: INFO: stderr: ""
Feb 10 13:59:14.874: INFO: stdout: "true"
Feb 10 13:59:14.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-42jxj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7528'
Feb 10 13:59:15.012: INFO: stderr: ""
Feb 10 13:59:15.012: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 10 13:59:15.012: INFO: validating pod update-demo-nautilus-42jxj
Feb 10 13:59:15.019: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 10 13:59:15.019: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 10 13:59:15.019: INFO: update-demo-nautilus-42jxj is verified up and running
Feb 10 13:59:15.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7slws -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7528'
Feb 10 13:59:15.115: INFO: stderr: ""
Feb 10 13:59:15.115: INFO: stdout: "true"
Feb 10 13:59:15.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7slws -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7528'
Feb 10 13:59:15.590: INFO: stderr: ""
Feb 10 13:59:15.590: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 10 13:59:15.590: INFO: validating pod update-demo-nautilus-7slws
Feb 10 13:59:15.609: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 10 13:59:15.609: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 10 13:59:15.609: INFO: update-demo-nautilus-7slws is verified up and running
STEP: scaling down the replication controller
Feb 10 13:59:15.611: INFO: scanned /root for discovery docs: 
Feb 10 13:59:15.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7528'
Feb 10 13:59:17.329: INFO: stderr: ""
Feb 10 13:59:17.329: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 10 13:59:17.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7528'
Feb 10 13:59:17.520: INFO: stderr: ""
Feb 10 13:59:17.520: INFO: stdout: "update-demo-nautilus-42jxj update-demo-nautilus-7slws "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 10 13:59:22.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7528'
Feb 10 13:59:22.647: INFO: stderr: ""
Feb 10 13:59:22.647: INFO: stdout: "update-demo-nautilus-42jxj "
Feb 10 13:59:22.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-42jxj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7528'
Feb 10 13:59:22.747: INFO: stderr: ""
Feb 10 13:59:22.747: INFO: stdout: "true"
Feb 10 13:59:22.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-42jxj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7528'
Feb 10 13:59:22.838: INFO: stderr: ""
Feb 10 13:59:22.838: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 10 13:59:22.838: INFO: validating pod update-demo-nautilus-42jxj
Feb 10 13:59:22.865: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 10 13:59:22.866: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 10 13:59:22.866: INFO: update-demo-nautilus-42jxj is verified up and running
STEP: scaling up the replication controller
Feb 10 13:59:22.871: INFO: scanned /root for discovery docs: 
Feb 10 13:59:22.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7528'
Feb 10 13:59:24.091: INFO: stderr: ""
Feb 10 13:59:24.091: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 10 13:59:24.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7528'
Feb 10 13:59:24.260: INFO: stderr: ""
Feb 10 13:59:24.260: INFO: stdout: "update-demo-nautilus-42jxj update-demo-nautilus-4nr49 "
Feb 10 13:59:24.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-42jxj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7528'
Feb 10 13:59:24.344: INFO: stderr: ""
Feb 10 13:59:24.344: INFO: stdout: "true"
Feb 10 13:59:24.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-42jxj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7528'
Feb 10 13:59:24.476: INFO: stderr: ""
Feb 10 13:59:24.476: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 10 13:59:24.476: INFO: validating pod update-demo-nautilus-42jxj
Feb 10 13:59:24.483: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 10 13:59:24.483: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 10 13:59:24.483: INFO: update-demo-nautilus-42jxj is verified up and running
Feb 10 13:59:24.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4nr49 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7528'
Feb 10 13:59:24.606: INFO: stderr: ""
Feb 10 13:59:24.606: INFO: stdout: ""
Feb 10 13:59:24.606: INFO: update-demo-nautilus-4nr49 is created but not running
Feb 10 13:59:29.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7528'
Feb 10 13:59:29.780: INFO: stderr: ""
Feb 10 13:59:29.780: INFO: stdout: "update-demo-nautilus-42jxj update-demo-nautilus-4nr49 "
Feb 10 13:59:29.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-42jxj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7528'
Feb 10 13:59:29.905: INFO: stderr: ""
Feb 10 13:59:29.905: INFO: stdout: "true"
Feb 10 13:59:29.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-42jxj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7528'
Feb 10 13:59:30.040: INFO: stderr: ""
Feb 10 13:59:30.040: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 10 13:59:30.040: INFO: validating pod update-demo-nautilus-42jxj
Feb 10 13:59:30.044: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 10 13:59:30.044: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 10 13:59:30.044: INFO: update-demo-nautilus-42jxj is verified up and running
Feb 10 13:59:30.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4nr49 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7528'
Feb 10 13:59:30.125: INFO: stderr: ""
Feb 10 13:59:30.125: INFO: stdout: "true"
Feb 10 13:59:30.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4nr49 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7528'
Feb 10 13:59:30.229: INFO: stderr: ""
Feb 10 13:59:30.229: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 10 13:59:30.229: INFO: validating pod update-demo-nautilus-4nr49
Feb 10 13:59:30.245: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 10 13:59:30.245: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 10 13:59:30.245: INFO: update-demo-nautilus-4nr49 is verified up and running
STEP: using delete to clean up resources
Feb 10 13:59:30.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7528'
Feb 10 13:59:30.337: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 10 13:59:30.337: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 10 13:59:30.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7528'
Feb 10 13:59:30.465: INFO: stderr: "No resources found.\n"
Feb 10 13:59:30.465: INFO: stdout: ""
Feb 10 13:59:30.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7528 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 10 13:59:30.825: INFO: stderr: ""
Feb 10 13:59:30.825: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 13:59:30.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7528" for this suite.
Feb 10 13:59:54.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 13:59:55.022: INFO: namespace kubectl-7528 deletion completed in 24.186377702s

• [SLOW TEST:52.215 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
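The scale verification above repeatedly runs `kubectl get pods -o template --template={{range.items}}{{.metadata.name}} {{end}}` and compares the number of names in stdout against the desired replica count (the `Replicas for name=update-demo: expected=1 actual=2` step, which retries until the scale-down takes effect). A small sketch of that stdout check; the helper names are hypothetical:

```python
def pod_names(template_stdout):
    """Split the space-terminated name list the go-template emits,
    e.g. 'update-demo-nautilus-42jxj update-demo-nautilus-7slws '."""
    return template_stdout.split()

def replicas_match(template_stdout, expected):
    """True when the observed pod count equals the desired replica
    count -- the comparison behind the expected=1 actual=2 log line."""
    return len(pod_names(template_stdout)) == expected
```

When the counts disagree, the test above simply waits five seconds and re-runs the same `kubectl get pods` command, as the timestamps between 13:59:17 and 13:59:22 show.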
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 13:59:55.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Feb 10 14:00:03.134: INFO: Pod pod-hostip-b4f65ecc-7bb1-4e2b-8207-a3d2160d5ab6 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:00:03.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9683" for this suite.
Feb 10 14:00:25.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:00:25.339: INFO: namespace pods-9683 deletion completed in 22.198006685s

• [SLOW TEST:30.317 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:00:25.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:00:25.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2436" for this suite.
Feb 10 14:00:31.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:00:32.248: INFO: namespace kubelet-test-2436 deletion completed in 6.652875788s

• [SLOW TEST:6.909 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:00:32.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb 10 14:00:32.333: INFO: namespace kubectl-639
Feb 10 14:00:32.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-639'
Feb 10 14:00:32.710: INFO: stderr: ""
Feb 10 14:00:32.710: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 10 14:00:33.718: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 14:00:33.718: INFO: Found 0 / 1
Feb 10 14:00:34.716: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 14:00:34.716: INFO: Found 0 / 1
Feb 10 14:00:35.717: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 14:00:35.717: INFO: Found 0 / 1
Feb 10 14:00:36.718: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 14:00:36.719: INFO: Found 0 / 1
Feb 10 14:00:37.723: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 14:00:37.723: INFO: Found 0 / 1
Feb 10 14:00:38.726: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 14:00:38.726: INFO: Found 0 / 1
Feb 10 14:00:39.718: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 14:00:39.718: INFO: Found 1 / 1
Feb 10 14:00:39.718: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 10 14:00:39.722: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 14:00:39.723: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 10 14:00:39.723: INFO: wait on redis-master startup in kubectl-639 
Feb 10 14:00:39.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-9th85 redis-master --namespace=kubectl-639'
Feb 10 14:00:39.915: INFO: stderr: ""
Feb 10 14:00:39.915: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 10 Feb 14:00:38.838 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 10 Feb 14:00:38.838 # Server started, Redis version 3.2.12\n1:M 10 Feb 14:00:38.839 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 10 Feb 14:00:38.843 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Feb 10 14:00:39.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-639'
Feb 10 14:00:40.857: INFO: stderr: ""
Feb 10 14:00:40.857: INFO: stdout: "service/rm2 exposed\n"
Feb 10 14:00:40.871: INFO: Service rm2 in namespace kubectl-639 found.
STEP: exposing service
Feb 10 14:00:42.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-639'
Feb 10 14:00:43.149: INFO: stderr: ""
Feb 10 14:00:43.149: INFO: stdout: "service/rm3 exposed\n"
Feb 10 14:00:43.154: INFO: Service rm3 in namespace kubectl-639 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:00:45.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-639" for this suite.
Feb 10 14:01:09.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:01:09.425: INFO: namespace kubectl-639 deletion completed in 24.258216072s

• [SLOW TEST:37.177 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
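The expose test above boils down to two `kubectl expose` invocations run against the test namespace. A minimal sketch of how such command strings are assembled, mirroring the shape logged at 14:00:39 (the `kubectl_cmd` helper is hypothetical, not the framework's actual API):

```python
def kubectl_cmd(kubeconfig, namespace, *args):
    """Compose a kubectl command line in the order the e2e logs show:
    binary, --kubeconfig, subcommand + flags, then --namespace last."""
    parts = ["/usr/local/bin/kubectl", "--kubeconfig=" + kubeconfig]
    parts.extend(args)
    parts.append("--namespace=" + namespace)
    return " ".join(parts)

# Reconstructs the first expose command from the log verbatim:
cmd = kubectl_cmd("/root/.kube/config", "kubectl-639",
                  "expose", "rc", "redis-master",
                  "--name=rm2", "--port=1234", "--target-port=6379")
print(cmd)
```

Here `--port` is the port the new service listens on and `--target-port` is the container port it forwards to (6379, Redis's default), which is why the second expose can layer `rm3` on top of `rm2` with yet another service port.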
SSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:01:09.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-6462
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 10 14:01:09.481: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 10 14:01:49.635: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6462 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 10 14:01:49.635: INFO: >>> kubeConfig: /root/.kube/config
I0210 14:01:49.748784       8 log.go:172] (0xc000d20d10) (0xc002304fa0) Create stream
I0210 14:01:49.748890       8 log.go:172] (0xc000d20d10) (0xc002304fa0) Stream added, broadcasting: 1
I0210 14:01:49.759320       8 log.go:172] (0xc000d20d10) Reply frame received for 1
I0210 14:01:49.759370       8 log.go:172] (0xc000d20d10) (0xc002305040) Create stream
I0210 14:01:49.759392       8 log.go:172] (0xc000d20d10) (0xc002305040) Stream added, broadcasting: 3
I0210 14:01:49.762946       8 log.go:172] (0xc000d20d10) Reply frame received for 3
I0210 14:01:49.763024       8 log.go:172] (0xc000d20d10) (0xc0023050e0) Create stream
I0210 14:01:49.763034       8 log.go:172] (0xc000d20d10) (0xc0023050e0) Stream added, broadcasting: 5
I0210 14:01:49.768172       8 log.go:172] (0xc000d20d10) Reply frame received for 5
I0210 14:01:49.936909       8 log.go:172] (0xc000d20d10) Data frame received for 3
I0210 14:01:49.936950       8 log.go:172] (0xc002305040) (3) Data frame handling
I0210 14:01:49.936968       8 log.go:172] (0xc002305040) (3) Data frame sent
I0210 14:01:50.112919       8 log.go:172] (0xc000d20d10) (0xc002305040) Stream removed, broadcasting: 3
I0210 14:01:50.113187       8 log.go:172] (0xc000d20d10) (0xc0023050e0) Stream removed, broadcasting: 5
I0210 14:01:50.113444       8 log.go:172] (0xc000d20d10) Data frame received for 1
I0210 14:01:50.113690       8 log.go:172] (0xc002304fa0) (1) Data frame handling
I0210 14:01:50.113851       8 log.go:172] (0xc002304fa0) (1) Data frame sent
I0210 14:01:50.113917       8 log.go:172] (0xc000d20d10) (0xc002304fa0) Stream removed, broadcasting: 1
I0210 14:01:50.114280       8 log.go:172] (0xc000d20d10) (0xc002304fa0) Stream removed, broadcasting: 1
I0210 14:01:50.114337       8 log.go:172] (0xc000d20d10) (0xc002305040) Stream removed, broadcasting: 3
I0210 14:01:50.114345       8 log.go:172] (0xc000d20d10) (0xc0023050e0) Stream removed, broadcasting: 5
Feb 10 14:01:50.115: INFO: Found all expected endpoints: [netserver-0]
Feb 10 14:01:50.130: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6462 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 10 14:01:50.130: INFO: >>> kubeConfig: /root/.kube/config
I0210 14:01:50.196480       8 log.go:172] (0xc0008c0b00) (0xc002257f40) Create stream
I0210 14:01:50.196625       8 log.go:172] (0xc0008c0b00) (0xc002257f40) Stream added, broadcasting: 1
I0210 14:01:50.207058       8 log.go:172] (0xc0008c0b00) Reply frame received for 1
I0210 14:01:50.207127       8 log.go:172] (0xc0008c0b00) (0xc001fe8460) Create stream
I0210 14:01:50.207141       8 log.go:172] (0xc0008c0b00) (0xc001fe8460) Stream added, broadcasting: 3
I0210 14:01:50.211794       8 log.go:172] (0xc0008c0b00) Reply frame received for 3
I0210 14:01:50.211823       8 log.go:172] (0xc0008c0b00) (0xc0029e5e00) Create stream
I0210 14:01:50.211832       8 log.go:172] (0xc0008c0b00) (0xc0029e5e00) Stream added, broadcasting: 5
I0210 14:01:50.213533       8 log.go:172] (0xc0008c0b00) Reply frame received for 5
I0210 14:01:50.318935       8 log.go:172] (0xc0008c0b00) Data frame received for 3
I0210 14:01:50.319019       8 log.go:172] (0xc001fe8460) (3) Data frame handling
I0210 14:01:50.319088       8 log.go:172] (0xc001fe8460) (3) Data frame sent
I0210 14:01:50.460923       8 log.go:172] (0xc0008c0b00) Data frame received for 1
I0210 14:01:50.461064       8 log.go:172] (0xc0008c0b00) (0xc001fe8460) Stream removed, broadcasting: 3
I0210 14:01:50.461193       8 log.go:172] (0xc002257f40) (1) Data frame handling
I0210 14:01:50.461242       8 log.go:172] (0xc002257f40) (1) Data frame sent
I0210 14:01:50.461259       8 log.go:172] (0xc0008c0b00) (0xc002257f40) Stream removed, broadcasting: 1
I0210 14:01:50.461625       8 log.go:172] (0xc0008c0b00) (0xc0029e5e00) Stream removed, broadcasting: 5
I0210 14:01:50.461686       8 log.go:172] (0xc0008c0b00) (0xc002257f40) Stream removed, broadcasting: 1
I0210 14:01:50.461701       8 log.go:172] (0xc0008c0b00) (0xc001fe8460) Stream removed, broadcasting: 3
I0210 14:01:50.461717       8 log.go:172] (0xc0008c0b00) (0xc0029e5e00) Stream removed, broadcasting: 5
I0210 14:01:50.462362       8 log.go:172] (0xc0008c0b00) Go away received
Feb 10 14:01:50.462: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:01:50.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6462" for this suite.
Feb 10 14:02:14.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:02:14.669: INFO: namespace pod-network-test-6462 deletion completed in 24.193676388s

• [SLOW TEST:65.244 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
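The node-pod connectivity check above works by exec'ing into the `host-test-container-pod` and curling each netserver's `/hostName` endpoint, then confirming every expected endpoint answered. A sketch of how that probe command is put together, matching the `ExecWithOptions` lines logged at 14:01:49 (the helper name and defaults are illustrative assumptions):

```python
def hostname_probe(ip, port=8080, max_time=15, connect_timeout=1):
    """Build the curl pipeline the check runs inside the exec pod:
    fetch /hostName and drop blank lines from the response."""
    return ("curl -g -q -s --max-time {mt} --connect-timeout {ct} "
            "http://{ip}:{port}/hostName | grep -v '^\\s*$'"
            .format(mt=max_time, ct=connect_timeout, ip=ip, port=port))

# The probe against netserver-0's pod IP from the log:
print(hostname_probe("10.32.0.4"))
```

The `--connect-timeout 1` keeps an unreachable endpoint from stalling the whole attempt, while `--max-time 15` bounds the full transfer; the test retries until it has seen every netserver's hostname.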
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:02:14.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 10 14:02:14.786: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:02:30.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-187" for this suite.
Feb 10 14:02:52.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:02:53.063: INFO: namespace init-container-187 deletion completed in 22.146690571s

• [SLOW TEST:38.393 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:02:53.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-cp6pl in namespace proxy-4683
I0210 14:02:53.365752       8 runners.go:180] Created replication controller with name: proxy-service-cp6pl, namespace: proxy-4683, replica count: 1
I0210 14:02:54.417282       8 runners.go:180] proxy-service-cp6pl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0210 14:02:55.417762       8 runners.go:180] proxy-service-cp6pl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0210 14:02:56.418259       8 runners.go:180] proxy-service-cp6pl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0210 14:02:57.418572       8 runners.go:180] proxy-service-cp6pl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0210 14:02:58.418908       8 runners.go:180] proxy-service-cp6pl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0210 14:02:59.419618       8 runners.go:180] proxy-service-cp6pl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0210 14:03:00.420004       8 runners.go:180] proxy-service-cp6pl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0210 14:03:01.420524       8 runners.go:180] proxy-service-cp6pl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0210 14:03:02.421340       8 runners.go:180] proxy-service-cp6pl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0210 14:03:03.421881       8 runners.go:180] proxy-service-cp6pl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0210 14:03:04.422539       8 runners.go:180] proxy-service-cp6pl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0210 14:03:05.423841       8 runners.go:180] proxy-service-cp6pl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0210 14:03:06.424224       8 runners.go:180] proxy-service-cp6pl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0210 14:03:07.424647       8 runners.go:180] proxy-service-cp6pl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0210 14:03:08.424981       8 runners.go:180] proxy-service-cp6pl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0210 14:03:09.425333       8 runners.go:180] proxy-service-cp6pl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0210 14:03:10.425692       8 runners.go:180] proxy-service-cp6pl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0210 14:03:11.426077       8 runners.go:180] proxy-service-cp6pl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0210 14:03:12.426414       8 runners.go:180] proxy-service-cp6pl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 10 14:03:12.444: INFO: setup took 19.154447929s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Feb 10 14:03:12.518: INFO: (0) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:162/proxy/: bar (200; 73.034765ms)
Feb 10 14:03:12.518: INFO: (0) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:1080/proxy/: ... (200; 73.44897ms)
Feb 10 14:03:12.518: INFO: (0) /api/v1/namespaces/proxy-4683/services/http:proxy-service-cp6pl:portname1/proxy/: foo (200; 73.283473ms)
Feb 10 14:03:12.519: INFO: (0) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:160/proxy/: foo (200; 73.947761ms)
Feb 10 14:03:12.519: INFO: (0) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:160/proxy/: foo (200; 73.790579ms)
Feb 10 14:03:12.519: INFO: (0) /api/v1/namespaces/proxy-4683/services/proxy-service-cp6pl:portname1/proxy/: foo (200; 74.271622ms)
Feb 10 14:03:12.520: INFO: (0) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb/proxy/: test (200; 74.339923ms)
Feb 10 14:03:12.520: INFO: (0) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:1080/proxy/: test<... (200; 74.789571ms)
Feb 10 14:03:12.525: INFO: (0) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:162/proxy/: bar (200; 80.332177ms)
Feb 10 14:03:12.526: INFO: (0) /api/v1/namespaces/proxy-4683/services/http:proxy-service-cp6pl:portname2/proxy/: bar (200; 80.517768ms)
Feb 10 14:03:12.526: INFO: (0) /api/v1/namespaces/proxy-4683/services/proxy-service-cp6pl:portname2/proxy/: bar (200; 80.697265ms)
Feb 10 14:03:12.536: INFO: (0) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:462/proxy/: tls qux (200; 91.040692ms)
Feb 10 14:03:12.537: INFO: (0) /api/v1/namespaces/proxy-4683/services/https:proxy-service-cp6pl:tlsportname2/proxy/: tls qux (200; 91.264987ms)
Feb 10 14:03:12.537: INFO: (0) /api/v1/namespaces/proxy-4683/services/https:proxy-service-cp6pl:tlsportname1/proxy/: tls baz (200; 92.396508ms)
Feb 10 14:03:12.537: INFO: (0) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:460/proxy/: tls baz (200; 92.250144ms)
Feb 10 14:03:12.541: INFO: (0) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:443/proxy/: test (200; 9.791698ms)
Feb 10 14:03:12.551: INFO: (1) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:160/proxy/: foo (200; 9.708053ms)
Feb 10 14:03:12.557: INFO: (1) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:162/proxy/: bar (200; 16.296591ms)
Feb 10 14:03:12.557: INFO: (1) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:1080/proxy/: test<... (200; 16.175986ms)
Feb 10 14:03:12.558: INFO: (1) /api/v1/namespaces/proxy-4683/services/http:proxy-service-cp6pl:portname1/proxy/: foo (200; 16.36397ms)
Feb 10 14:03:12.558: INFO: (1) /api/v1/namespaces/proxy-4683/services/https:proxy-service-cp6pl:tlsportname1/proxy/: tls baz (200; 16.862334ms)
Feb 10 14:03:12.559: INFO: (1) /api/v1/namespaces/proxy-4683/services/http:proxy-service-cp6pl:portname2/proxy/: bar (200; 17.488265ms)
Feb 10 14:03:12.559: INFO: (1) /api/v1/namespaces/proxy-4683/services/proxy-service-cp6pl:portname2/proxy/: bar (200; 18.116102ms)
Feb 10 14:03:12.560: INFO: (1) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:160/proxy/: foo (200; 18.813057ms)
Feb 10 14:03:12.567: INFO: (1) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:162/proxy/: bar (200; 26.203134ms)
Feb 10 14:03:12.568: INFO: (1) /api/v1/namespaces/proxy-4683/services/https:proxy-service-cp6pl:tlsportname2/proxy/: tls qux (200; 26.382812ms)
Feb 10 14:03:12.568: INFO: (1) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:1080/proxy/: ... (200; 26.787148ms)
Feb 10 14:03:12.568: INFO: (1) /api/v1/namespaces/proxy-4683/services/proxy-service-cp6pl:portname1/proxy/: foo (200; 27.088086ms)
Feb 10 14:03:12.569: INFO: (1) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:462/proxy/: tls qux (200; 27.481975ms)
Feb 10 14:03:12.569: INFO: (1) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:443/proxy/: ... (200; 21.574306ms)
Feb 10 14:03:12.595: INFO: (2) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb/proxy/: test (200; 24.29879ms)
Feb 10 14:03:12.595: INFO: (2) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:460/proxy/: tls baz (200; 24.366855ms)
Feb 10 14:03:12.595: INFO: (2) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:160/proxy/: foo (200; 25.063371ms)
Feb 10 14:03:12.595: INFO: (2) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:1080/proxy/: test<... (200; 25.611965ms)
Feb 10 14:03:12.601: INFO: (2) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:443/proxy/: test (200; 7.48507ms)
Feb 10 14:03:12.619: INFO: (3) /api/v1/namespaces/proxy-4683/services/http:proxy-service-cp6pl:portname2/proxy/: bar (200; 12.410903ms)
Feb 10 14:03:12.620: INFO: (3) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:162/proxy/: bar (200; 12.637055ms)
Feb 10 14:03:12.620: INFO: (3) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:160/proxy/: foo (200; 12.651474ms)
Feb 10 14:03:12.621: INFO: (3) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:1080/proxy/: test<... (200; 13.599374ms)
Feb 10 14:03:12.621: INFO: (3) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:460/proxy/: tls baz (200; 13.717664ms)
Feb 10 14:03:12.621: INFO: (3) /api/v1/namespaces/proxy-4683/services/proxy-service-cp6pl:portname2/proxy/: bar (200; 13.837704ms)
Feb 10 14:03:12.622: INFO: (3) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:443/proxy/: ... (200; 16.72803ms)
Feb 10 14:03:12.626: INFO: (3) /api/v1/namespaces/proxy-4683/services/http:proxy-service-cp6pl:portname1/proxy/: foo (200; 18.641232ms)
Feb 10 14:03:12.626: INFO: (3) /api/v1/namespaces/proxy-4683/services/https:proxy-service-cp6pl:tlsportname1/proxy/: tls baz (200; 18.938381ms)
Feb 10 14:03:12.628: INFO: (3) /api/v1/namespaces/proxy-4683/services/proxy-service-cp6pl:portname1/proxy/: foo (200; 21.018221ms)
Feb 10 14:03:12.628: INFO: (3) /api/v1/namespaces/proxy-4683/services/https:proxy-service-cp6pl:tlsportname2/proxy/: tls qux (200; 21.326678ms)
Feb 10 14:03:12.677: INFO: (4) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb/proxy/: test (200; 48.257743ms)
Feb 10 14:03:12.677: INFO: (4) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:1080/proxy/: ... (200; 48.128672ms)
Feb 10 14:03:12.677: INFO: (4) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:160/proxy/: foo (200; 47.791992ms)
Feb 10 14:03:12.677: INFO: (4) /api/v1/namespaces/proxy-4683/services/http:proxy-service-cp6pl:portname2/proxy/: bar (200; 48.284217ms)
Feb 10 14:03:12.677: INFO: (4) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:162/proxy/: bar (200; 48.524774ms)
Feb 10 14:03:12.687: INFO: (4) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:462/proxy/: tls qux (200; 58.634937ms)
Feb 10 14:03:12.688: INFO: (4) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:162/proxy/: bar (200; 58.708439ms)
Feb 10 14:03:12.688: INFO: (4) /api/v1/namespaces/proxy-4683/services/http:proxy-service-cp6pl:portname1/proxy/: foo (200; 58.164133ms)
Feb 10 14:03:12.688: INFO: (4) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:160/proxy/: foo (200; 59.3936ms)
Feb 10 14:03:12.688: INFO: (4) /api/v1/namespaces/proxy-4683/services/https:proxy-service-cp6pl:tlsportname1/proxy/: tls baz (200; 59.66513ms)
Feb 10 14:03:12.688: INFO: (4) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:1080/proxy/: test<... (200; 59.732255ms)
Feb 10 14:03:12.689: INFO: (4) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:460/proxy/: tls baz (200; 59.634298ms)
Feb 10 14:03:12.689: INFO: (4) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:443/proxy/: test (200; 26.598003ms)
Feb 10 14:03:12.745: INFO: (5) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:162/proxy/: bar (200; 27.031349ms)
Feb 10 14:03:12.746: INFO: (5) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:160/proxy/: foo (200; 28.267535ms)
Feb 10 14:03:12.746: INFO: (5) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:443/proxy/: ... (200; 33.241338ms)
Feb 10 14:03:12.752: INFO: (5) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:1080/proxy/: test<... (200; 33.635737ms)
Feb 10 14:03:12.767: INFO: (6) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:1080/proxy/: test<... (200; 15.142839ms)
Feb 10 14:03:12.770: INFO: (6) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb/proxy/: test (200; 17.493065ms)
Feb 10 14:03:12.770: INFO: (6) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:162/proxy/: bar (200; 17.401979ms)
Feb 10 14:03:12.771: INFO: (6) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:160/proxy/: foo (200; 18.65713ms)
Feb 10 14:03:12.771: INFO: (6) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:460/proxy/: tls baz (200; 18.692361ms)
Feb 10 14:03:12.771: INFO: (6) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:1080/proxy/: ... (200; 18.672398ms)
Feb 10 14:03:12.771: INFO: (6) /api/v1/namespaces/proxy-4683/services/proxy-service-cp6pl:portname1/proxy/: foo (200; 18.871789ms)
Feb 10 14:03:12.771: INFO: (6) /api/v1/namespaces/proxy-4683/services/proxy-service-cp6pl:portname2/proxy/: bar (200; 18.808593ms)
Feb 10 14:03:12.771: INFO: (6) /api/v1/namespaces/proxy-4683/services/http:proxy-service-cp6pl:portname1/proxy/: foo (200; 18.82868ms)
Feb 10 14:03:12.771: INFO: (6) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:160/proxy/: foo (200; 18.929382ms)
Feb 10 14:03:12.771: INFO: (6) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:162/proxy/: bar (200; 18.899853ms)
Feb 10 14:03:12.772: INFO: (6) /api/v1/namespaces/proxy-4683/services/https:proxy-service-cp6pl:tlsportname1/proxy/: tls baz (200; 19.385173ms)
Feb 10 14:03:12.772: INFO: (6) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:462/proxy/: tls qux (200; 20.107309ms)
Feb 10 14:03:12.773: INFO: (6) /api/v1/namespaces/proxy-4683/services/https:proxy-service-cp6pl:tlsportname2/proxy/: tls qux (200; 20.453223ms)
Feb 10 14:03:12.775: INFO: (6) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:443/proxy/: test (200; 24.656775ms)
Feb 10 14:03:12.801: INFO: (7) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:160/proxy/: foo (200; 24.546789ms)
Feb 10 14:03:12.801: INFO: (7) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:462/proxy/: tls qux (200; 24.49519ms)
Feb 10 14:03:12.801: INFO: (7) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:443/proxy/: ... (200; 24.627648ms)
Feb 10 14:03:12.801: INFO: (7) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:460/proxy/: tls baz (200; 24.830832ms)
Feb 10 14:03:12.801: INFO: (7) /api/v1/namespaces/proxy-4683/services/http:proxy-service-cp6pl:portname2/proxy/: bar (200; 24.728805ms)
Feb 10 14:03:12.801: INFO: (7) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:1080/proxy/: test<... (200; 24.546238ms)
Feb 10 14:03:12.801: INFO: (7) /api/v1/namespaces/proxy-4683/services/https:proxy-service-cp6pl:tlsportname1/proxy/: tls baz (200; 24.678941ms)
Feb 10 14:03:12.801: INFO: (7) /api/v1/namespaces/proxy-4683/services/proxy-service-cp6pl:portname2/proxy/: bar (200; 25.228886ms)
Feb 10 14:03:12.809: INFO: (8) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:160/proxy/: foo (200; 8.048903ms)
Feb 10 14:03:12.810: INFO: (8) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:443/proxy/: test<... (200; 9.09196ms)
Feb 10 14:03:12.811: INFO: (8) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:162/proxy/: bar (200; 9.250317ms)
Feb 10 14:03:12.811: INFO: (8) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:460/proxy/: tls baz (200; 9.199969ms)
Feb 10 14:03:12.811: INFO: (8) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb/proxy/: test (200; 9.222882ms)
Feb 10 14:03:12.811: INFO: (8) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:162/proxy/: bar (200; 9.225176ms)
Feb 10 14:03:12.811: INFO: (8) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:160/proxy/: foo (200; 9.311064ms)
Feb 10 14:03:12.811: INFO: (8) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:1080/proxy/: ... (200; 9.538127ms)
Feb 10 14:03:12.811: INFO: (8) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:462/proxy/: tls qux (200; 9.733317ms)
Feb 10 14:03:12.814: INFO: (8) /api/v1/namespaces/proxy-4683/services/https:proxy-service-cp6pl:tlsportname2/proxy/: tls qux (200; 12.653771ms)
Feb 10 14:03:12.814: INFO: (8) /api/v1/namespaces/proxy-4683/services/proxy-service-cp6pl:portname2/proxy/: bar (200; 12.762064ms)
Feb 10 14:03:12.814: INFO: (8) /api/v1/namespaces/proxy-4683/services/http:proxy-service-cp6pl:portname2/proxy/: bar (200; 12.898847ms)
Feb 10 14:03:12.814: INFO: (8) /api/v1/namespaces/proxy-4683/services/proxy-service-cp6pl:portname1/proxy/: foo (200; 13.005832ms)
Feb 10 14:03:12.814: INFO: (8) /api/v1/namespaces/proxy-4683/services/http:proxy-service-cp6pl:portname1/proxy/: foo (200; 13.007399ms)
Feb 10 14:03:12.815: INFO: (8) /api/v1/namespaces/proxy-4683/services/https:proxy-service-cp6pl:tlsportname1/proxy/: tls baz (200; 13.138139ms)
Feb 10 14:03:12.830: INFO: (9) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:160/proxy/: foo (200; 14.96904ms)
Feb 10 14:03:12.830: INFO: (9) /api/v1/namespaces/proxy-4683/services/proxy-service-cp6pl:portname2/proxy/: bar (200; 15.066521ms)
Feb 10 14:03:12.830: INFO: (9) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:162/proxy/: bar (200; 14.99508ms)
Feb 10 14:03:12.830: INFO: (9) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:162/proxy/: bar (200; 14.981019ms)
Feb 10 14:03:12.830: INFO: (9) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:160/proxy/: foo (200; 15.047957ms)
Feb 10 14:03:12.830: INFO: (9) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:1080/proxy/: ... (200; 15.121132ms)
Feb 10 14:03:12.830: INFO: (9) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:1080/proxy/: test<... (200; 15.183605ms)
Feb 10 14:03:12.830: INFO: (9) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:460/proxy/: tls baz (200; 15.209014ms)
Feb 10 14:03:12.830: INFO: (9) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:443/proxy/: test (200; 16.05737ms)
Feb 10 14:03:12.831: INFO: (9) /api/v1/namespaces/proxy-4683/services/http:proxy-service-cp6pl:portname1/proxy/: foo (200; 16.052665ms)
Feb 10 14:03:12.832: INFO: (9) /api/v1/namespaces/proxy-4683/services/proxy-service-cp6pl:portname1/proxy/: foo (200; 17.535342ms)
Feb 10 14:03:12.832: INFO: (9) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:462/proxy/: tls qux (200; 17.642549ms)
Feb 10 14:03:12.833: INFO: (9) /api/v1/namespaces/proxy-4683/services/https:proxy-service-cp6pl:tlsportname1/proxy/: tls baz (200; 18.366385ms)
Feb 10 14:03:12.833: INFO: (9) /api/v1/namespaces/proxy-4683/services/http:proxy-service-cp6pl:portname2/proxy/: bar (200; 18.784695ms)
Feb 10 14:03:12.835: INFO: (9) /api/v1/namespaces/proxy-4683/services/https:proxy-service-cp6pl:tlsportname2/proxy/: tls qux (200; 20.327373ms)
Feb 10 14:03:12.853: INFO: (10) /api/v1/namespaces/proxy-4683/services/http:proxy-service-cp6pl:portname1/proxy/: foo (200; 17.572012ms)
Feb 10 14:03:12.853: INFO: (10) /api/v1/namespaces/proxy-4683/services/http:proxy-service-cp6pl:portname2/proxy/: bar (200; 17.550358ms)
Feb 10 14:03:12.853: INFO: (10) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:162/proxy/: bar (200; 17.463981ms)
Feb 10 14:03:12.853: INFO: (10) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:160/proxy/: foo (200; 17.783965ms)
Feb 10 14:03:12.853: INFO: (10) /api/v1/namespaces/proxy-4683/services/https:proxy-service-cp6pl:tlsportname1/proxy/: tls baz (200; 17.672542ms)
Feb 10 14:03:12.853: INFO: (10) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:1080/proxy/: test<... (200; 17.725648ms)
Feb 10 14:03:12.853: INFO: (10) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:1080/proxy/: ... (200; 17.749282ms)
Feb 10 14:03:12.857: INFO: (10) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb/proxy/: test (200; 21.55369ms)
Feb 10 14:03:12.857: INFO: (10) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:462/proxy/: tls qux (200; 21.518633ms)
Feb 10 14:03:12.857: INFO: (10) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:443/proxy/: ... (200; 25.277857ms)
Feb 10 14:03:12.904: INFO: (11) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb/proxy/: test (200; 25.232121ms)
Feb 10 14:03:12.904: INFO: (11) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:1080/proxy/: test<... (200; 25.335702ms)
Feb 10 14:03:12.904: INFO: (11) /api/v1/namespaces/proxy-4683/services/proxy-service-cp6pl:portname1/proxy/: foo (200; 25.393971ms)
Feb 10 14:03:12.908: INFO: (11) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:160/proxy/: foo (200; 29.421494ms)
Feb 10 14:03:12.908: INFO: (11) /api/v1/namespaces/proxy-4683/services/http:proxy-service-cp6pl:portname1/proxy/: foo (200; 29.47242ms)
Feb 10 14:03:12.908: INFO: (11) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:460/proxy/: tls baz (200; 29.719857ms)
Feb 10 14:03:12.908: INFO: (11) /api/v1/namespaces/proxy-4683/services/https:proxy-service-cp6pl:tlsportname1/proxy/: tls baz (200; 29.729146ms)
Feb 10 14:03:12.908: INFO: (11) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:160/proxy/: foo (200; 29.607445ms)
Feb 10 14:03:12.908: INFO: (11) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:443/proxy/: test<... (200; 20.01943ms)
Feb 10 14:03:12.940: INFO: (12) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb/proxy/: test (200; 20.374002ms)
Feb 10 14:03:12.940: INFO: (12) /api/v1/namespaces/proxy-4683/services/https:proxy-service-cp6pl:tlsportname1/proxy/: tls baz (200; 20.637805ms)
Feb 10 14:03:12.940: INFO: (12) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:462/proxy/: tls qux (200; 20.735636ms)
Feb 10 14:03:12.940: INFO: (12) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:443/proxy/: ... (200; 27.482264ms)
Feb 10 14:03:12.947: INFO: (12) /api/v1/namespaces/proxy-4683/services/proxy-service-cp6pl:portname2/proxy/: bar (200; 27.355024ms)
Feb 10 14:03:12.948: INFO: (12) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:162/proxy/: bar (200; 27.878564ms)
Feb 10 14:03:12.948: INFO: (12) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:160/proxy/: foo (200; 28.448712ms)
Feb 10 14:03:12.949: INFO: (12) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:160/proxy/: foo (200; 28.561768ms)
Feb 10 14:03:12.960: INFO: (13) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:443/proxy/: test (200; 11.245442ms)
Feb 10 14:03:12.961: INFO: (13) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:1080/proxy/: ... (200; 11.681617ms)
Feb 10 14:03:12.961: INFO: (13) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:162/proxy/: bar (200; 11.856633ms)
Feb 10 14:03:12.961: INFO: (13) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:1080/proxy/: test<... (200; 11.640421ms)
Feb 10 14:03:12.961: INFO: (13) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:160/proxy/: foo (200; 11.752743ms)
Feb 10 14:03:12.963: INFO: (13) /api/v1/namespaces/proxy-4683/services/proxy-service-cp6pl:portname2/proxy/: bar (200; 14.245492ms)
Feb 10 14:03:12.964: INFO: (13) /api/v1/namespaces/proxy-4683/services/https:proxy-service-cp6pl:tlsportname1/proxy/: tls baz (200; 14.904298ms)
Feb 10 14:03:12.964: INFO: (13) /api/v1/namespaces/proxy-4683/services/https:proxy-service-cp6pl:tlsportname2/proxy/: tls qux (200; 15.489524ms)
Feb 10 14:03:12.965: INFO: (13) /api/v1/namespaces/proxy-4683/services/proxy-service-cp6pl:portname1/proxy/: foo (200; 15.720951ms)
Feb 10 14:03:12.965: INFO: (13) /api/v1/namespaces/proxy-4683/services/http:proxy-service-cp6pl:portname1/proxy/: foo (200; 16.017594ms)
Feb 10 14:03:12.965: INFO: (13) /api/v1/namespaces/proxy-4683/services/http:proxy-service-cp6pl:portname2/proxy/: bar (200; 15.980146ms)
Feb 10 14:03:12.965: INFO: (13) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:462/proxy/: tls qux (200; 16.226486ms)
Feb 10 14:03:12.979: INFO: (14) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:1080/proxy/: ... (200; 14.353861ms)
Feb 10 14:03:12.979: INFO: (14) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:462/proxy/: tls qux (200; 14.481516ms)
Feb 10 14:03:12.980: INFO: (14) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:162/proxy/: bar (200; 14.529568ms)
Feb 10 14:03:12.980: INFO: (14) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb/proxy/: test (200; 14.559012ms)
Feb 10 14:03:12.980: INFO: (14) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:160/proxy/: foo (200; 14.715056ms)
Feb 10 14:03:12.984: INFO: (14) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:162/proxy/: bar (200; 18.905561ms)
Feb 10 14:03:12.984: INFO: (14) /api/v1/namespaces/proxy-4683/services/proxy-service-cp6pl:portname2/proxy/: bar (200; 19.378367ms)
Feb 10 14:03:12.984: INFO: (14) /api/v1/namespaces/proxy-4683/services/http:proxy-service-cp6pl:portname1/proxy/: foo (200; 19.31953ms)
Feb 10 14:03:12.985: INFO: (14) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:443/proxy/: test<... (200; 21.936803ms)
Feb 10 14:03:12.987: INFO: (14) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:160/proxy/: foo (200; 22.094586ms)
Feb 10 14:03:12.987: INFO: (14) /api/v1/namespaces/proxy-4683/services/https:proxy-service-cp6pl:tlsportname1/proxy/: tls baz (200; 22.141876ms)
Feb 10 14:03:12.987: INFO: (14) /api/v1/namespaces/proxy-4683/services/https:proxy-service-cp6pl:tlsportname2/proxy/: tls qux (200; 22.075545ms)
Feb 10 14:03:12.987: INFO: (14) /api/v1/namespaces/proxy-4683/services/http:proxy-service-cp6pl:portname2/proxy/: bar (200; 22.100184ms)
Feb 10 14:03:12.987: INFO: (14) /api/v1/namespaces/proxy-4683/services/proxy-service-cp6pl:portname1/proxy/: foo (200; 22.069373ms)
Feb 10 14:03:12.987: INFO: (14) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:460/proxy/: tls baz (200; 22.005944ms)
Feb 10 14:03:13.001: INFO: (15) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:162/proxy/: bar (200; 13.672271ms)
Feb 10 14:03:13.004: INFO: (15) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:1080/proxy/: ... (200; 16.692775ms)
Feb 10 14:03:13.007: INFO: (15) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:1080/proxy/: test<... (200; 18.901646ms)
Feb 10 14:03:13.007: INFO: (15) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb/proxy/: test (200; 19.293967ms)
Feb 10 14:03:13.007: INFO: (15) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:162/proxy/: bar (200; 19.468089ms)
Feb 10 14:03:13.007: INFO: (15) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:160/proxy/: foo (200; 19.589766ms)
Feb 10 14:03:13.008: INFO: (15) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:462/proxy/: tls qux (200; 20.422902ms)
Feb 10 14:03:13.009: INFO: (15) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:443/proxy/: ... (200; 12.626636ms)
Feb 10 14:03:13.027: INFO: (16) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:1080/proxy/: test<... (200; 12.582978ms)
Feb 10 14:03:13.027: INFO: (16) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:160/proxy/: foo (200; 12.499921ms)
Feb 10 14:03:13.028: INFO: (16) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:160/proxy/: foo (200; 13.716383ms)
Feb 10 14:03:13.029: INFO: (16) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb/proxy/: test (200; 13.801952ms)
Feb 10 14:03:13.029: INFO: (16) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:460/proxy/: tls baz (200; 14.369934ms)
Feb 10 14:03:13.030: INFO: (16) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:443/proxy/: test (200; 11.593355ms)
Feb 10 14:03:13.047: INFO: (17) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:460/proxy/: tls baz (200; 11.779555ms)
Feb 10 14:03:13.047: INFO: (17) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:1080/proxy/: ... (200; 11.839268ms)
Feb 10 14:03:13.047: INFO: (17) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:162/proxy/: bar (200; 12.161509ms)
Feb 10 14:03:13.047: INFO: (17) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:160/proxy/: foo (200; 12.200488ms)
Feb 10 14:03:13.050: INFO: (17) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:443/proxy/: test<... (200; 15.655771ms)
Feb 10 14:03:13.053: INFO: (17) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:162/proxy/: bar (200; 18.585495ms)
Feb 10 14:03:13.053: INFO: (17) /api/v1/namespaces/proxy-4683/services/proxy-service-cp6pl:portname1/proxy/: foo (200; 18.482244ms)
Feb 10 14:03:13.053: INFO: (17) /api/v1/namespaces/proxy-4683/services/https:proxy-service-cp6pl:tlsportname2/proxy/: tls qux (200; 18.632007ms)
Feb 10 14:03:13.053: INFO: (17) /api/v1/namespaces/proxy-4683/services/https:proxy-service-cp6pl:tlsportname1/proxy/: tls baz (200; 18.439242ms)
Feb 10 14:03:13.053: INFO: (17) /api/v1/namespaces/proxy-4683/services/proxy-service-cp6pl:portname2/proxy/: bar (200; 18.65313ms)
Feb 10 14:03:13.054: INFO: (17) /api/v1/namespaces/proxy-4683/services/http:proxy-service-cp6pl:portname1/proxy/: foo (200; 18.785958ms)
Feb 10 14:03:13.056: INFO: (17) /api/v1/namespaces/proxy-4683/services/http:proxy-service-cp6pl:portname2/proxy/: bar (200; 21.17078ms)
Feb 10 14:03:13.065: INFO: (18) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb/proxy/: test (200; 9.352082ms)
Feb 10 14:03:13.067: INFO: (18) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:460/proxy/: tls baz (200; 10.491674ms)
Feb 10 14:03:13.071: INFO: (18) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:1080/proxy/: ... (200; 14.530297ms)
Feb 10 14:03:13.071: INFO: (18) /api/v1/namespaces/proxy-4683/services/proxy-service-cp6pl:portname2/proxy/: bar (200; 14.81522ms)
Feb 10 14:03:13.072: INFO: (18) /api/v1/namespaces/proxy-4683/services/https:proxy-service-cp6pl:tlsportname1/proxy/: tls baz (200; 15.601428ms)
Feb 10 14:03:13.072: INFO: (18) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:162/proxy/: bar (200; 15.755917ms)
Feb 10 14:03:13.072: INFO: (18) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:462/proxy/: tls qux (200; 15.755197ms)
Feb 10 14:03:13.072: INFO: (18) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:443/proxy/: test<... (200; 15.830909ms)
Feb 10 14:03:13.072: INFO: (18) /api/v1/namespaces/proxy-4683/services/http:proxy-service-cp6pl:portname1/proxy/: foo (200; 15.983024ms)
Feb 10 14:03:13.072: INFO: (18) /api/v1/namespaces/proxy-4683/services/https:proxy-service-cp6pl:tlsportname2/proxy/: tls qux (200; 15.913525ms)
Feb 10 14:03:13.072: INFO: (18) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:160/proxy/: foo (200; 16.066209ms)
Feb 10 14:03:13.074: INFO: (18) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:162/proxy/: bar (200; 17.527825ms)
Feb 10 14:03:13.074: INFO: (18) /api/v1/namespaces/proxy-4683/services/http:proxy-service-cp6pl:portname2/proxy/: bar (200; 18.051635ms)
Feb 10 14:03:13.074: INFO: (18) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:160/proxy/: foo (200; 18.011629ms)
Feb 10 14:03:13.074: INFO: (18) /api/v1/namespaces/proxy-4683/services/proxy-service-cp6pl:portname1/proxy/: foo (200; 18.079157ms)
Feb 10 14:03:13.082: INFO: (19) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb/proxy/: test (200; 7.23586ms)
Feb 10 14:03:13.082: INFO: (19) /api/v1/namespaces/proxy-4683/pods/http:proxy-service-cp6pl-klwwb:1080/proxy/: ... (200; 7.559553ms)
Feb 10 14:03:13.082: INFO: (19) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:443/proxy/: test<... (200; 8.548124ms)
Feb 10 14:03:13.083: INFO: (19) /api/v1/namespaces/proxy-4683/pods/proxy-service-cp6pl-klwwb:160/proxy/: foo (200; 8.762026ms)
Feb 10 14:03:13.083: INFO: (19) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:462/proxy/: tls qux (200; 8.893142ms)
Feb 10 14:03:13.083: INFO: (19) /api/v1/namespaces/proxy-4683/pods/https:proxy-service-cp6pl-klwwb:460/proxy/: tls baz (200; 8.792161ms)
Feb 10 14:03:13.084: INFO: (19) /api/v1/namespaces/proxy-4683/services/http:proxy-service-cp6pl:portname1/proxy/: foo (200; 9.713055ms)
Feb 10 14:03:13.084: INFO: (19) /api/v1/namespaces/proxy-4683/services/https:proxy-service-cp6pl:tlsportname2/proxy/: tls qux (200; 9.907135ms)
Feb 10 14:03:13.085: INFO: (19) /api/v1/namespaces/proxy-4683/services/proxy-service-cp6pl:portname1/proxy/: foo (200; 10.827823ms)
Feb 10 14:03:13.085: INFO: (19) /api/v1/namespaces/proxy-4683/services/proxy-service-cp6pl:portname2/proxy/: bar (200; 10.873501ms)
Feb 10 14:03:13.086: INFO: (19) /api/v1/namespaces/proxy-4683/services/https:proxy-service-cp6pl:tlsportname1/proxy/: tls baz (200; 10.958152ms)
Feb 10 14:03:13.086: INFO: (19) /api/v1/namespaces/proxy-4683/services/http:proxy-service-cp6pl:portname2/proxy/: bar (200; 11.183252ms)
STEP: deleting ReplicationController proxy-service-cp6pl in namespace proxy-4683, will wait for the garbage collector to delete the pods
Feb 10 14:03:13.147: INFO: Deleting ReplicationController proxy-service-cp6pl took: 8.314117ms
Feb 10 14:03:13.447: INFO: Terminating ReplicationController proxy-service-cp6pl pods took: 300.592969ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:03:26.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4683" for this suite.
Feb 10 14:03:32.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:03:32.906: INFO: namespace proxy-4683 deletion completed in 6.246986204s

• [SLOW TEST:39.843 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:03:32.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 10 14:03:33.151: INFO: Waiting up to 5m0s for pod "downward-api-2afac6ee-2320-42c8-bab1-83c1ad8c4083" in namespace "downward-api-9883" to be "success or failure"
Feb 10 14:03:33.166: INFO: Pod "downward-api-2afac6ee-2320-42c8-bab1-83c1ad8c4083": Phase="Pending", Reason="", readiness=false. Elapsed: 15.310301ms
Feb 10 14:03:35.177: INFO: Pod "downward-api-2afac6ee-2320-42c8-bab1-83c1ad8c4083": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026328611s
Feb 10 14:03:37.185: INFO: Pod "downward-api-2afac6ee-2320-42c8-bab1-83c1ad8c4083": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033987181s
Feb 10 14:03:39.232: INFO: Pod "downward-api-2afac6ee-2320-42c8-bab1-83c1ad8c4083": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081427797s
Feb 10 14:03:41.249: INFO: Pod "downward-api-2afac6ee-2320-42c8-bab1-83c1ad8c4083": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09784566s
Feb 10 14:03:43.258: INFO: Pod "downward-api-2afac6ee-2320-42c8-bab1-83c1ad8c4083": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.107216058s
STEP: Saw pod success
Feb 10 14:03:43.258: INFO: Pod "downward-api-2afac6ee-2320-42c8-bab1-83c1ad8c4083" satisfied condition "success or failure"
Feb 10 14:03:43.263: INFO: Trying to get logs from node iruya-node pod downward-api-2afac6ee-2320-42c8-bab1-83c1ad8c4083 container dapi-container: 
STEP: delete the pod
Feb 10 14:03:43.519: INFO: Waiting for pod downward-api-2afac6ee-2320-42c8-bab1-83c1ad8c4083 to disappear
Feb 10 14:03:43.531: INFO: Pod downward-api-2afac6ee-2320-42c8-bab1-83c1ad8c4083 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:03:43.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9883" for this suite.
Feb 10 14:03:49.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:03:49.865: INFO: namespace downward-api-9883 deletion completed in 6.219784216s

• [SLOW TEST:16.957 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:03:49.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-3485/secret-test-59c7940c-4544-4db6-a4f2-c9b3f1fc3e89
STEP: Creating a pod to test consume secrets
Feb 10 14:03:49.990: INFO: Waiting up to 5m0s for pod "pod-configmaps-3f1db6c8-deaa-47b7-b8fd-44db8f2b0d55" in namespace "secrets-3485" to be "success or failure"
Feb 10 14:03:50.005: INFO: Pod "pod-configmaps-3f1db6c8-deaa-47b7-b8fd-44db8f2b0d55": Phase="Pending", Reason="", readiness=false. Elapsed: 14.484156ms
Feb 10 14:03:52.013: INFO: Pod "pod-configmaps-3f1db6c8-deaa-47b7-b8fd-44db8f2b0d55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022916044s
Feb 10 14:03:54.025: INFO: Pod "pod-configmaps-3f1db6c8-deaa-47b7-b8fd-44db8f2b0d55": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034891746s
Feb 10 14:03:56.032: INFO: Pod "pod-configmaps-3f1db6c8-deaa-47b7-b8fd-44db8f2b0d55": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042185673s
Feb 10 14:03:58.041: INFO: Pod "pod-configmaps-3f1db6c8-deaa-47b7-b8fd-44db8f2b0d55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051085156s
STEP: Saw pod success
Feb 10 14:03:58.041: INFO: Pod "pod-configmaps-3f1db6c8-deaa-47b7-b8fd-44db8f2b0d55" satisfied condition "success or failure"
Feb 10 14:03:58.045: INFO: Trying to get logs from node iruya-node pod pod-configmaps-3f1db6c8-deaa-47b7-b8fd-44db8f2b0d55 container env-test: 
STEP: delete the pod
Feb 10 14:03:58.096: INFO: Waiting for pod pod-configmaps-3f1db6c8-deaa-47b7-b8fd-44db8f2b0d55 to disappear
Feb 10 14:03:58.099: INFO: Pod pod-configmaps-3f1db6c8-deaa-47b7-b8fd-44db8f2b0d55 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:03:58.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3485" for this suite.
Feb 10 14:04:04.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:04:04.255: INFO: namespace secrets-3485 deletion completed in 6.150950513s

• [SLOW TEST:14.388 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:04:04.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 10 14:04:04.389: INFO: Waiting up to 5m0s for pod "downward-api-4abc892e-b0b6-4f6a-8dff-1e09d4755729" in namespace "downward-api-5936" to be "success or failure"
Feb 10 14:04:04.396: INFO: Pod "downward-api-4abc892e-b0b6-4f6a-8dff-1e09d4755729": Phase="Pending", Reason="", readiness=false. Elapsed: 6.914246ms
Feb 10 14:04:06.415: INFO: Pod "downward-api-4abc892e-b0b6-4f6a-8dff-1e09d4755729": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02579094s
Feb 10 14:04:08.425: INFO: Pod "downward-api-4abc892e-b0b6-4f6a-8dff-1e09d4755729": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035537008s
Feb 10 14:04:10.433: INFO: Pod "downward-api-4abc892e-b0b6-4f6a-8dff-1e09d4755729": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043812389s
Feb 10 14:04:12.439: INFO: Pod "downward-api-4abc892e-b0b6-4f6a-8dff-1e09d4755729": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.050001095s
STEP: Saw pod success
Feb 10 14:04:12.440: INFO: Pod "downward-api-4abc892e-b0b6-4f6a-8dff-1e09d4755729" satisfied condition "success or failure"
Feb 10 14:04:12.445: INFO: Trying to get logs from node iruya-node pod downward-api-4abc892e-b0b6-4f6a-8dff-1e09d4755729 container dapi-container: 
STEP: delete the pod
Feb 10 14:04:12.532: INFO: Waiting for pod downward-api-4abc892e-b0b6-4f6a-8dff-1e09d4755729 to disappear
Feb 10 14:04:12.548: INFO: Pod downward-api-4abc892e-b0b6-4f6a-8dff-1e09d4755729 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:04:12.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5936" for this suite.
Feb 10 14:04:18.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:04:18.729: INFO: namespace downward-api-5936 deletion completed in 6.17565031s

• [SLOW TEST:14.473 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:04:18.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Feb 10 14:04:18.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1655'
Feb 10 14:04:21.916: INFO: stderr: ""
Feb 10 14:04:21.916: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 10 14:04:21.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1655'
Feb 10 14:04:22.181: INFO: stderr: ""
Feb 10 14:04:22.182: INFO: stdout: "update-demo-nautilus-kjl69 update-demo-nautilus-zwkjn "
Feb 10 14:04:22.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kjl69 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1655'
Feb 10 14:04:22.328: INFO: stderr: ""
Feb 10 14:04:22.328: INFO: stdout: ""
Feb 10 14:04:22.328: INFO: update-demo-nautilus-kjl69 is created but not running
Feb 10 14:04:27.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1655'
Feb 10 14:04:28.831: INFO: stderr: ""
Feb 10 14:04:28.831: INFO: stdout: "update-demo-nautilus-kjl69 update-demo-nautilus-zwkjn "
Feb 10 14:04:28.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kjl69 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1655'
Feb 10 14:04:29.504: INFO: stderr: ""
Feb 10 14:04:29.504: INFO: stdout: ""
Feb 10 14:04:29.504: INFO: update-demo-nautilus-kjl69 is created but not running
Feb 10 14:04:34.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1655'
Feb 10 14:04:34.668: INFO: stderr: ""
Feb 10 14:04:34.668: INFO: stdout: "update-demo-nautilus-kjl69 update-demo-nautilus-zwkjn "
Feb 10 14:04:34.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kjl69 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1655'
Feb 10 14:04:34.778: INFO: stderr: ""
Feb 10 14:04:34.778: INFO: stdout: "true"
Feb 10 14:04:34.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kjl69 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1655'
Feb 10 14:04:34.863: INFO: stderr: ""
Feb 10 14:04:34.863: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 10 14:04:34.863: INFO: validating pod update-demo-nautilus-kjl69
Feb 10 14:04:34.885: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 10 14:04:34.885: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 10 14:04:34.885: INFO: update-demo-nautilus-kjl69 is verified up and running
Feb 10 14:04:34.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zwkjn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1655'
Feb 10 14:04:34.992: INFO: stderr: ""
Feb 10 14:04:34.992: INFO: stdout: "true"
Feb 10 14:04:34.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zwkjn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1655'
Feb 10 14:04:35.095: INFO: stderr: ""
Feb 10 14:04:35.095: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 10 14:04:35.095: INFO: validating pod update-demo-nautilus-zwkjn
Feb 10 14:04:35.100: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 10 14:04:35.100: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 10 14:04:35.100: INFO: update-demo-nautilus-zwkjn is verified up and running
STEP: using delete to clean up resources
Feb 10 14:04:35.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1655'
Feb 10 14:04:35.291: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 10 14:04:35.291: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 10 14:04:35.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1655'
Feb 10 14:04:35.395: INFO: stderr: "No resources found.\n"
Feb 10 14:04:35.395: INFO: stdout: ""
Feb 10 14:04:35.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1655 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 10 14:04:35.518: INFO: stderr: ""
Feb 10 14:04:35.518: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:04:35.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1655" for this suite.
Feb 10 14:04:57.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:04:57.683: INFO: namespace kubectl-1655 deletion completed in 22.160627664s

• [SLOW TEST:38.955 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:04:57.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 10 14:04:57.864: INFO: Waiting up to 5m0s for pod "pod-c63ad38a-e225-4c92-8d13-b9eefcbaa3c9" in namespace "emptydir-7113" to be "success or failure"
Feb 10 14:04:57.883: INFO: Pod "pod-c63ad38a-e225-4c92-8d13-b9eefcbaa3c9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.614214ms
Feb 10 14:04:59.978: INFO: Pod "pod-c63ad38a-e225-4c92-8d13-b9eefcbaa3c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114046487s
Feb 10 14:05:01.986: INFO: Pod "pod-c63ad38a-e225-4c92-8d13-b9eefcbaa3c9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122587858s
Feb 10 14:05:04.025: INFO: Pod "pod-c63ad38a-e225-4c92-8d13-b9eefcbaa3c9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.160820371s
Feb 10 14:05:06.033: INFO: Pod "pod-c63ad38a-e225-4c92-8d13-b9eefcbaa3c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.169291071s
STEP: Saw pod success
Feb 10 14:05:06.033: INFO: Pod "pod-c63ad38a-e225-4c92-8d13-b9eefcbaa3c9" satisfied condition "success or failure"
Feb 10 14:05:06.038: INFO: Trying to get logs from node iruya-node pod pod-c63ad38a-e225-4c92-8d13-b9eefcbaa3c9 container test-container: 
STEP: delete the pod
Feb 10 14:05:06.115: INFO: Waiting for pod pod-c63ad38a-e225-4c92-8d13-b9eefcbaa3c9 to disappear
Feb 10 14:05:06.197: INFO: Pod pod-c63ad38a-e225-4c92-8d13-b9eefcbaa3c9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:05:06.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7113" for this suite.
Feb 10 14:05:12.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:05:12.367: INFO: namespace emptydir-7113 deletion completed in 6.165434421s

• [SLOW TEST:14.683 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:05:12.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: Gathering metrics
W0210 14:05:15.743788       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 10 14:05:15.743: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:05:15.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8464" for this suite.
Feb 10 14:05:22.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:05:22.274: INFO: namespace gc-8464 deletion completed in 6.523683433s

• [SLOW TEST:9.906 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:05:22.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 10 14:05:22.402: INFO: Create a RollingUpdate DaemonSet
Feb 10 14:05:22.410: INFO: Check that daemon pods launch on every node of the cluster
Feb 10 14:05:22.421: INFO: Number of nodes with available pods: 0
Feb 10 14:05:22.421: INFO: Node iruya-node is running more than one daemon pod
Feb 10 14:05:24.111: INFO: Number of nodes with available pods: 0
Feb 10 14:05:24.111: INFO: Node iruya-node is running more than one daemon pod
Feb 10 14:05:24.619: INFO: Number of nodes with available pods: 0
Feb 10 14:05:24.619: INFO: Node iruya-node is running more than one daemon pod
Feb 10 14:05:25.524: INFO: Number of nodes with available pods: 0
Feb 10 14:05:25.524: INFO: Node iruya-node is running more than one daemon pod
Feb 10 14:05:26.478: INFO: Number of nodes with available pods: 0
Feb 10 14:05:26.478: INFO: Node iruya-node is running more than one daemon pod
Feb 10 14:05:27.480: INFO: Number of nodes with available pods: 0
Feb 10 14:05:27.480: INFO: Node iruya-node is running more than one daemon pod
Feb 10 14:05:29.614: INFO: Number of nodes with available pods: 0
Feb 10 14:05:29.614: INFO: Node iruya-node is running more than one daemon pod
Feb 10 14:05:30.435: INFO: Number of nodes with available pods: 0
Feb 10 14:05:30.436: INFO: Node iruya-node is running more than one daemon pod
Feb 10 14:05:31.433: INFO: Number of nodes with available pods: 1
Feb 10 14:05:31.433: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 10 14:05:32.439: INFO: Number of nodes with available pods: 2
Feb 10 14:05:32.439: INFO: Number of running nodes: 2, number of available pods: 2
Feb 10 14:05:32.439: INFO: Update the DaemonSet to trigger a rollout
Feb 10 14:05:32.450: INFO: Updating DaemonSet daemon-set
Feb 10 14:05:39.477: INFO: Roll back the DaemonSet before rollout is complete
Feb 10 14:05:39.487: INFO: Updating DaemonSet daemon-set
Feb 10 14:05:39.487: INFO: Make sure DaemonSet rollback is complete
Feb 10 14:05:39.514: INFO: Wrong image for pod: daemon-set-8cbsp. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 10 14:05:39.514: INFO: Pod daemon-set-8cbsp is not available
Feb 10 14:05:40.539: INFO: Wrong image for pod: daemon-set-8cbsp. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 10 14:05:40.539: INFO: Pod daemon-set-8cbsp is not available
Feb 10 14:05:41.532: INFO: Wrong image for pod: daemon-set-8cbsp. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 10 14:05:41.532: INFO: Pod daemon-set-8cbsp is not available
Feb 10 14:05:42.538: INFO: Wrong image for pod: daemon-set-8cbsp. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 10 14:05:42.538: INFO: Pod daemon-set-8cbsp is not available
Feb 10 14:05:43.583: INFO: Wrong image for pod: daemon-set-8cbsp. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 10 14:05:43.583: INFO: Pod daemon-set-8cbsp is not available
Feb 10 14:05:44.538: INFO: Pod daemon-set-mdp5d is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4475, will wait for the garbage collector to delete the pods
Feb 10 14:05:44.629: INFO: Deleting DaemonSet.extensions daemon-set took: 12.840512ms
Feb 10 14:05:44.930: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.819462ms
Feb 10 14:05:50.375: INFO: Number of nodes with available pods: 0
Feb 10 14:05:50.375: INFO: Number of running nodes: 0, number of available pods: 0
Feb 10 14:05:50.383: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4475/daemonsets","resourceVersion":"23827860"},"items":null}

Feb 10 14:05:50.397: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4475/pods","resourceVersion":"23827861"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:05:50.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4475" for this suite.
Feb 10 14:05:56.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:05:56.596: INFO: namespace daemonsets-4475 deletion completed in 6.183376458s

• [SLOW TEST:34.322 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:05:56.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 10 14:05:56.656: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 10 14:05:56.677: INFO: Waiting for terminating namespaces to be deleted...
Feb 10 14:05:56.680: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Feb 10 14:05:56.693: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb 10 14:05:56.693: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 10 14:05:56.693: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb 10 14:05:56.693: INFO: 	Container weave ready: true, restart count 0
Feb 10 14:05:56.693: INFO: 	Container weave-npc ready: true, restart count 0
Feb 10 14:05:56.693: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Feb 10 14:05:56.706: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb 10 14:05:56.706: INFO: 	Container etcd ready: true, restart count 0
Feb 10 14:05:56.706: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb 10 14:05:56.706: INFO: 	Container weave ready: true, restart count 0
Feb 10 14:05:56.706: INFO: 	Container weave-npc ready: true, restart count 0
Feb 10 14:05:56.706: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 10 14:05:56.706: INFO: 	Container coredns ready: true, restart count 0
Feb 10 14:05:56.706: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb 10 14:05:56.706: INFO: 	Container kube-controller-manager ready: true, restart count 20
Feb 10 14:05:56.706: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb 10 14:05:56.706: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 10 14:05:56.706: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb 10 14:05:56.706: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb 10 14:05:56.706: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb 10 14:05:56.706: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb 10 14:05:56.706: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 10 14:05:56.706: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-8b3d29ef-027d-40a9-be19-211527a8e65e 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-8b3d29ef-027d-40a9-be19-211527a8e65e off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-8b3d29ef-027d-40a9-be19-211527a8e65e
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:06:13.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4739" for this suite.
Feb 10 14:06:29.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:06:29.148: INFO: namespace sched-pred-4739 deletion completed in 16.136892188s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:32.551 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:06:29.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-286b9b23-84ea-4802-8ff9-b6beb4d374bc
STEP: Creating a pod to test consume secrets
Feb 10 14:06:29.268: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8d8bd3ce-e4d5-48f5-aaa8-2be983f5beb8" in namespace "projected-3032" to be "success or failure"
Feb 10 14:06:29.303: INFO: Pod "pod-projected-secrets-8d8bd3ce-e4d5-48f5-aaa8-2be983f5beb8": Phase="Pending", Reason="", readiness=false. Elapsed: 34.678081ms
Feb 10 14:06:31.311: INFO: Pod "pod-projected-secrets-8d8bd3ce-e4d5-48f5-aaa8-2be983f5beb8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043044428s
Feb 10 14:06:33.320: INFO: Pod "pod-projected-secrets-8d8bd3ce-e4d5-48f5-aaa8-2be983f5beb8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051381011s
Feb 10 14:06:35.328: INFO: Pod "pod-projected-secrets-8d8bd3ce-e4d5-48f5-aaa8-2be983f5beb8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059888386s
Feb 10 14:06:37.340: INFO: Pod "pod-projected-secrets-8d8bd3ce-e4d5-48f5-aaa8-2be983f5beb8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071842864s
Feb 10 14:06:39.350: INFO: Pod "pod-projected-secrets-8d8bd3ce-e4d5-48f5-aaa8-2be983f5beb8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.081210482s
STEP: Saw pod success
Feb 10 14:06:39.350: INFO: Pod "pod-projected-secrets-8d8bd3ce-e4d5-48f5-aaa8-2be983f5beb8" satisfied condition "success or failure"
Feb 10 14:06:39.354: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-8d8bd3ce-e4d5-48f5-aaa8-2be983f5beb8 container projected-secret-volume-test: 
STEP: delete the pod
Feb 10 14:06:39.443: INFO: Waiting for pod pod-projected-secrets-8d8bd3ce-e4d5-48f5-aaa8-2be983f5beb8 to disappear
Feb 10 14:06:39.448: INFO: Pod pod-projected-secrets-8d8bd3ce-e4d5-48f5-aaa8-2be983f5beb8 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:06:39.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3032" for this suite.
Feb 10 14:06:45.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:06:45.683: INFO: namespace projected-3032 deletion completed in 6.22964583s

• [SLOW TEST:16.535 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:06:45.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-729b608f-4863-4182-9f6d-6c9b48357a0e
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:06:58.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-108" for this suite.
Feb 10 14:07:20.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:07:20.128: INFO: namespace configmap-108 deletion completed in 22.116738272s

• [SLOW TEST:34.444 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:07:20.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 10 14:07:20.250: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7c0c50dc-9ad6-46a5-af9b-8ff26c508947" in namespace "downward-api-497" to be "success or failure"
Feb 10 14:07:20.278: INFO: Pod "downwardapi-volume-7c0c50dc-9ad6-46a5-af9b-8ff26c508947": Phase="Pending", Reason="", readiness=false. Elapsed: 28.056142ms
Feb 10 14:07:22.290: INFO: Pod "downwardapi-volume-7c0c50dc-9ad6-46a5-af9b-8ff26c508947": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040436061s
Feb 10 14:07:24.297: INFO: Pod "downwardapi-volume-7c0c50dc-9ad6-46a5-af9b-8ff26c508947": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046886755s
Feb 10 14:07:26.316: INFO: Pod "downwardapi-volume-7c0c50dc-9ad6-46a5-af9b-8ff26c508947": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066251709s
Feb 10 14:07:28.587: INFO: Pod "downwardapi-volume-7c0c50dc-9ad6-46a5-af9b-8ff26c508947": Phase="Pending", Reason="", readiness=false. Elapsed: 8.337619934s
Feb 10 14:07:30.633: INFO: Pod "downwardapi-volume-7c0c50dc-9ad6-46a5-af9b-8ff26c508947": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.383416615s
STEP: Saw pod success
Feb 10 14:07:30.633: INFO: Pod "downwardapi-volume-7c0c50dc-9ad6-46a5-af9b-8ff26c508947" satisfied condition "success or failure"
Feb 10 14:07:30.702: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-7c0c50dc-9ad6-46a5-af9b-8ff26c508947 container client-container: 
STEP: delete the pod
Feb 10 14:07:30.859: INFO: Waiting for pod downwardapi-volume-7c0c50dc-9ad6-46a5-af9b-8ff26c508947 to disappear
Feb 10 14:07:30.869: INFO: Pod downwardapi-volume-7c0c50dc-9ad6-46a5-af9b-8ff26c508947 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:07:30.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-497" for this suite.
Feb 10 14:07:36.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:07:37.067: INFO: namespace downward-api-497 deletion completed in 6.189899996s

• [SLOW TEST:16.939 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:07:37.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-9498/configmap-test-c65bc1a1-faf4-44e1-a207-ba19967d0dc1
STEP: Creating a pod to test consume configMaps
Feb 10 14:07:37.166: INFO: Waiting up to 5m0s for pod "pod-configmaps-0eebc89a-ef3b-4adc-9a11-f63ba5fcdd91" in namespace "configmap-9498" to be "success or failure"
Feb 10 14:07:37.175: INFO: Pod "pod-configmaps-0eebc89a-ef3b-4adc-9a11-f63ba5fcdd91": Phase="Pending", Reason="", readiness=false. Elapsed: 8.135153ms
Feb 10 14:07:39.183: INFO: Pod "pod-configmaps-0eebc89a-ef3b-4adc-9a11-f63ba5fcdd91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016750117s
Feb 10 14:07:41.201: INFO: Pod "pod-configmaps-0eebc89a-ef3b-4adc-9a11-f63ba5fcdd91": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034185112s
Feb 10 14:07:43.213: INFO: Pod "pod-configmaps-0eebc89a-ef3b-4adc-9a11-f63ba5fcdd91": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046313645s
Feb 10 14:07:45.218: INFO: Pod "pod-configmaps-0eebc89a-ef3b-4adc-9a11-f63ba5fcdd91": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051856939s
Feb 10 14:07:47.228: INFO: Pod "pod-configmaps-0eebc89a-ef3b-4adc-9a11-f63ba5fcdd91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.061968721s
STEP: Saw pod success
Feb 10 14:07:47.228: INFO: Pod "pod-configmaps-0eebc89a-ef3b-4adc-9a11-f63ba5fcdd91" satisfied condition "success or failure"
Feb 10 14:07:47.236: INFO: Trying to get logs from node iruya-node pod pod-configmaps-0eebc89a-ef3b-4adc-9a11-f63ba5fcdd91 container env-test: 
STEP: delete the pod
Feb 10 14:07:47.454: INFO: Waiting for pod pod-configmaps-0eebc89a-ef3b-4adc-9a11-f63ba5fcdd91 to disappear
Feb 10 14:07:47.470: INFO: Pod pod-configmaps-0eebc89a-ef3b-4adc-9a11-f63ba5fcdd91 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:07:47.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9498" for this suite.
Feb 10 14:07:53.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:07:53.768: INFO: namespace configmap-9498 deletion completed in 6.290972398s

• [SLOW TEST:16.700 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:07:53.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 10 14:07:54.057: INFO: Waiting up to 5m0s for pod "downwardapi-volume-088856de-c3a2-4078-a906-cb7a3592307b" in namespace "projected-2060" to be "success or failure"
Feb 10 14:07:54.099: INFO: Pod "downwardapi-volume-088856de-c3a2-4078-a906-cb7a3592307b": Phase="Pending", Reason="", readiness=false. Elapsed: 42.260216ms
Feb 10 14:07:56.111: INFO: Pod "downwardapi-volume-088856de-c3a2-4078-a906-cb7a3592307b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05417921s
Feb 10 14:07:58.121: INFO: Pod "downwardapi-volume-088856de-c3a2-4078-a906-cb7a3592307b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06364477s
Feb 10 14:08:00.136: INFO: Pod "downwardapi-volume-088856de-c3a2-4078-a906-cb7a3592307b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078670153s
Feb 10 14:08:02.154: INFO: Pod "downwardapi-volume-088856de-c3a2-4078-a906-cb7a3592307b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.097333232s
Feb 10 14:08:04.166: INFO: Pod "downwardapi-volume-088856de-c3a2-4078-a906-cb7a3592307b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.108602773s
STEP: Saw pod success
Feb 10 14:08:04.166: INFO: Pod "downwardapi-volume-088856de-c3a2-4078-a906-cb7a3592307b" satisfied condition "success or failure"
Feb 10 14:08:04.170: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-088856de-c3a2-4078-a906-cb7a3592307b container client-container: 
STEP: delete the pod
Feb 10 14:08:04.404: INFO: Waiting for pod downwardapi-volume-088856de-c3a2-4078-a906-cb7a3592307b to disappear
Feb 10 14:08:04.410: INFO: Pod downwardapi-volume-088856de-c3a2-4078-a906-cb7a3592307b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:08:04.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2060" for this suite.
Feb 10 14:08:10.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:08:10.576: INFO: namespace projected-2060 deletion completed in 6.159837619s

• [SLOW TEST:16.808 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:08:10.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 10 14:08:10.711: INFO: Waiting up to 5m0s for pod "downwardapi-volume-819200e5-8db0-4b0e-a1d5-7d1393930c9e" in namespace "downward-api-6181" to be "success or failure"
Feb 10 14:08:10.878: INFO: Pod "downwardapi-volume-819200e5-8db0-4b0e-a1d5-7d1393930c9e": Phase="Pending", Reason="", readiness=false. Elapsed: 166.673475ms
Feb 10 14:08:12.891: INFO: Pod "downwardapi-volume-819200e5-8db0-4b0e-a1d5-7d1393930c9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179703031s
Feb 10 14:08:14.905: INFO: Pod "downwardapi-volume-819200e5-8db0-4b0e-a1d5-7d1393930c9e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.193777204s
Feb 10 14:08:16.914: INFO: Pod "downwardapi-volume-819200e5-8db0-4b0e-a1d5-7d1393930c9e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.202935198s
Feb 10 14:08:18.924: INFO: Pod "downwardapi-volume-819200e5-8db0-4b0e-a1d5-7d1393930c9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.21236082s
STEP: Saw pod success
Feb 10 14:08:18.924: INFO: Pod "downwardapi-volume-819200e5-8db0-4b0e-a1d5-7d1393930c9e" satisfied condition "success or failure"
Feb 10 14:08:18.927: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-819200e5-8db0-4b0e-a1d5-7d1393930c9e container client-container: 
STEP: delete the pod
Feb 10 14:08:19.019: INFO: Waiting for pod downwardapi-volume-819200e5-8db0-4b0e-a1d5-7d1393930c9e to disappear
Feb 10 14:08:19.052: INFO: Pod downwardapi-volume-819200e5-8db0-4b0e-a1d5-7d1393930c9e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:08:19.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6181" for this suite.
Feb 10 14:08:25.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:08:25.268: INFO: namespace downward-api-6181 deletion completed in 6.19860726s

• [SLOW TEST:14.691 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:08:25.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 10 14:08:25.337: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b06a572f-ac77-4c7c-8b56-a1f38addf30b" in namespace "projected-3488" to be "success or failure"
Feb 10 14:08:25.380: INFO: Pod "downwardapi-volume-b06a572f-ac77-4c7c-8b56-a1f38addf30b": Phase="Pending", Reason="", readiness=false. Elapsed: 42.876308ms
Feb 10 14:08:27.389: INFO: Pod "downwardapi-volume-b06a572f-ac77-4c7c-8b56-a1f38addf30b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051796439s
Feb 10 14:08:29.402: INFO: Pod "downwardapi-volume-b06a572f-ac77-4c7c-8b56-a1f38addf30b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064914722s
Feb 10 14:08:31.418: INFO: Pod "downwardapi-volume-b06a572f-ac77-4c7c-8b56-a1f38addf30b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0810324s
Feb 10 14:08:33.424: INFO: Pod "downwardapi-volume-b06a572f-ac77-4c7c-8b56-a1f38addf30b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.087756933s
STEP: Saw pod success
Feb 10 14:08:33.425: INFO: Pod "downwardapi-volume-b06a572f-ac77-4c7c-8b56-a1f38addf30b" satisfied condition "success or failure"
Feb 10 14:08:33.428: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b06a572f-ac77-4c7c-8b56-a1f38addf30b container client-container: 
STEP: delete the pod
Feb 10 14:08:33.514: INFO: Waiting for pod downwardapi-volume-b06a572f-ac77-4c7c-8b56-a1f38addf30b to disappear
Feb 10 14:08:33.523: INFO: Pod downwardapi-volume-b06a572f-ac77-4c7c-8b56-a1f38addf30b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:08:33.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3488" for this suite.
Feb 10 14:08:39.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:08:39.874: INFO: namespace projected-3488 deletion completed in 6.295438765s

• [SLOW TEST:14.606 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:08:39.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Feb 10 14:08:39.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7671'
Feb 10 14:08:40.476: INFO: stderr: ""
Feb 10 14:08:40.477: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Feb 10 14:08:41.487: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 14:08:41.487: INFO: Found 0 / 1
Feb 10 14:08:42.495: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 14:08:42.495: INFO: Found 0 / 1
Feb 10 14:08:43.493: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 14:08:43.493: INFO: Found 0 / 1
Feb 10 14:08:44.491: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 14:08:44.491: INFO: Found 0 / 1
Feb 10 14:08:45.485: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 14:08:45.485: INFO: Found 0 / 1
Feb 10 14:08:46.493: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 14:08:46.493: INFO: Found 0 / 1
Feb 10 14:08:47.524: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 14:08:47.524: INFO: Found 1 / 1
Feb 10 14:08:47.524: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 10 14:08:47.530: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 14:08:47.530: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings

Feb 10 14:08:47.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-869tg redis-master --namespace=kubectl-7671'
Feb 10 14:08:47.738: INFO: stderr: ""
Feb 10 14:08:47.738: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 10 Feb 14:08:46.958 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 10 Feb 14:08:46.958 # Server started, Redis version 3.2.12\n1:M 10 Feb 14:08:46.959 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 10 Feb 14:08:46.959 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Feb 10 14:08:47.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-869tg redis-master --namespace=kubectl-7671 --tail=1'
Feb 10 14:08:47.909: INFO: stderr: ""
Feb 10 14:08:47.909: INFO: stdout: "1:M 10 Feb 14:08:46.959 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Feb 10 14:08:47.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-869tg redis-master --namespace=kubectl-7671 --limit-bytes=1'
Feb 10 14:08:48.097: INFO: stderr: ""
Feb 10 14:08:48.098: INFO: stdout: " "
STEP: exposing timestamps
Feb 10 14:08:48.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-869tg redis-master --namespace=kubectl-7671 --tail=1 --timestamps'
Feb 10 14:08:48.223: INFO: stderr: ""
Feb 10 14:08:48.223: INFO: stdout: "2020-02-10T14:08:46.962138664Z 1:M 10 Feb 14:08:46.959 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Feb 10 14:08:50.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-869tg redis-master --namespace=kubectl-7671 --since=1s'
Feb 10 14:08:50.903: INFO: stderr: ""
Feb 10 14:08:50.903: INFO: stdout: ""
Feb 10 14:08:50.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-869tg redis-master --namespace=kubectl-7671 --since=24h'
Feb 10 14:08:51.105: INFO: stderr: ""
Feb 10 14:08:51.105: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 10 Feb 14:08:46.958 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 10 Feb 14:08:46.958 # Server started, Redis version 3.2.12\n1:M 10 Feb 14:08:46.959 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 10 Feb 14:08:46.959 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Feb 10 14:08:51.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7671'
Feb 10 14:08:51.218: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 10 14:08:51.218: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Feb 10 14:08:51.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-7671'
Feb 10 14:08:51.329: INFO: stderr: "No resources found.\n"
Feb 10 14:08:51.329: INFO: stdout: ""
Feb 10 14:08:51.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-7671 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 10 14:08:51.478: INFO: stderr: ""
Feb 10 14:08:51.478: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:08:51.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7671" for this suite.
Feb 10 14:09:13.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:09:13.730: INFO: namespace kubectl-7671 deletion completed in 22.1391143s

• [SLOW TEST:33.855 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:09:13.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0210 14:09:44.466234       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 10 14:09:44.466: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:09:44.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8600" for this suite.
Feb 10 14:09:51.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:09:52.230: INFO: namespace gc-8600 deletion completed in 7.757301024s

• [SLOW TEST:38.499 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:09:52.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb 10 14:09:52.508: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-5392,SelfLink:/api/v1/namespaces/watch-5392/configmaps/e2e-watch-test-resource-version,UID:d4887fd2-cda4-41ba-b582-05fa0c18c364,ResourceVersion:23828513,Generation:0,CreationTimestamp:2020-02-10 14:09:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 10 14:09:52.509: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-5392,SelfLink:/api/v1/namespaces/watch-5392/configmaps/e2e-watch-test-resource-version,UID:d4887fd2-cda4-41ba-b582-05fa0c18c364,ResourceVersion:23828514,Generation:0,CreationTimestamp:2020-02-10 14:09:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:09:52.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5392" for this suite.
Feb 10 14:09:58.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:09:58.719: INFO: namespace watch-5392 deletion completed in 6.20540187s

• [SLOW TEST:6.487 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:09:58.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5999
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-5999
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5999
Feb 10 14:09:59.060: INFO: Found 0 stateful pods, waiting for 1
Feb 10 14:10:09.082: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb 10 14:10:09.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5999 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 10 14:10:09.719: INFO: stderr: "I0210 14:10:09.369257    1913 log.go:172] (0xc00013ae70) (0xc00041c6e0) Create stream\nI0210 14:10:09.369483    1913 log.go:172] (0xc00013ae70) (0xc00041c6e0) Stream added, broadcasting: 1\nI0210 14:10:09.390894    1913 log.go:172] (0xc00013ae70) Reply frame received for 1\nI0210 14:10:09.390950    1913 log.go:172] (0xc00013ae70) (0xc00065c1e0) Create stream\nI0210 14:10:09.390958    1913 log.go:172] (0xc00013ae70) (0xc00065c1e0) Stream added, broadcasting: 3\nI0210 14:10:09.393698    1913 log.go:172] (0xc00013ae70) Reply frame received for 3\nI0210 14:10:09.393743    1913 log.go:172] (0xc00013ae70) (0xc00041c000) Create stream\nI0210 14:10:09.393762    1913 log.go:172] (0xc00013ae70) (0xc00041c000) Stream added, broadcasting: 5\nI0210 14:10:09.395710    1913 log.go:172] (0xc00013ae70) Reply frame received for 5\nI0210 14:10:09.525738    1913 log.go:172] (0xc00013ae70) Data frame received for 5\nI0210 14:10:09.525770    1913 log.go:172] (0xc00041c000) (5) Data frame handling\nI0210 14:10:09.525787    1913 log.go:172] (0xc00041c000) (5) Data frame sent\n+ I0210 14:10:09.526776    1913 log.go:172] (0xc00013ae70) Data frame received for 5\nI0210 14:10:09.526795    1913 log.go:172] (0xc00041c000) (5) Data frame handling\nI0210 14:10:09.526816    1913 log.go:172] (0xc00041c000) (5) Data frame sent\nmv -v /usr/share/nginx/html/index.html /tmp/\nI0210 14:10:09.557563    1913 log.go:172] (0xc00013ae70) Data frame received for 3\nI0210 14:10:09.557600    1913 log.go:172] (0xc00065c1e0) (3) Data frame handling\nI0210 14:10:09.557622    1913 log.go:172] (0xc00065c1e0) (3) Data frame sent\nI0210 14:10:09.709189    1913 log.go:172] (0xc00013ae70) Data frame received for 1\nI0210 14:10:09.709332    1913 log.go:172] (0xc00013ae70) (0xc00041c000) Stream removed, broadcasting: 5\nI0210 14:10:09.709471    1913 log.go:172] (0xc00013ae70) (0xc00065c1e0) Stream removed, broadcasting: 3\nI0210 14:10:09.709601    1913 log.go:172] (0xc00041c6e0) (1) Data frame handling\nI0210 14:10:09.709637    1913 log.go:172] (0xc00041c6e0) (1) Data frame sent\nI0210 14:10:09.709660    1913 log.go:172] (0xc00013ae70) (0xc00041c6e0) Stream removed, broadcasting: 1\nI0210 14:10:09.709679    1913 log.go:172] (0xc00013ae70) Go away received\nI0210 14:10:09.710591    1913 log.go:172] (0xc00013ae70) (0xc00041c6e0) Stream removed, broadcasting: 1\nI0210 14:10:09.710622    1913 log.go:172] (0xc00013ae70) (0xc00065c1e0) Stream removed, broadcasting: 3\nI0210 14:10:09.710644    1913 log.go:172] (0xc00013ae70) (0xc00041c000) Stream removed, broadcasting: 5\n"
Feb 10 14:10:09.720: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 10 14:10:09.720: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 10 14:10:09.728: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 10 14:10:19.762: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 10 14:10:19.762: INFO: Waiting for statefulset status.replicas updated to 0
Feb 10 14:10:19.811: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 10 14:10:19.811: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:09:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:09:59 +0000 UTC  }]
Feb 10 14:10:19.811: INFO: 
Feb 10 14:10:19.811: INFO: StatefulSet ss has not reached scale 3, at 1
Feb 10 14:10:21.881: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.974961011s
Feb 10 14:10:23.226: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.904442355s
Feb 10 14:10:24.263: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.559567218s
Feb 10 14:10:25.272: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.522821235s
Feb 10 14:10:26.782: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.513155017s
Feb 10 14:10:28.001: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.003361153s
Feb 10 14:10:29.029: INFO: Verifying statefulset ss doesn't scale past 3 for another 785.05087ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5999
Feb 10 14:10:30.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5999 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 10 14:10:30.936: INFO: stderr: "I0210 14:10:30.311048    1934 log.go:172] (0xc0007e2e70) (0xc0007d70e0) Create stream\nI0210 14:10:30.311258    1934 log.go:172] (0xc0007e2e70) (0xc0007d70e0) Stream added, broadcasting: 1\nI0210 14:10:30.325214    1934 log.go:172] (0xc0007e2e70) Reply frame received for 1\nI0210 14:10:30.325328    1934 log.go:172] (0xc0007e2e70) (0xc000887040) Create stream\nI0210 14:10:30.325369    1934 log.go:172] (0xc0007e2e70) (0xc000887040) Stream added, broadcasting: 3\nI0210 14:10:30.327610    1934 log.go:172] (0xc0007e2e70) Reply frame received for 3\nI0210 14:10:30.327642    1934 log.go:172] (0xc0007e2e70) (0xc0008870e0) Create stream\nI0210 14:10:30.327651    1934 log.go:172] (0xc0007e2e70) (0xc0008870e0) Stream added, broadcasting: 5\nI0210 14:10:30.329223    1934 log.go:172] (0xc0007e2e70) Reply frame received for 5\nI0210 14:10:30.588804    1934 log.go:172] (0xc0007e2e70) Data frame received for 5\nI0210 14:10:30.588897    1934 log.go:172] (0xc0008870e0) (5) Data frame handling\nI0210 14:10:30.588923    1934 log.go:172] (0xc0008870e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0210 14:10:30.588990    1934 log.go:172] (0xc0007e2e70) Data frame received for 3\nI0210 14:10:30.589003    1934 log.go:172] (0xc000887040) (3) Data frame handling\nI0210 14:10:30.589023    1934 log.go:172] (0xc000887040) (3) Data frame sent\nI0210 14:10:30.927588    1934 log.go:172] (0xc0007e2e70) (0xc000887040) Stream removed, broadcasting: 3\nI0210 14:10:30.927786    1934 log.go:172] (0xc0007e2e70) (0xc0008870e0) Stream removed, broadcasting: 5\nI0210 14:10:30.927859    1934 log.go:172] (0xc0007e2e70) Data frame received for 1\nI0210 14:10:30.927874    1934 log.go:172] (0xc0007d70e0) (1) Data frame handling\nI0210 14:10:30.927904    1934 log.go:172] (0xc0007d70e0) (1) Data frame sent\nI0210 14:10:30.927924    1934 log.go:172] (0xc0007e2e70) (0xc0007d70e0) Stream removed, broadcasting: 1\nI0210 14:10:30.927951    1934 log.go:172] (0xc0007e2e70) Go away received\nI0210 14:10:30.928945    1934 log.go:172] (0xc0007e2e70) (0xc0007d70e0) Stream removed, broadcasting: 1\nI0210 14:10:30.929032    1934 log.go:172] (0xc0007e2e70) (0xc000887040) Stream removed, broadcasting: 3\nI0210 14:10:30.929249    1934 log.go:172] (0xc0007e2e70) (0xc0008870e0) Stream removed, broadcasting: 5\n"
Feb 10 14:10:30.936: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 10 14:10:30.936: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 10 14:10:30.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5999 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 10 14:10:31.291: INFO: stderr: "I0210 14:10:31.102193    1952 log.go:172] (0xc0001166e0) (0xc0004046e0) Create stream\nI0210 14:10:31.102287    1952 log.go:172] (0xc0001166e0) (0xc0004046e0) Stream added, broadcasting: 1\nI0210 14:10:31.106028    1952 log.go:172] (0xc0001166e0) Reply frame received for 1\nI0210 14:10:31.106061    1952 log.go:172] (0xc0001166e0) (0xc000404780) Create stream\nI0210 14:10:31.106077    1952 log.go:172] (0xc0001166e0) (0xc000404780) Stream added, broadcasting: 3\nI0210 14:10:31.106772    1952 log.go:172] (0xc0001166e0) Reply frame received for 3\nI0210 14:10:31.106793    1952 log.go:172] (0xc0001166e0) (0xc000404820) Create stream\nI0210 14:10:31.106800    1952 log.go:172] (0xc0001166e0) (0xc000404820) Stream added, broadcasting: 5\nI0210 14:10:31.107831    1952 log.go:172] (0xc0001166e0) Reply frame received for 5\nI0210 14:10:31.218769    1952 log.go:172] (0xc0001166e0) Data frame received for 3\nI0210 14:10:31.218803    1952 log.go:172] (0xc000404780) (3) Data frame handling\nI0210 14:10:31.218820    1952 log.go:172] (0xc000404780) (3) Data frame sent\nI0210 14:10:31.220568    1952 log.go:172] (0xc0001166e0) Data frame received for 5\nI0210 14:10:31.220581    1952 log.go:172] (0xc000404820) (5) Data frame handling\nI0210 14:10:31.220592    1952 log.go:172] (0xc000404820) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0210 14:10:31.284856    1952 log.go:172] (0xc0001166e0) Data frame received for 1\nI0210 14:10:31.284896    1952 log.go:172] (0xc0001166e0) (0xc000404780) Stream removed, broadcasting: 3\nI0210 14:10:31.284948    1952 log.go:172] (0xc0004046e0) (1) Data frame handling\nI0210 14:10:31.284973    1952 log.go:172] (0xc0004046e0) (1) Data frame sent\nI0210 14:10:31.284982    1952 log.go:172] (0xc0001166e0) (0xc0004046e0) Stream removed, broadcasting: 1\nI0210 14:10:31.285290    1952 log.go:172] (0xc0001166e0) (0xc000404820) Stream removed, broadcasting: 5\nI0210 14:10:31.285329    1952 log.go:172] (0xc0001166e0) Go away received\nI0210 14:10:31.285500    1952 log.go:172] (0xc0001166e0) (0xc0004046e0) Stream removed, broadcasting: 1\nI0210 14:10:31.285540    1952 log.go:172] (0xc0001166e0) (0xc000404780) Stream removed, broadcasting: 3\nI0210 14:10:31.285556    1952 log.go:172] (0xc0001166e0) (0xc000404820) Stream removed, broadcasting: 5\n"
Feb 10 14:10:31.291: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 10 14:10:31.291: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 10 14:10:31.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5999 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 10 14:10:31.629: INFO: stderr: "I0210 14:10:31.449479    1965 log.go:172] (0xc000930420) (0xc0009186e0) Create stream\nI0210 14:10:31.449596    1965 log.go:172] (0xc000930420) (0xc0009186e0) Stream added, broadcasting: 1\nI0210 14:10:31.455988    1965 log.go:172] (0xc000930420) Reply frame received for 1\nI0210 14:10:31.456047    1965 log.go:172] (0xc000930420) (0xc0001f2140) Create stream\nI0210 14:10:31.456056    1965 log.go:172] (0xc000930420) (0xc0001f2140) Stream added, broadcasting: 3\nI0210 14:10:31.457325    1965 log.go:172] (0xc000930420) Reply frame received for 3\nI0210 14:10:31.457350    1965 log.go:172] (0xc000930420) (0xc000706000) Create stream\nI0210 14:10:31.457362    1965 log.go:172] (0xc000930420) (0xc000706000) Stream added, broadcasting: 5\nI0210 14:10:31.460736    1965 log.go:172] (0xc000930420) Reply frame received for 5\nI0210 14:10:31.543307    1965 log.go:172] (0xc000930420) Data frame received for 5\nI0210 14:10:31.543411    1965 log.go:172] (0xc000706000) (5) Data frame handling\nI0210 14:10:31.543449    1965 log.go:172] (0xc000706000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0210 14:10:31.544323    1965 log.go:172] (0xc000930420) Data frame received for 5\nI0210 14:10:31.544342    1965 log.go:172] (0xc000706000) (5) Data frame handling\nI0210 14:10:31.544355    1965 log.go:172] (0xc000706000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0210 14:10:31.544597    1965 log.go:172] (0xc000930420) Data frame received for 3\nI0210 14:10:31.544604    1965 log.go:172] (0xc0001f2140) (3) Data frame handling\nI0210 14:10:31.544611    1965 log.go:172] (0xc0001f2140) (3) Data frame sent\nI0210 14:10:31.545658    1965 log.go:172] (0xc000930420) Data frame received for 5\nI0210 14:10:31.545668    1965 log.go:172] (0xc000706000) (5) Data frame handling\nI0210 14:10:31.545677    1965 log.go:172] (0xc000706000) (5) Data frame sent\n+ true\nI0210 14:10:31.624699    1965 log.go:172] (0xc000930420) Data frame received for 1\nI0210 14:10:31.624765    1965 log.go:172] (0xc000930420) (0xc0001f2140) Stream removed, broadcasting: 3\nI0210 14:10:31.624820    1965 log.go:172] (0xc0009186e0) (1) Data frame handling\nI0210 14:10:31.624831    1965 log.go:172] (0xc0009186e0) (1) Data frame sent\nI0210 14:10:31.624866    1965 log.go:172] (0xc000930420) (0xc000706000) Stream removed, broadcasting: 5\nI0210 14:10:31.624908    1965 log.go:172] (0xc000930420) (0xc0009186e0) Stream removed, broadcasting: 1\nI0210 14:10:31.624925    1965 log.go:172] (0xc000930420) Go away received\nI0210 14:10:31.625517    1965 log.go:172] (0xc000930420) (0xc0009186e0) Stream removed, broadcasting: 1\nI0210 14:10:31.625537    1965 log.go:172] (0xc000930420) (0xc0001f2140) Stream removed, broadcasting: 3\nI0210 14:10:31.625553    1965 log.go:172] (0xc000930420) (0xc000706000) Stream removed, broadcasting: 5\n"
Feb 10 14:10:31.629: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 10 14:10:31.629: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 10 14:10:31.637: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 10 14:10:31.637: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 10 14:10:31.637: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Feb 10 14:10:31.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5999 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 10 14:10:32.077: INFO: stderr: "I0210 14:10:31.759815    1982 log.go:172] (0xc0001046e0) (0xc00027c6e0) Create stream\nI0210 14:10:31.759864    1982 log.go:172] (0xc0001046e0) (0xc00027c6e0) Stream added, broadcasting: 1\nI0210 14:10:31.764600    1982 log.go:172] (0xc0001046e0) Reply frame received for 1\nI0210 14:10:31.764629    1982 log.go:172] (0xc0001046e0) (0xc00027c780) Create stream\nI0210 14:10:31.764635    1982 log.go:172] (0xc0001046e0) (0xc00027c780) Stream added, broadcasting: 3\nI0210 14:10:31.765915    1982 log.go:172] (0xc0001046e0) Reply frame received for 3\nI0210 14:10:31.765938    1982 log.go:172] (0xc0001046e0) (0xc0005ea460) Create stream\nI0210 14:10:31.765947    1982 log.go:172] (0xc0001046e0) (0xc0005ea460) Stream added, broadcasting: 5\nI0210 14:10:31.767048    1982 log.go:172] (0xc0001046e0) Reply frame received for 5\nI0210 14:10:31.893271    1982 log.go:172] (0xc0001046e0) Data frame received for 3\nI0210 14:10:31.893309    1982 log.go:172] (0xc00027c780) (3) Data frame handling\nI0210 14:10:31.893323    1982 log.go:172] (0xc00027c780) (3) Data frame sent\nI0210 14:10:31.893794    1982 log.go:172] (0xc0001046e0) Data frame received for 5\nI0210 14:10:31.893805    1982 log.go:172] (0xc0005ea460) (5) Data frame handling\nI0210 14:10:31.893821    1982 log.go:172] (0xc0005ea460) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0210 14:10:32.070702    1982 log.go:172] (0xc0001046e0) (0xc00027c780) Stream removed, broadcasting: 3\nI0210 14:10:32.070842    1982 log.go:172] (0xc0001046e0) (0xc0005ea460) Stream removed, broadcasting: 5\nI0210 14:10:32.071069    1982 log.go:172] (0xc0001046e0) Data frame received for 1\nI0210 14:10:32.071086    1982 log.go:172] (0xc00027c6e0) (1) Data frame handling\nI0210 14:10:32.071099    1982 log.go:172] (0xc00027c6e0) (1) Data frame sent\nI0210 14:10:32.071114    1982 log.go:172] (0xc0001046e0) (0xc00027c6e0) Stream removed, broadcasting: 1\nI0210 14:10:32.071132    1982 log.go:172] (0xc0001046e0) Go away received\nI0210 14:10:32.071505    1982 log.go:172] (0xc0001046e0) (0xc00027c6e0) Stream removed, broadcasting: 1\nI0210 14:10:32.071521    1982 log.go:172] (0xc0001046e0) (0xc00027c780) Stream removed, broadcasting: 3\nI0210 14:10:32.071527    1982 log.go:172] (0xc0001046e0) (0xc0005ea460) Stream removed, broadcasting: 5\n"
Feb 10 14:10:32.078: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 10 14:10:32.078: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 10 14:10:32.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5999 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 10 14:10:32.485: INFO: stderr: "I0210 14:10:32.284000    1999 log.go:172] (0xc000116dc0) (0xc000700780) Create stream\nI0210 14:10:32.284227    1999 log.go:172] (0xc000116dc0) (0xc000700780) Stream added, broadcasting: 1\nI0210 14:10:32.288607    1999 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0210 14:10:32.288642    1999 log.go:172] (0xc000116dc0) (0xc000700820) Create stream\nI0210 14:10:32.288648    1999 log.go:172] (0xc000116dc0) (0xc000700820) Stream added, broadcasting: 3\nI0210 14:10:32.290307    1999 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0210 14:10:32.290354    1999 log.go:172] (0xc000116dc0) (0xc0005101e0) Create stream\nI0210 14:10:32.290364    1999 log.go:172] (0xc000116dc0) (0xc0005101e0) Stream added, broadcasting: 5\nI0210 14:10:32.291942    1999 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0210 14:10:32.380694    1999 log.go:172] (0xc000116dc0) Data frame received for 5\nI0210 14:10:32.381071    1999 log.go:172] (0xc0005101e0) (5) Data frame handling\nI0210 14:10:32.381116    1999 log.go:172] (0xc0005101e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0210 14:10:32.410396    1999 log.go:172] (0xc000116dc0) Data frame received for 3\nI0210 14:10:32.410487    1999 log.go:172] (0xc000700820) (3) Data frame handling\nI0210 14:10:32.410502    1999 log.go:172] (0xc000700820) (3) Data frame sent\nI0210 14:10:32.479309    1999 log.go:172] (0xc000116dc0) (0xc000700820) Stream removed, broadcasting: 3\nI0210 14:10:32.479415    1999 log.go:172] (0xc000116dc0) Data frame received for 1\nI0210 14:10:32.479427    1999 log.go:172] (0xc000700780) (1) Data frame handling\nI0210 14:10:32.479441    1999 log.go:172] (0xc000700780) (1) Data frame sent\nI0210 14:10:32.479451    1999 log.go:172] (0xc000116dc0) (0xc000700780) Stream removed, broadcasting: 1\nI0210 14:10:32.479623    1999 log.go:172] (0xc000116dc0) (0xc0005101e0) Stream removed, broadcasting: 5\nI0210 14:10:32.479794    1999 log.go:172] (0xc000116dc0) (0xc000700780) Stream removed, broadcasting: 1\nI0210 14:10:32.479809    1999 log.go:172] (0xc000116dc0) (0xc000700820) Stream removed, broadcasting: 3\nI0210 14:10:32.479818    1999 log.go:172] (0xc000116dc0) (0xc0005101e0) Stream removed, broadcasting: 5\nI0210 14:10:32.479978    1999 log.go:172] (0xc000116dc0) Go away received\n"
Feb 10 14:10:32.486: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 10 14:10:32.486: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 10 14:10:32.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5999 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 10 14:10:32.970: INFO: stderr: "I0210 14:10:32.636424    2018 log.go:172] (0xc0009fa2c0) (0xc000714780) Create stream\nI0210 14:10:32.636542    2018 log.go:172] (0xc0009fa2c0) (0xc000714780) Stream added, broadcasting: 1\nI0210 14:10:32.655813    2018 log.go:172] (0xc0009fa2c0) Reply frame received for 1\nI0210 14:10:32.655895    2018 log.go:172] (0xc0009fa2c0) (0xc00003bae0) Create stream\nI0210 14:10:32.655902    2018 log.go:172] (0xc0009fa2c0) (0xc00003bae0) Stream added, broadcasting: 3\nI0210 14:10:32.659619    2018 log.go:172] (0xc0009fa2c0) Reply frame received for 3\nI0210 14:10:32.659656    2018 log.go:172] (0xc0009fa2c0) (0xc000714820) Create stream\nI0210 14:10:32.659668    2018 log.go:172] (0xc0009fa2c0) (0xc000714820) Stream added, broadcasting: 5\nI0210 14:10:32.661746    2018 log.go:172] (0xc0009fa2c0) Reply frame received for 5\nI0210 14:10:32.758727    2018 log.go:172] (0xc0009fa2c0) Data frame received for 5\nI0210 14:10:32.758757    2018 log.go:172] (0xc000714820) (5) Data frame handling\nI0210 14:10:32.758775    2018 log.go:172] (0xc000714820) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0210 14:10:32.803519    2018 log.go:172] (0xc0009fa2c0) Data frame received for 3\nI0210 14:10:32.803536    2018 log.go:172] (0xc00003bae0) (3) Data frame handling\nI0210 14:10:32.803551    2018 log.go:172] (0xc00003bae0) (3) Data frame sent\nI0210 14:10:32.963583    2018 log.go:172] (0xc0009fa2c0) Data frame received for 1\nI0210 14:10:32.963672    2018 log.go:172] (0xc000714780) (1) Data frame handling\nI0210 14:10:32.963703    2018 log.go:172] (0xc000714780) (1) Data frame sent\nI0210 14:10:32.963719    2018 log.go:172] (0xc0009fa2c0) (0xc00003bae0) Stream removed, broadcasting: 3\nI0210 14:10:32.963755    2018 log.go:172] (0xc0009fa2c0) (0xc000714780) Stream removed, broadcasting: 1\nI0210 14:10:32.963776    2018 log.go:172] (0xc0009fa2c0) (0xc000714820) Stream removed, broadcasting: 5\nI0210 14:10:32.963811    2018 log.go:172] (0xc0009fa2c0) Go away received\nI0210 14:10:32.964236    2018 log.go:172] (0xc0009fa2c0) (0xc000714780) Stream removed, broadcasting: 1\nI0210 14:10:32.964303    2018 log.go:172] (0xc0009fa2c0) (0xc00003bae0) Stream removed, broadcasting: 3\nI0210 14:10:32.964348    2018 log.go:172] (0xc0009fa2c0) (0xc000714820) Stream removed, broadcasting: 5\n"
Feb 10 14:10:32.971: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 10 14:10:32.971: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 10 14:10:32.971: INFO: Waiting for statefulset status.replicas updated to 0
Feb 10 14:10:33.032: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Feb 10 14:10:43.045: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 10 14:10:43.045: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 10 14:10:43.045: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 10 14:10:43.082: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 10 14:10:43.082: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:09:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:09:59 +0000 UTC  }]
Feb 10 14:10:43.082: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:19 +0000 UTC  }]
Feb 10 14:10:43.082: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:19 +0000 UTC  }]
Feb 10 14:10:43.082: INFO: 
Feb 10 14:10:43.082: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 10 14:10:44.709: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 10 14:10:44.710: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:09:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:09:59 +0000 UTC  }]
Feb 10 14:10:44.710: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:19 +0000 UTC  }]
Feb 10 14:10:44.710: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:19 +0000 UTC  }]
Feb 10 14:10:44.710: INFO: 
Feb 10 14:10:44.710: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 10 14:10:45.720: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 10 14:10:45.721: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:09:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:09:59 +0000 UTC  }]
Feb 10 14:10:45.721: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:19 +0000 UTC  }]
Feb 10 14:10:45.721: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:19 +0000 UTC  }]
Feb 10 14:10:45.721: INFO: 
Feb 10 14:10:45.721: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 10 14:10:47.025: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 10 14:10:47.025: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:09:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:09:59 +0000 UTC  }]
Feb 10 14:10:47.026: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:19 +0000 UTC  }]
Feb 10 14:10:47.026: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:19 +0000 UTC  }]
Feb 10 14:10:47.026: INFO: 
Feb 10 14:10:47.026: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 10 14:10:48.038: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 10 14:10:48.038: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:09:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:09:59 +0000 UTC  }]
Feb 10 14:10:48.038: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:19 +0000 UTC  }]
Feb 10 14:10:48.038: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:19 +0000 UTC  }]
Feb 10 14:10:48.038: INFO: 
Feb 10 14:10:48.038: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 10 14:10:49.049: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 10 14:10:49.049: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:09:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:09:59 +0000 UTC  }]
Feb 10 14:10:49.049: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:19 +0000 UTC  }]
Feb 10 14:10:49.049: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:19 +0000 UTC  }]
Feb 10 14:10:49.049: INFO: 
Feb 10 14:10:49.049: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 10 14:10:50.126: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 10 14:10:50.126: INFO: ss-0  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:09:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:09:59 +0000 UTC  }]
Feb 10 14:10:50.126: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:19 +0000 UTC  }]
Feb 10 14:10:50.126: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:19 +0000 UTC  }]
Feb 10 14:10:50.126: INFO: 
Feb 10 14:10:50.127: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 10 14:10:51.146: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 10 14:10:51.147: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:19 +0000 UTC  }]
Feb 10 14:10:51.147: INFO: ss-2  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:19 +0000 UTC  }]
Feb 10 14:10:51.147: INFO: 
Feb 10 14:10:51.147: INFO: StatefulSet ss has not reached scale 0, at 2
Feb 10 14:10:52.165: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 10 14:10:52.165: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:19 +0000 UTC  }]
Feb 10 14:10:52.166: INFO: ss-2  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:10:19 +0000 UTC  }]
Feb 10 14:10:52.166: INFO: 
Feb 10 14:10:52.166: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-5999
Feb 10 14:10:53.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5999 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 10 14:10:53.416: INFO: rc: 1
Feb 10 14:10:53.416: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5999 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc002c167b0 exit status 1   true [0xc000011f38 0xc0000ea4a8 0xc0000ea830] [0xc000011f38 0xc0000ea4a8 0xc0000ea830] [0xc0000ea3b8 0xc0000ea7a0] [0xba6c50 0xba6c50] 0xc0022f7080 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
Feb 10 14:11:03.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5999 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 10 14:11:03.577: INFO: rc: 1
Feb 10 14:11:03.578: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5999 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc002bde330 exit status 1   true [0xc002388018 0xc002388060 0xc0023880f0] [0xc002388018 0xc002388060 0xc0023880f0] [0xc002388050 0xc0023880b0] [0xba6c50 0xba6c50] 0xc001f19d40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 10 14:15:59.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5999 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 10 14:16:00.206: INFO: rc: 1
Feb 10 14:16:00.206: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: 
Feb 10 14:16:00.206: INFO: Scaling statefulset ss to 0
Feb 10 14:16:00.215: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 10 14:16:00.221: INFO: Deleting all statefulset in ns statefulset-5999
Feb 10 14:16:00.223: INFO: Scaling statefulset ss to 0
Feb 10 14:16:00.231: INFO: Waiting for statefulset status.replicas updated to 0
Feb 10 14:16:00.234: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:16:00.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5999" for this suite.
Feb 10 14:16:06.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:16:06.420: INFO: namespace statefulset-5999 deletion completed in 6.157848096s

• [SLOW TEST:367.701 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:16:06.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 10 14:16:15.174: INFO: Successfully updated pod "labelsupdate4ced9419-7c1b-4320-9e71-2df5430fdcb8"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:16:17.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8840" for this suite.
Feb 10 14:16:39.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:16:39.393: INFO: namespace projected-8840 deletion completed in 22.16371149s

• [SLOW TEST:32.973 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:16:39.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-knr7
STEP: Creating a pod to test atomic-volume-subpath
Feb 10 14:16:39.568: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-knr7" in namespace "subpath-4493" to be "success or failure"
Feb 10 14:16:39.665: INFO: Pod "pod-subpath-test-configmap-knr7": Phase="Pending", Reason="", readiness=false. Elapsed: 96.405268ms
Feb 10 14:16:41.677: INFO: Pod "pod-subpath-test-configmap-knr7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108438348s
Feb 10 14:16:43.737: INFO: Pod "pod-subpath-test-configmap-knr7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.16901072s
Feb 10 14:16:45.971: INFO: Pod "pod-subpath-test-configmap-knr7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.402641237s
Feb 10 14:16:47.978: INFO: Pod "pod-subpath-test-configmap-knr7": Phase="Running", Reason="", readiness=true. Elapsed: 8.409953702s
Feb 10 14:16:49.989: INFO: Pod "pod-subpath-test-configmap-knr7": Phase="Running", Reason="", readiness=true. Elapsed: 10.420280246s
Feb 10 14:16:51.995: INFO: Pod "pod-subpath-test-configmap-knr7": Phase="Running", Reason="", readiness=true. Elapsed: 12.426840438s
Feb 10 14:16:54.008: INFO: Pod "pod-subpath-test-configmap-knr7": Phase="Running", Reason="", readiness=true. Elapsed: 14.43939786s
Feb 10 14:16:56.015: INFO: Pod "pod-subpath-test-configmap-knr7": Phase="Running", Reason="", readiness=true. Elapsed: 16.446892662s
Feb 10 14:16:58.025: INFO: Pod "pod-subpath-test-configmap-knr7": Phase="Running", Reason="", readiness=true. Elapsed: 18.457238245s
Feb 10 14:17:00.039: INFO: Pod "pod-subpath-test-configmap-knr7": Phase="Running", Reason="", readiness=true. Elapsed: 20.470308038s
Feb 10 14:17:02.047: INFO: Pod "pod-subpath-test-configmap-knr7": Phase="Running", Reason="", readiness=true. Elapsed: 22.478390629s
Feb 10 14:17:04.056: INFO: Pod "pod-subpath-test-configmap-knr7": Phase="Running", Reason="", readiness=true. Elapsed: 24.487753942s
Feb 10 14:17:06.061: INFO: Pod "pod-subpath-test-configmap-knr7": Phase="Running", Reason="", readiness=true. Elapsed: 26.493216096s
Feb 10 14:17:08.068: INFO: Pod "pod-subpath-test-configmap-knr7": Phase="Running", Reason="", readiness=true. Elapsed: 28.499566126s
Feb 10 14:17:10.075: INFO: Pod "pod-subpath-test-configmap-knr7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.506652307s
STEP: Saw pod success
Feb 10 14:17:10.075: INFO: Pod "pod-subpath-test-configmap-knr7" satisfied condition "success or failure"
Feb 10 14:17:10.079: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-knr7 container test-container-subpath-configmap-knr7: 
STEP: delete the pod
Feb 10 14:17:10.228: INFO: Waiting for pod pod-subpath-test-configmap-knr7 to disappear
Feb 10 14:17:10.237: INFO: Pod pod-subpath-test-configmap-knr7 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-knr7
Feb 10 14:17:10.237: INFO: Deleting pod "pod-subpath-test-configmap-knr7" in namespace "subpath-4493"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:17:10.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4493" for this suite.
Feb 10 14:17:16.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:17:16.628: INFO: namespace subpath-4493 deletion completed in 6.336695715s

• [SLOW TEST:37.235 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:17:16.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:18:10.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2373" for this suite.
Feb 10 14:18:16.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:18:16.427: INFO: namespace container-runtime-2373 deletion completed in 6.166543719s

• [SLOW TEST:59.797 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:18:16.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 10 14:18:16.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:18:24.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2329" for this suite.
Feb 10 14:19:26.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:19:26.887: INFO: namespace pods-2329 deletion completed in 1m2.203084616s

• [SLOW TEST:70.460 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:19:26.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 10 14:19:27.024: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4182dbb7-380d-4ae6-ad00-b65bf1dca127" in namespace "downward-api-8206" to be "success or failure"
Feb 10 14:19:27.052: INFO: Pod "downwardapi-volume-4182dbb7-380d-4ae6-ad00-b65bf1dca127": Phase="Pending", Reason="", readiness=false. Elapsed: 27.911006ms
Feb 10 14:19:29.066: INFO: Pod "downwardapi-volume-4182dbb7-380d-4ae6-ad00-b65bf1dca127": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042387025s
Feb 10 14:19:31.076: INFO: Pod "downwardapi-volume-4182dbb7-380d-4ae6-ad00-b65bf1dca127": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052353735s
Feb 10 14:19:33.083: INFO: Pod "downwardapi-volume-4182dbb7-380d-4ae6-ad00-b65bf1dca127": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058706676s
Feb 10 14:19:35.096: INFO: Pod "downwardapi-volume-4182dbb7-380d-4ae6-ad00-b65bf1dca127": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072231149s
Feb 10 14:19:37.110: INFO: Pod "downwardapi-volume-4182dbb7-380d-4ae6-ad00-b65bf1dca127": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.085847154s
STEP: Saw pod success
Feb 10 14:19:37.110: INFO: Pod "downwardapi-volume-4182dbb7-380d-4ae6-ad00-b65bf1dca127" satisfied condition "success or failure"
Feb 10 14:19:37.114: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-4182dbb7-380d-4ae6-ad00-b65bf1dca127 container client-container: 
STEP: delete the pod
Feb 10 14:19:37.180: INFO: Waiting for pod downwardapi-volume-4182dbb7-380d-4ae6-ad00-b65bf1dca127 to disappear
Feb 10 14:19:37.196: INFO: Pod downwardapi-volume-4182dbb7-380d-4ae6-ad00-b65bf1dca127 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:19:37.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8206" for this suite.
Feb 10 14:19:43.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:19:43.418: INFO: namespace downward-api-8206 deletion completed in 6.214126417s

• [SLOW TEST:16.531 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:19:43.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 10 14:19:43.530: INFO: Creating deployment "nginx-deployment"
Feb 10 14:19:43.595: INFO: Waiting for observed generation 1
Feb 10 14:19:46.633: INFO: Waiting for all required pods to come up
Feb 10 14:19:47.032: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb 10 14:20:11.581: INFO: Waiting for deployment "nginx-deployment" to complete
Feb 10 14:20:11.595: INFO: Updating deployment "nginx-deployment" with a non-existent image
Feb 10 14:20:11.617: INFO: Updating deployment nginx-deployment
Feb 10 14:20:11.617: INFO: Waiting for observed generation 2
Feb 10 14:20:14.370: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb 10 14:20:14.377: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb 10 14:20:14.953: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 10 14:20:15.958: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb 10 14:20:15.959: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb 10 14:20:15.963: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 10 14:20:15.970: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Feb 10 14:20:15.970: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Feb 10 14:20:15.979: INFO: Updating deployment nginx-deployment
Feb 10 14:20:15.979: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Feb 10 14:20:17.153: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb 10 14:20:17.328: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 10 14:20:22.031: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-8344,SelfLink:/apis/apps/v1/namespaces/deployment-8344/deployments/nginx-deployment,UID:bde8a946-aa67-449a-8d18-c681313b98ec,ResourceVersion:23829913,Generation:3,CreationTimestamp:2020-02-10 14:19:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:25,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-02-10 14:20:14 +0000 UTC 2020-02-10 14:19:43 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-02-10 14:20:17 +0000 UTC 2020-02-10 14:20:17 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Feb 10 14:20:22.967: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-8344,SelfLink:/apis/apps/v1/namespaces/deployment-8344/replicasets/nginx-deployment-55fb7cb77f,UID:8de8c8b4-7b52-4eeb-a549-889e7739c9b1,ResourceVersion:23829923,Generation:3,CreationTimestamp:2020-02-10 14:20:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment bde8a946-aa67-449a-8d18-c681313b98ec 0xc002539dc7 0xc002539dc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 10 14:20:22.967: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Feb 10 14:20:22.968: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-8344,SelfLink:/apis/apps/v1/namespaces/deployment-8344/replicasets/nginx-deployment-7b8c6f4498,UID:4bdcc7ef-e711-4542-be13-3c92a9c46c17,ResourceVersion:23829912,Generation:3,CreationTimestamp:2020-02-10 14:19:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment bde8a946-aa67-449a-8d18-c681313b98ec 0xc002539e97 0xc002539e98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Feb 10 14:20:24.376: INFO: Pod "nginx-deployment-55fb7cb77f-27htq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-27htq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8344,SelfLink:/api/v1/namespaces/deployment-8344/pods/nginx-deployment-55fb7cb77f-27htq,UID:1f20e819-796e-42ce-92ca-2e8774328d08,ResourceVersion:23829828,Generation:0,CreationTimestamp:2020-02-10 14:20:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8de8c8b4-7b52-4eeb-a549-889e7739c9b1 0xc0018b7de7 0xc0018b7de8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9wwtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9wwtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-9wwtm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0018b7e60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0018b7e80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:11 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-10 14:20:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 10 14:20:24.376: INFO: Pod "nginx-deployment-55fb7cb77f-4ctgr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4ctgr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8344,SelfLink:/api/v1/namespaces/deployment-8344/pods/nginx-deployment-55fb7cb77f-4ctgr,UID:5f8e5f03-e37b-4437-a3d9-f0d7ada89aaf,ResourceVersion:23829927,Generation:0,CreationTimestamp:2020-02-10 14:20:17 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8de8c8b4-7b52-4eeb-a549-889e7739c9b1 0xc0018b7f57 0xc0018b7f58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9wwtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9wwtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-9wwtm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001724090} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0017240b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:17 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-10 14:20:20 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 10 14:20:24.376: INFO: Pod "nginx-deployment-55fb7cb77f-55r7h" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-55r7h,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8344,SelfLink:/api/v1/namespaces/deployment-8344/pods/nginx-deployment-55fb7cb77f-55r7h,UID:812ecf06-4f5a-4fd0-b6be-f9abdd033d09,ResourceVersion:23829845,Generation:0,CreationTimestamp:2020-02-10 14:20:11 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8de8c8b4-7b52-4eeb-a549-889e7739c9b1 0xc001724247 0xc001724248}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9wwtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9wwtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-9wwtm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001724410} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001724460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:11 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-10 14:20:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 10 14:20:24.377: INFO: Pod "nginx-deployment-55fb7cb77f-7m8cc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7m8cc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8344,SelfLink:/api/v1/namespaces/deployment-8344/pods/nginx-deployment-55fb7cb77f-7m8cc,UID:a64f7951-93c8-41d5-905f-4fd5d2b57596,ResourceVersion:23829889,Generation:0,CreationTimestamp:2020-02-10 14:20:17 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8de8c8b4-7b52-4eeb-a549-889e7739c9b1 0xc001724607 0xc001724608}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9wwtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9wwtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-9wwtm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001724690} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0017246b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:17 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 10 14:20:24.377: INFO: Pod "nginx-deployment-55fb7cb77f-dhrr4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-dhrr4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8344,SelfLink:/api/v1/namespaces/deployment-8344/pods/nginx-deployment-55fb7cb77f-dhrr4,UID:fd10f7a7-7cea-488f-9438-e0213a9d9769,ResourceVersion:23829844,Generation:0,CreationTimestamp:2020-02-10 14:20:11 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8de8c8b4-7b52-4eeb-a549-889e7739c9b1 0xc001724737 0xc001724738}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9wwtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9wwtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-9wwtm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001724820} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001724840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:11 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-10 14:20:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 10 14:20:24.377: INFO: Pod "nginx-deployment-55fb7cb77f-fxhmv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fxhmv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8344,SelfLink:/api/v1/namespaces/deployment-8344/pods/nginx-deployment-55fb7cb77f-fxhmv,UID:d5edeed9-be26-4216-8059-ee8e1d51314e,ResourceVersion:23829848,Generation:0,CreationTimestamp:2020-02-10 14:20:11 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8de8c8b4-7b52-4eeb-a549-889e7739c9b1 0xc001724917 0xc001724918}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9wwtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9wwtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-9wwtm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001724a10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001724a80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:11 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-10 14:20:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 10 14:20:24.377: INFO: Pod "nginx-deployment-55fb7cb77f-g84cd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-g84cd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8344,SelfLink:/api/v1/namespaces/deployment-8344/pods/nginx-deployment-55fb7cb77f-g84cd,UID:2cd2c1e3-a7ab-4fe5-9336-6194974eca5e,ResourceVersion:23829888,Generation:0,CreationTimestamp:2020-02-10 14:20:17 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8de8c8b4-7b52-4eeb-a549-889e7739c9b1 0xc001724c47 0xc001724c48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9wwtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9wwtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-9wwtm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001724d10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001724d30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:17 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 10 14:20:24.378: INFO: Pod "nginx-deployment-55fb7cb77f-hzq5d" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hzq5d,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8344,SelfLink:/api/v1/namespaces/deployment-8344/pods/nginx-deployment-55fb7cb77f-hzq5d,UID:6c0f5714-0335-4bd6-b9d7-9421973c79d5,ResourceVersion:23829921,Generation:0,CreationTimestamp:2020-02-10 14:20:17 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8de8c8b4-7b52-4eeb-a549-889e7739c9b1 0xc001724e77 0xc001724e78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9wwtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9wwtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-9wwtm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001724f20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001724f70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:17 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-10 14:20:18 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 10 14:20:24.378: INFO: Pod "nginx-deployment-55fb7cb77f-j5qdx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-j5qdx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8344,SelfLink:/api/v1/namespaces/deployment-8344/pods/nginx-deployment-55fb7cb77f-j5qdx,UID:c923c9f6-28e8-4af4-8009-f7ddc961d899,ResourceVersion:23829910,Generation:0,CreationTimestamp:2020-02-10 14:20:17 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8de8c8b4-7b52-4eeb-a549-889e7739c9b1 0xc001725137 0xc001725138}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9wwtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9wwtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-9wwtm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001725260} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001725280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:17 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-10 14:20:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 10 14:20:24.378: INFO: Pod "nginx-deployment-55fb7cb77f-nlx2c" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nlx2c,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8344,SelfLink:/api/v1/namespaces/deployment-8344/pods/nginx-deployment-55fb7cb77f-nlx2c,UID:e33931ee-dd5d-4e22-909e-0c07bce40704,ResourceVersion:23829884,Generation:0,CreationTimestamp:2020-02-10 14:20:17 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8de8c8b4-7b52-4eeb-a549-889e7739c9b1 0xc001725427 0xc001725428}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9wwtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9wwtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-9wwtm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001725510} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001725570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:17 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 10 14:20:24.379: INFO: Pod "nginx-deployment-55fb7cb77f-r5fbf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-r5fbf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8344,SelfLink:/api/v1/namespaces/deployment-8344/pods/nginx-deployment-55fb7cb77f-r5fbf,UID:a1a584ea-dd39-4f32-83e3-f436de9aaa3f,ResourceVersion:23829823,Generation:0,CreationTimestamp:2020-02-10 14:20:11 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8de8c8b4-7b52-4eeb-a549-889e7739c9b1 0xc001725647 0xc001725648}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9wwtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9wwtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-9wwtm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001725770} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001725790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:11 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-10 14:20:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 10 14:20:24.379: INFO: Pod "nginx-deployment-55fb7cb77f-r8lt9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-r8lt9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8344,SelfLink:/api/v1/namespaces/deployment-8344/pods/nginx-deployment-55fb7cb77f-r8lt9,UID:638ff314-10f9-4172-82c4-69f19657bb1a,ResourceVersion:23829902,Generation:0,CreationTimestamp:2020-02-10 14:20:17 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8de8c8b4-7b52-4eeb-a549-889e7739c9b1 0xc001725957 0xc001725958}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9wwtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9wwtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-9wwtm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001725a00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001725a40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 10 14:20:24.380: INFO: Pod "nginx-deployment-55fb7cb77f-tfjt6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-tfjt6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8344,SelfLink:/api/v1/namespaces/deployment-8344/pods/nginx-deployment-55fb7cb77f-tfjt6,UID:fd5d1957-6313-4694-8df8-c994466e4a7b,ResourceVersion:23829890,Generation:0,CreationTimestamp:2020-02-10 14:20:17 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8de8c8b4-7b52-4eeb-a549-889e7739c9b1 0xc001725b07 0xc001725b08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9wwtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9wwtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-9wwtm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001725bd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001725ca0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:17 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 10 14:20:24.380: INFO: Pod "nginx-deployment-7b8c6f4498-22lcx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-22lcx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8344,SelfLink:/api/v1/namespaces/deployment-8344/pods/nginx-deployment-7b8c6f4498-22lcx,UID:a7b89876-0907-4485-9708-1b1b358f7e26,ResourceVersion:23829915,Generation:0,CreationTimestamp:2020-02-10 14:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4bdcc7ef-e711-4542-be13-3c92a9c46c17 0xc001725d27 0xc001725d28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9wwtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9wwtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9wwtm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001725e60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001725e80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:17 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-10 14:20:18 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 10 14:20:24.380: INFO: Pod "nginx-deployment-7b8c6f4498-2gwj6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2gwj6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8344,SelfLink:/api/v1/namespaces/deployment-8344/pods/nginx-deployment-7b8c6f4498-2gwj6,UID:9216446e-60f4-46cb-bbf4-1bce47808387,ResourceVersion:23829906,Generation:0,CreationTimestamp:2020-02-10 14:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4bdcc7ef-e711-4542-be13-3c92a9c46c17 0xc001725f57 0xc001725f58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9wwtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9wwtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9wwtm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001725fd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001725ff0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 10 14:20:24.381: INFO: Pod "nginx-deployment-7b8c6f4498-2tf9l" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2tf9l,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8344,SelfLink:/api/v1/namespaces/deployment-8344/pods/nginx-deployment-7b8c6f4498-2tf9l,UID:c22f49d7-48a9-4545-ab2c-a2f00b08af7c,ResourceVersion:23829886,Generation:0,CreationTimestamp:2020-02-10 14:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4bdcc7ef-e711-4542-be13-3c92a9c46c17 0xc000d540e7 0xc000d540e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9wwtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9wwtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9wwtm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000d54190} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000d54270}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:17 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 10 14:20:24.382: INFO: Pod "nginx-deployment-7b8c6f4498-2xqj7" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2xqj7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8344,SelfLink:/api/v1/namespaces/deployment-8344/pods/nginx-deployment-7b8c6f4498-2xqj7,UID:669cf221-f738-4c7e-ae19-35fa3dc8dda7,ResourceVersion:23829792,Generation:0,CreationTimestamp:2020-02-10 14:19:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4bdcc7ef-e711-4542-be13-3c92a9c46c17 0xc000d54307 0xc000d54308}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9wwtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9wwtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9wwtm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000d543d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000d543f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:19:43 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:19:43 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2020-02-10 14:19:43 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-10 14:20:10 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://4fb01a45d8aa14bc64036d10c6820d024d78282f6c7b81a34277d3e72db57d72}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 10 14:20:24.382: INFO: Pod "nginx-deployment-7b8c6f4498-56xl4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-56xl4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8344,SelfLink:/api/v1/namespaces/deployment-8344/pods/nginx-deployment-7b8c6f4498-56xl4,UID:291b8020-afe3-4115-b3a0-3967d3d160c9,ResourceVersion:23829908,Generation:0,CreationTimestamp:2020-02-10 14:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4bdcc7ef-e711-4542-be13-3c92a9c46c17 0xc000d54597 0xc000d54598}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9wwtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9wwtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9wwtm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000d54640} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000d546c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 10 14:20:24.383: INFO: Pod "nginx-deployment-7b8c6f4498-5zlks" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5zlks,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8344,SelfLink:/api/v1/namespaces/deployment-8344/pods/nginx-deployment-7b8c6f4498-5zlks,UID:617099b5-e9e0-4f07-bc95-191c169d58bd,ResourceVersion:23829762,Generation:0,CreationTimestamp:2020-02-10 14:19:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4bdcc7ef-e711-4542-be13-3c92a9c46c17 0xc000d54767 0xc000d54768}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9wwtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9wwtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9wwtm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000d54840} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000d54870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:19:44 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:07 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:19:43 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-02-10 14:19:44 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-10 14:20:05 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a7e4393db8297b0fda815707f229738eff7c690681ee0664cc332034301fdaea}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 10 14:20:24.383: INFO: Pod "nginx-deployment-7b8c6f4498-99wwg" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-99wwg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8344,SelfLink:/api/v1/namespaces/deployment-8344/pods/nginx-deployment-7b8c6f4498-99wwg,UID:562b9d73-1095-46dd-943f-53874438b014,ResourceVersion:23829756,Generation:0,CreationTimestamp:2020-02-10 14:19:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4bdcc7ef-e711-4542-be13-3c92a9c46c17 0xc000d549d7 0xc000d549d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9wwtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9wwtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9wwtm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000d54aa0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000d54ac0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:19:43 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:07 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:19:43 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2020-02-10 14:19:43 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-10 14:20:05 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://fe292e7669b1b3c05989011dc79cb6b8216d29c837d3da23f792cd7541f8c0a3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 10 14:20:24.384: INFO: Pod "nginx-deployment-7b8c6f4498-bdjtr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bdjtr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8344,SelfLink:/api/v1/namespaces/deployment-8344/pods/nginx-deployment-7b8c6f4498-bdjtr,UID:320df3b2-f346-40d7-8bf1-4cd4f941876b,ResourceVersion:23829887,Generation:0,CreationTimestamp:2020-02-10 14:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4bdcc7ef-e711-4542-be13-3c92a9c46c17 0xc000d54c47 0xc000d54c48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9wwtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9wwtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9wwtm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000d54d60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000d54d80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:17 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 10 14:20:24.384: INFO: Pod "nginx-deployment-7b8c6f4498-c7vkj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-c7vkj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8344,SelfLink:/api/v1/namespaces/deployment-8344/pods/nginx-deployment-7b8c6f4498-c7vkj,UID:4e23568d-acd3-45fd-bf69-928fda077920,ResourceVersion:23829907,Generation:0,CreationTimestamp:2020-02-10 14:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4bdcc7ef-e711-4542-be13-3c92a9c46c17 0xc000d54e77 0xc000d54e78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9wwtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9wwtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9wwtm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000d54f20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000d54f70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 10 14:20:24.385: INFO: Pod "nginx-deployment-7b8c6f4498-dz8wc" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dz8wc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8344,SelfLink:/api/v1/namespaces/deployment-8344/pods/nginx-deployment-7b8c6f4498-dz8wc,UID:4dd4af7b-eaac-4b8c-b188-1045a29ca3f9,ResourceVersion:23829749,Generation:0,CreationTimestamp:2020-02-10 14:19:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4bdcc7ef-e711-4542-be13-3c92a9c46c17 0xc000d55007 0xc000d55008}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9wwtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9wwtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9wwtm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000d55170} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000d55190}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:19:44 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:19:43 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-02-10 14:19:44 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-10 14:20:05 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://63942542f5d48f5971f530ca974c34914520b502af43f7c158c17ba6131513e8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 10 14:20:24.385: INFO: Pod "nginx-deployment-7b8c6f4498-fx89z" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fx89z,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8344,SelfLink:/api/v1/namespaces/deployment-8344/pods/nginx-deployment-7b8c6f4498-fx89z,UID:b4a9d3e1-d40b-4633-aecd-52bba71e8823,ResourceVersion:23829903,Generation:0,CreationTimestamp:2020-02-10 14:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4bdcc7ef-e711-4542-be13-3c92a9c46c17 0xc000d55447 0xc000d55448}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9wwtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9wwtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9wwtm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000d55580} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000d555a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 10 14:20:24.385: INFO: Pod "nginx-deployment-7b8c6f4498-gfdnm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gfdnm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8344,SelfLink:/api/v1/namespaces/deployment-8344/pods/nginx-deployment-7b8c6f4498-gfdnm,UID:610b1808-9828-4904-8845-0f9e4fea0a3f,ResourceVersion:23829885,Generation:0,CreationTimestamp:2020-02-10 14:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4bdcc7ef-e711-4542-be13-3c92a9c46c17 0xc000d556e7 0xc000d556e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9wwtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9wwtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9wwtm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000d55770} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000d55790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:17 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 10 14:20:24.386: INFO: Pod "nginx-deployment-7b8c6f4498-gs4b2" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gs4b2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8344,SelfLink:/api/v1/namespaces/deployment-8344/pods/nginx-deployment-7b8c6f4498-gs4b2,UID:9bfc1eac-3efd-463a-8e51-6b55c0296ab4,ResourceVersion:23829759,Generation:0,CreationTimestamp:2020-02-10 14:19:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4bdcc7ef-e711-4542-be13-3c92a9c46c17 0xc000d55817 0xc000d55818}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9wwtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9wwtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9wwtm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000d55880} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000d55910}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:19:44 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:07 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:19:44 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-02-10 14:19:44 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-10 14:20:04 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d992556a4b878164ef8eac8c30c5515ecfb92287d1fb96e169ce45d5e9d28197}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 10 14:20:24.386: INFO: Pod "nginx-deployment-7b8c6f4498-hcl9b" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hcl9b,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8344,SelfLink:/api/v1/namespaces/deployment-8344/pods/nginx-deployment-7b8c6f4498-hcl9b,UID:48460189-e85d-4f3a-aa01-e70e7b3912d0,ResourceVersion:23829891,Generation:0,CreationTimestamp:2020-02-10 14:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4bdcc7ef-e711-4542-be13-3c92a9c46c17 0xc000d559e7 0xc000d559e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9wwtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9wwtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9wwtm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000d55a50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000d55a70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:17 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 10 14:20:24.386: INFO: Pod "nginx-deployment-7b8c6f4498-j5tfc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-j5tfc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8344,SelfLink:/api/v1/namespaces/deployment-8344/pods/nginx-deployment-7b8c6f4498-j5tfc,UID:24fdc806-fc97-4f47-8698-03468d0bc623,ResourceVersion:23829876,Generation:0,CreationTimestamp:2020-02-10 14:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4bdcc7ef-e711-4542-be13-3c92a9c46c17 0xc000d55c37 0xc000d55c38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9wwtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9wwtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9wwtm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000d55df0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000d55e30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:17 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 10 14:20:24.387: INFO: Pod "nginx-deployment-7b8c6f4498-lhhs5" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lhhs5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8344,SelfLink:/api/v1/namespaces/deployment-8344/pods/nginx-deployment-7b8c6f4498-lhhs5,UID:700c8dab-1e4c-4cab-81eb-c00ca0269ba2,ResourceVersion:23829776,Generation:0,CreationTimestamp:2020-02-10 14:19:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4bdcc7ef-e711-4542-be13-3c92a9c46c17 0xc001760097 0xc001760098}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9wwtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9wwtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9wwtm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001760110} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001760130}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:19:44 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:10 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:10 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:19:43 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-10 14:19:44 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-10 14:20:09 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8b55cce7935b9904c30ec60050ac7fe7e68354d3acbc8bfd87c68e2c3d1535c2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 10 14:20:24.387: INFO: Pod "nginx-deployment-7b8c6f4498-sc47f" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sc47f,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8344,SelfLink:/api/v1/namespaces/deployment-8344/pods/nginx-deployment-7b8c6f4498-sc47f,UID:f1d29ff5-6f4f-430d-a151-8fc9313a8bbf,ResourceVersion:23829904,Generation:0,CreationTimestamp:2020-02-10 14:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4bdcc7ef-e711-4542-be13-3c92a9c46c17 0xc001760317 0xc001760318}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9wwtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9wwtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9wwtm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0017603b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0017603f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 10 14:20:24.387: INFO: Pod "nginx-deployment-7b8c6f4498-th94x" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-th94x,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8344,SelfLink:/api/v1/namespaces/deployment-8344/pods/nginx-deployment-7b8c6f4498-th94x,UID:b39845ef-c73e-4e09-862e-01517a48800d,ResourceVersion:23829783,Generation:0,CreationTimestamp:2020-02-10 14:19:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4bdcc7ef-e711-4542-be13-3c92a9c46c17 0xc0017604e7 0xc0017604e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9wwtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9wwtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9wwtm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001760600} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001760620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:19:44 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:19:43 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-10 14:19:44 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-10 14:20:09 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://c7bd64aaac29d5f1bc87ff979fbdea2fda45fbb35d4efab2afa32e5e0e0356b3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 10 14:20:24.387: INFO: Pod "nginx-deployment-7b8c6f4498-tw6kc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tw6kc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8344,SelfLink:/api/v1/namespaces/deployment-8344/pods/nginx-deployment-7b8c6f4498-tw6kc,UID:99f9c064-285f-480e-8a7b-07aff790e64c,ResourceVersion:23829929,Generation:0,CreationTimestamp:2020-02-10 14:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4bdcc7ef-e711-4542-be13-3c92a9c46c17 0xc001760747 0xc001760748}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9wwtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9wwtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9wwtm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0017607b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0017607d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:17 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-10 14:20:18 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 10 14:20:24.388: INFO: Pod "nginx-deployment-7b8c6f4498-xjbr7" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xjbr7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8344,SelfLink:/api/v1/namespaces/deployment-8344/pods/nginx-deployment-7b8c6f4498-xjbr7,UID:560bf764-172f-4bcb-ae58-cee62994d429,ResourceVersion:23829786,Generation:0,CreationTimestamp:2020-02-10 14:19:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4bdcc7ef-e711-4542-be13-3c92a9c46c17 0xc001760897 0xc001760898}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9wwtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9wwtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9wwtm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0017609b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0017609d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:19:43 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:20:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:19:43 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-02-10 14:19:43 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-10 14:20:09 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://94321f041bc39cdff7346bd5bc284af857a73a6a51ee465cde71738dd0a428c4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:20:24.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8344" for this suite.
Feb 10 14:21:40.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:21:40.910: INFO: namespace deployment-8344 deletion completed in 1m14.493924713s

• [SLOW TEST:117.491 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:21:40.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-386c9c6a-918c-443e-bddc-6c6a7518b91e in namespace container-probe-6438
Feb 10 14:21:49.095: INFO: Started pod liveness-386c9c6a-918c-443e-bddc-6c6a7518b91e in namespace container-probe-6438
STEP: checking the pod's current state and verifying that restartCount is present
Feb 10 14:21:49.098: INFO: Initial restart count of pod liveness-386c9c6a-918c-443e-bddc-6c6a7518b91e is 0
Feb 10 14:22:13.216: INFO: Restart count of pod container-probe-6438/liveness-386c9c6a-918c-443e-bddc-6c6a7518b91e is now 1 (24.117240777s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:22:13.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6438" for this suite.
Feb 10 14:22:19.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:22:19.420: INFO: namespace container-probe-6438 deletion completed in 6.158507133s

• [SLOW TEST:38.510 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:22:19.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:22:28.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6641" for this suite.
Feb 10 14:22:50.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:22:50.824: INFO: namespace replication-controller-6641 deletion completed in 22.160310106s

• [SLOW TEST:31.403 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:22:50.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-f605f99c-19f8-4e0e-bf9a-200e2ad96e1d
STEP: Creating a pod to test consume configMaps
Feb 10 14:22:50.935: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-38f79b22-e17c-4073-ad2c-30c3db6052c1" in namespace "projected-1154" to be "success or failure"
Feb 10 14:22:50.945: INFO: Pod "pod-projected-configmaps-38f79b22-e17c-4073-ad2c-30c3db6052c1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.511403ms
Feb 10 14:22:52.954: INFO: Pod "pod-projected-configmaps-38f79b22-e17c-4073-ad2c-30c3db6052c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019245651s
Feb 10 14:22:54.969: INFO: Pod "pod-projected-configmaps-38f79b22-e17c-4073-ad2c-30c3db6052c1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033459865s
Feb 10 14:22:56.979: INFO: Pod "pod-projected-configmaps-38f79b22-e17c-4073-ad2c-30c3db6052c1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044350538s
Feb 10 14:22:58.993: INFO: Pod "pod-projected-configmaps-38f79b22-e17c-4073-ad2c-30c3db6052c1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058056853s
Feb 10 14:23:01.003: INFO: Pod "pod-projected-configmaps-38f79b22-e17c-4073-ad2c-30c3db6052c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.067571839s
STEP: Saw pod success
Feb 10 14:23:01.003: INFO: Pod "pod-projected-configmaps-38f79b22-e17c-4073-ad2c-30c3db6052c1" satisfied condition "success or failure"
Feb 10 14:23:01.010: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-38f79b22-e17c-4073-ad2c-30c3db6052c1 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 10 14:23:01.125: INFO: Waiting for pod pod-projected-configmaps-38f79b22-e17c-4073-ad2c-30c3db6052c1 to disappear
Feb 10 14:23:01.131: INFO: Pod pod-projected-configmaps-38f79b22-e17c-4073-ad2c-30c3db6052c1 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:23:01.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1154" for this suite.
Feb 10 14:23:07.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:23:07.303: INFO: namespace projected-1154 deletion completed in 6.164780191s

• [SLOW TEST:16.478 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:23:07.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 10 14:23:07.402: INFO: Waiting up to 5m0s for pod "pod-f7625340-9836-4134-a352-2618cfab2a51" in namespace "emptydir-5992" to be "success or failure"
Feb 10 14:23:07.422: INFO: Pod "pod-f7625340-9836-4134-a352-2618cfab2a51": Phase="Pending", Reason="", readiness=false. Elapsed: 20.172653ms
Feb 10 14:23:09.432: INFO: Pod "pod-f7625340-9836-4134-a352-2618cfab2a51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029641476s
Feb 10 14:23:11.440: INFO: Pod "pod-f7625340-9836-4134-a352-2618cfab2a51": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038139187s
Feb 10 14:23:13.449: INFO: Pod "pod-f7625340-9836-4134-a352-2618cfab2a51": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046789023s
Feb 10 14:23:15.454: INFO: Pod "pod-f7625340-9836-4134-a352-2618cfab2a51": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052189715s
Feb 10 14:23:17.462: INFO: Pod "pod-f7625340-9836-4134-a352-2618cfab2a51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.060202903s
STEP: Saw pod success
Feb 10 14:23:17.462: INFO: Pod "pod-f7625340-9836-4134-a352-2618cfab2a51" satisfied condition "success or failure"
Feb 10 14:23:17.466: INFO: Trying to get logs from node iruya-node pod pod-f7625340-9836-4134-a352-2618cfab2a51 container test-container: 
STEP: delete the pod
Feb 10 14:23:17.529: INFO: Waiting for pod pod-f7625340-9836-4134-a352-2618cfab2a51 to disappear
Feb 10 14:23:17.538: INFO: Pod pod-f7625340-9836-4134-a352-2618cfab2a51 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:23:17.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5992" for this suite.
Feb 10 14:23:23.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:23:23.762: INFO: namespace emptydir-5992 deletion completed in 6.209992944s

• [SLOW TEST:16.459 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
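The (non-root,0777,tmpfs) case above reduces to verifying permission bits on a file in the emptyDir mount. A local sketch of the mode check, assuming an ordinary temp file stands in for the tmpfs-backed volume the test container actually writes to:

```python
import os
import stat
import tempfile

def mode_bits(path: str) -> int:
    """Return only the permission bits (e.g. 0o777) of a file's mode."""
    return stat.S_IMODE(os.stat(path).st_mode)

# Stand-in for the emptyDir-mounted file; chmod is not affected by umask,
# so the full 0777 mode sticks.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o777)
print(oct(mode_bits(path)))  # 0o777
os.unlink(path)
```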
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:23:23.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 10 14:23:23.936: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb 10 14:23:28.946: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 10 14:23:34.960: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 10 14:23:35.039: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-1110,SelfLink:/apis/apps/v1/namespaces/deployment-1110/deployments/test-cleanup-deployment,UID:1774ed01-ea79-4bea-aed2-d3545b6434f4,ResourceVersion:23830523,Generation:1,CreationTimestamp:2020-02-10 14:23:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Feb 10 14:23:35.065: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-1110,SelfLink:/apis/apps/v1/namespaces/deployment-1110/replicasets/test-cleanup-deployment-55bbcbc84c,UID:89c6162e-40f2-4f2a-80d9-51efed1d9e1b,ResourceVersion:23830525,Generation:1,CreationTimestamp:2020-02-10 14:23:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 1774ed01-ea79-4bea-aed2-d3545b6434f4 0xc002d2cf57 0xc002d2cf58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 10 14:23:35.065: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Feb 10 14:23:35.065: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-1110,SelfLink:/apis/apps/v1/namespaces/deployment-1110/replicasets/test-cleanup-controller,UID:e5256a68-8fff-4f2c-9d27-437172c9279a,ResourceVersion:23830524,Generation:1,CreationTimestamp:2020-02-10 14:23:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 1774ed01-ea79-4bea-aed2-d3545b6434f4 0xc002d2ce87 0xc002d2ce88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 10 14:23:35.079: INFO: Pod "test-cleanup-controller-f49dp" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-f49dp,GenerateName:test-cleanup-controller-,Namespace:deployment-1110,SelfLink:/api/v1/namespaces/deployment-1110/pods/test-cleanup-controller-f49dp,UID:befe9cfb-22fd-4952-a8a6-b7683507e2de,ResourceVersion:23830521,Generation:0,CreationTimestamp:2020-02-10 14:23:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller e5256a68-8fff-4f2c-9d27-437172c9279a 0xc002d6d467 0xc002d6d468}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rbhcp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rbhcp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rbhcp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d6d4e0} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc002d6d500}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:23:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:23:34 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:23:34 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:23:23 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-10 14:23:24 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-10 14:23:33 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d756729f3852a2e4325cc377a970f6108df4cd9e6bd41f480601be87397e5ccd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 10 14:23:35.080: INFO: Pod "test-cleanup-deployment-55bbcbc84c-vv47n" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-vv47n,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-1110,SelfLink:/api/v1/namespaces/deployment-1110/pods/test-cleanup-deployment-55bbcbc84c-vv47n,UID:7b908d91-9f09-4fc0-9025-c46de6f76278,ResourceVersion:23830529,Generation:0,CreationTimestamp:2020-02-10 14:23:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 89c6162e-40f2-4f2a-80d9-51efed1d9e1b 0xc002d6d5e7 0xc002d6d5e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rbhcp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rbhcp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-rbhcp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d6d660} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d6d680}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:23:35 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:23:35.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1110" for this suite.
Feb 10 14:23:43.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:23:43.329: INFO: namespace deployment-1110 deletion completed in 8.156375783s

• [SLOW TEST:19.566 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
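The cleanup behavior asserted above (old ReplicaSets removed once the new rollout supersedes them, with RevisionHistoryLimit:*0 visible in the dumped Deployment spec) can be sketched as a pruning rule. This is an illustrative simplification, not the deployment controller's actual code:

```python
def replica_sets_to_delete(old_rs: list, limit: int) -> list:
    """With revisionHistoryLimit=limit, delete inactive (0-replica) old
    ReplicaSets beyond the `limit` most recent revisions; limit=0 means
    every fully scaled-down old ReplicaSet is eligible."""
    inactive = [rs for rs in old_rs if rs["replicas"] == 0]
    inactive.sort(key=lambda rs: rs["revision"])  # oldest first
    keep = len(inactive) - limit
    return inactive[:max(keep, 0)]
```

With limit=0, as in this test, both old revisions below would be deleted; with limit=1, only the oldest goes.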
SSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:23:43.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Feb 10 14:23:43.604: INFO: Waiting up to 5m0s for pod "client-containers-fb3f9203-601b-4b2c-8d35-ffd467b11bdc" in namespace "containers-9314" to be "success or failure"
Feb 10 14:23:43.625: INFO: Pod "client-containers-fb3f9203-601b-4b2c-8d35-ffd467b11bdc": Phase="Pending", Reason="", readiness=false. Elapsed: 20.231374ms
Feb 10 14:23:45.629: INFO: Pod "client-containers-fb3f9203-601b-4b2c-8d35-ffd467b11bdc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02453884s
Feb 10 14:23:47.637: INFO: Pod "client-containers-fb3f9203-601b-4b2c-8d35-ffd467b11bdc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033034632s
Feb 10 14:23:49.645: INFO: Pod "client-containers-fb3f9203-601b-4b2c-8d35-ffd467b11bdc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040876448s
Feb 10 14:23:51.665: INFO: Pod "client-containers-fb3f9203-601b-4b2c-8d35-ffd467b11bdc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060805841s
Feb 10 14:23:53.677: INFO: Pod "client-containers-fb3f9203-601b-4b2c-8d35-ffd467b11bdc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072315741s
STEP: Saw pod success
Feb 10 14:23:53.677: INFO: Pod "client-containers-fb3f9203-601b-4b2c-8d35-ffd467b11bdc" satisfied condition "success or failure"
Feb 10 14:23:53.681: INFO: Trying to get logs from node iruya-node pod client-containers-fb3f9203-601b-4b2c-8d35-ffd467b11bdc container test-container: 
STEP: delete the pod
Feb 10 14:23:53.848: INFO: Waiting for pod client-containers-fb3f9203-601b-4b2c-8d35-ffd467b11bdc to disappear
Feb 10 14:23:53.874: INFO: Pod client-containers-fb3f9203-601b-4b2c-8d35-ffd467b11bdc no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:23:53.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9314" for this suite.
Feb 10 14:24:00.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:24:00.133: INFO: namespace containers-9314 deletion completed in 6.241887125s

• [SLOW TEST:16.804 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
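The docker-cmd override test above exercises the Pod spec rule that a container's `args` replace the image's default CMD, while `command` replaces its ENTRYPOINT (and, once set, drops the image CMD unless `args` is also given). A hedged sketch of how the effective invocation resolves; the real resolution happens in the container runtime:

```python
def effective_invocation(image_entrypoint, image_cmd, command=None, args=None):
    """Resolve what actually runs, per the Kubernetes command/args rules:
    - neither set:  image ENTRYPOINT + image CMD
    - command set:  command replaces ENTRYPOINT; image CMD is ignored
    - args set:     args replace the image CMD
    """
    entrypoint = command if command is not None else image_entrypoint
    if args is not None:
        cmd = args
    elif command is not None:
        cmd = []  # image CMD is dropped once command is overridden
    else:
        cmd = image_cmd
    return (entrypoint or []) + (cmd or [])
```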
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:24:00.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-882799ba-3801-4298-b1af-3cd8473a3777
STEP: Creating a pod to test consume configMaps
Feb 10 14:24:00.208: INFO: Waiting up to 5m0s for pod "pod-configmaps-fb0b1422-5886-4e14-82a9-7233aa0e9ffd" in namespace "configmap-1873" to be "success or failure"
Feb 10 14:24:00.224: INFO: Pod "pod-configmaps-fb0b1422-5886-4e14-82a9-7233aa0e9ffd": Phase="Pending", Reason="", readiness=false. Elapsed: 15.692533ms
Feb 10 14:24:02.231: INFO: Pod "pod-configmaps-fb0b1422-5886-4e14-82a9-7233aa0e9ffd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023179143s
Feb 10 14:24:05.011: INFO: Pod "pod-configmaps-fb0b1422-5886-4e14-82a9-7233aa0e9ffd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.802346539s
Feb 10 14:24:07.022: INFO: Pod "pod-configmaps-fb0b1422-5886-4e14-82a9-7233aa0e9ffd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.81337447s
Feb 10 14:24:09.030: INFO: Pod "pod-configmaps-fb0b1422-5886-4e14-82a9-7233aa0e9ffd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.821482863s
STEP: Saw pod success
Feb 10 14:24:09.030: INFO: Pod "pod-configmaps-fb0b1422-5886-4e14-82a9-7233aa0e9ffd" satisfied condition "success or failure"
Feb 10 14:24:09.034: INFO: Trying to get logs from node iruya-node pod pod-configmaps-fb0b1422-5886-4e14-82a9-7233aa0e9ffd container configmap-volume-test: 
STEP: delete the pod
Feb 10 14:24:09.260: INFO: Waiting for pod pod-configmaps-fb0b1422-5886-4e14-82a9-7233aa0e9ffd to disappear
Feb 10 14:24:09.267: INFO: Pod pod-configmaps-fb0b1422-5886-4e14-82a9-7233aa0e9ffd no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:24:09.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1873" for this suite.
Feb 10 14:24:15.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:24:15.420: INFO: namespace configmap-1873 deletion completed in 6.145249518s

• [SLOW TEST:15.285 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:24:15.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0210 14:24:27.532683       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 10 14:24:27.532: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:24:27.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2687" for this suite.
Feb 10 14:24:42.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:24:46.460: INFO: namespace gc-2687 deletion completed in 17.815208964s

• [SLOW TEST:31.040 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
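The garbage-collector behaviour exercised in the test above hinges on a dependent keeping all of its ownerReferences: a pod owned by both replication controllers survives deletion of one owner. A minimal sketch of that rule follows — a simplified model for illustration, not the actual kube-controller-manager garbage-collector code:

```python
# Simplified model of the GC rule the test exercises: a dependent is
# only collected once it has NO remaining live owners. Illustrative
# sketch, not the real kube-controller-manager implementation.

def surviving_dependents(dependents, deleted_owner_uid):
    """Return dependents that still have at least one live owner
    after deleted_owner_uid is dropped from every ownerReferences list."""
    survivors = []
    for dep in dependents:
        owners = [o for o in dep["ownerReferences"] if o != deleted_owner_uid]
        if owners:  # another valid owner remains -> dependent is kept
            survivors.append({**dep, "ownerReferences": owners})
    return survivors

# Mirroring the test setup: half the pods are owned by both
# rc-to-be-deleted and rc-to-stay, half only by rc-to-be-deleted.
pods = (
    [{"name": f"pod-{i}", "ownerReferences": ["rc-del", "rc-stay"]} for i in range(5)]
    + [{"name": f"pod-{i}", "ownerReferences": ["rc-del"]} for i in range(5, 10)]
)
kept = surviving_dependents(pods, "rc-del")
print(len(kept))  # 5 -- only the doubly-owned pods survive
```

This is why the spec is named "should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted": the second ownerReference keeps the pod alive.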
SSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:24:46.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8392.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8392.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 10 14:24:59.309: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-8392/dns-test-6fe90ce0-bc99-405c-9595-a99ef84897a5: the server could not find the requested resource (get pods dns-test-6fe90ce0-bc99-405c-9595-a99ef84897a5)
Feb 10 14:24:59.324: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-8392/dns-test-6fe90ce0-bc99-405c-9595-a99ef84897a5: the server could not find the requested resource (get pods dns-test-6fe90ce0-bc99-405c-9595-a99ef84897a5)
Feb 10 14:24:59.335: INFO: Unable to read wheezy_udp@PodARecord from pod dns-8392/dns-test-6fe90ce0-bc99-405c-9595-a99ef84897a5: the server could not find the requested resource (get pods dns-test-6fe90ce0-bc99-405c-9595-a99ef84897a5)
Feb 10 14:24:59.344: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-8392/dns-test-6fe90ce0-bc99-405c-9595-a99ef84897a5: the server could not find the requested resource (get pods dns-test-6fe90ce0-bc99-405c-9595-a99ef84897a5)
Feb 10 14:24:59.359: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-8392/dns-test-6fe90ce0-bc99-405c-9595-a99ef84897a5: the server could not find the requested resource (get pods dns-test-6fe90ce0-bc99-405c-9595-a99ef84897a5)
Feb 10 14:24:59.405: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-8392/dns-test-6fe90ce0-bc99-405c-9595-a99ef84897a5: the server could not find the requested resource (get pods dns-test-6fe90ce0-bc99-405c-9595-a99ef84897a5)
Feb 10 14:24:59.416: INFO: Unable to read jessie_udp@PodARecord from pod dns-8392/dns-test-6fe90ce0-bc99-405c-9595-a99ef84897a5: the server could not find the requested resource (get pods dns-test-6fe90ce0-bc99-405c-9595-a99ef84897a5)
Feb 10 14:24:59.420: INFO: Unable to read jessie_tcp@PodARecord from pod dns-8392/dns-test-6fe90ce0-bc99-405c-9595-a99ef84897a5: the server could not find the requested resource (get pods dns-test-6fe90ce0-bc99-405c-9595-a99ef84897a5)
Feb 10 14:24:59.420: INFO: Lookups using dns-8392/dns-test-6fe90ce0-bc99-405c-9595-a99ef84897a5 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 10 14:25:04.520: INFO: DNS probes using dns-8392/dns-test-6fe90ce0-bc99-405c-9595-a99ef84897a5 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:25:04.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8392" for this suite.
Feb 10 14:25:10.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:25:10.770: INFO: namespace dns-8392 deletion completed in 6.194807808s

• [SLOW TEST:24.310 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
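The probe commands in the DNS test above derive each pod's A record by replacing the dots in the pod IP with dashes (the `awk -F. '{print $1"-"$2"-"$3"-"$4"..."}'` one-liner), yielding names of the form `<ip-with-dashes>.<namespace>.pod.cluster.local`. A Python equivalent of that name construction, for illustration:

```python
# Python equivalent of the awk one-liner in the wheezy/jessie probe
# scripts: build a pod A record from the pod IP and namespace,
# e.g. 10.44.0.5 in namespace dns-8392 ->
# 10-44-0-5.dns-8392.pod.cluster.local

def pod_a_record(pod_ip: str, namespace: str,
                 cluster_domain: str = "cluster.local") -> str:
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"

print(pod_a_record("10.44.0.5", "dns-8392"))
# 10-44-0-5.dns-8392.pod.cluster.local
```

The early "Unable to read" failures are expected: the probers retry until kube-dns/CoreDNS answers for every name, at which point the lookups are reported as succeeded.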
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:25:10.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 10 14:25:10.846: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb 10 14:25:15.857: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 10 14:25:19.873: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb 10 14:25:21.884: INFO: Creating deployment "test-rollover-deployment"
Feb 10 14:25:21.922: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb 10 14:25:23.935: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb 10 14:25:23.944: INFO: Ensure that both replica sets have 1 created replica
Feb 10 14:25:23.951: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb 10 14:25:23.958: INFO: Updating deployment test-rollover-deployment
Feb 10 14:25:23.959: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Feb 10 14:25:25.978: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb 10 14:25:25.987: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb 10 14:25:25.992: INFO: all replica sets need to contain the pod-template-hash label
Feb 10 14:25:25.992: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716941521, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716941521, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716941524, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716941521, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 10 14:25:28.024 - 14:26:22.061: INFO: (the two lines above repeated every ~2s while polling; deployment status unchanged throughout: ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, ReplicaSet "test-rollover-deployment-854595fc44" still progressing)
Feb 10 14:26:24.081: INFO: 
Feb 10 14:26:24.081: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 10 14:26:24.149: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-781,SelfLink:/apis/apps/v1/namespaces/deployment-781/deployments/test-rollover-deployment,UID:218b6445-92bb-4bbd-ba64-7bc8f98b3b95,ResourceVersion:23831050,Generation:2,CreationTimestamp:2020-02-10 14:25:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-10 14:25:21 +0000 UTC 2020-02-10 14:25:21 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-10 14:26:23 +0000 UTC 2020-02-10 14:25:21 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb 10 14:26:24.212: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-781,SelfLink:/apis/apps/v1/namespaces/deployment-781/replicasets/test-rollover-deployment-854595fc44,UID:fda4f918-089f-4ac9-a0ea-3eff0f81313b,ResourceVersion:23831035,Generation:2,CreationTimestamp:2020-02-10 14:25:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 218b6445-92bb-4bbd-ba64-7bc8f98b3b95 0xc0005ee077 0xc0005ee078}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 10 14:26:24.213: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Feb 10 14:26:24.213: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-781,SelfLink:/apis/apps/v1/namespaces/deployment-781/replicasets/test-rollover-controller,UID:efb1115e-29cb-4c6b-9b1c-833a1f1877b9,ResourceVersion:23831049,Generation:2,CreationTimestamp:2020-02-10 14:25:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 218b6445-92bb-4bbd-ba64-7bc8f98b3b95 0xc002d6df77 0xc002d6df78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 10 14:26:24.213: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-781,SelfLink:/apis/apps/v1/namespaces/deployment-781/replicasets/test-rollover-deployment-9b8b997cf,UID:5dbf8cf0-7770-41c0-ac62-7ef43ee36945,ResourceVersion:23830964,Generation:2,CreationTimestamp:2020-02-10 14:25:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 218b6445-92bb-4bbd-ba64-7bc8f98b3b95 0xc0005ee250 0xc0005ee251}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 10 14:26:24.225: INFO: Pod "test-rollover-deployment-854595fc44-6ntqt" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-6ntqt,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-781,SelfLink:/api/v1/namespaces/deployment-781/pods/test-rollover-deployment-854595fc44-6ntqt,UID:20e642cf-d5ad-46e5-aa45-e45bfc269652,ResourceVersion:23830989,Generation:0,CreationTimestamp:2020-02-10 14:25:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 fda4f918-089f-4ac9-a0ea-3eff0f81313b 0xc002e1e1e7 0xc002e1e1e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vtpcm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vtpcm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-vtpcm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002e1e260} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002e1e280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:25:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:25:43 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:25:43 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:25:24 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-10 14:25:24 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-10 14:25:41 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://ba0efdb8d44bfc2ca548ea8af2936e27e59c9ee9ad3aeb39d906e2a6b05c1d11}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:26:24.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-781" for this suite.
Feb 10 14:26:32.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:26:32.382: INFO: namespace deployment-781 deletion completed in 8.148522142s

• [SLOW TEST:81.611 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
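The repeated "Waiting up to 5m0s for pod … Elapsed: …" lines in the rollover test above come from a poll-until-condition loop. A minimal stdlib-only sketch of that pattern (illustrative only; the real e2e framework uses Go's `wait` helpers, and `wait_for_condition` here is a made-up name):

```python
import time

def wait_for_condition(cond, interval=0.01, timeout=1.0):
    """Poll cond() until it returns True or timeout elapses.

    Sketch of the 'Waiting up to 5m0s for pod ...' loop in the log above;
    the interval/timeout values are arbitrary for this demo.
    """
    start = time.monotonic()
    while True:
        if cond():
            return time.monotonic() - start
        if time.monotonic() - start > timeout:
            raise TimeoutError("timed out waiting for condition")
        time.sleep(interval)

calls = 0
def pod_succeeded():
    # Pretend the pod reaches Phase="Succeeded" on the 3rd poll,
    # like the Pending -> Pending -> Succeeded sequence logged above.
    global calls
    calls += 1
    return calls >= 3

elapsed = wait_for_condition(pod_succeeded)
print("polls:", calls)
```

Each poll that returns False produces one "Phase=Pending … Elapsed" log line; the loop exits on the first True, which is why the last line of each sequence shows `Phase="Succeeded"`.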
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:26:32.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 10 14:26:51.416: INFO: Successfully updated pod "annotationupdatea6d20456-f4c8-473f-992c-9fad33858f2b"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:26:53.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2020" for this suite.
Feb 10 14:27:33.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:27:33.689: INFO: namespace downward-api-2020 deletion completed in 40.168316417s

• [SLOW TEST:61.307 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
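The "should update annotations on modification" test above relies on the kubelet rewriting the downward API volume file after a pod annotation changes, and the test polling that file until the new value appears. A rough stand-in for that behavior, with a plain file playing the role of the kubelet-managed volume (the `builder=` key and values are illustrative, not taken from this log):

```python
import os
import tempfile
import time

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "annotations")
    # Initial projected content of the downward API volume file.
    with open(path, "w") as f:
        f.write('builder="bar"\n')

    # Simulate the annotation update the test performs via the API server;
    # in a real cluster the kubelet rewrites the file asynchronously.
    with open(path, "w") as f:
        f.write('builder="foo"\n')

    # Poll the file until the updated annotation value shows up.
    deadline = time.monotonic() + 1.0
    content = ""
    while time.monotonic() < deadline:
        with open(path) as f:
            content = f.read()
        if 'builder="foo"' in content:
            break
        time.sleep(0.01)
    print(content.strip())
```

The real test does the equivalent read inside the pod's container and succeeds once the updated value is observed, which is why "Successfully updated pod" is logged before teardown.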
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:27:33.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 10 14:27:33.872: INFO: Waiting up to 5m0s for pod "downward-api-6c6cfc91-bac4-4221-b11f-abdfd65debe1" in namespace "downward-api-4422" to be "success or failure"
Feb 10 14:27:33.883: INFO: Pod "downward-api-6c6cfc91-bac4-4221-b11f-abdfd65debe1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.855464ms
Feb 10 14:27:35.896: INFO: Pod "downward-api-6c6cfc91-bac4-4221-b11f-abdfd65debe1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023615605s
Feb 10 14:27:37.905: INFO: Pod "downward-api-6c6cfc91-bac4-4221-b11f-abdfd65debe1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032803141s
Feb 10 14:27:39.920: INFO: Pod "downward-api-6c6cfc91-bac4-4221-b11f-abdfd65debe1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047258869s
Feb 10 14:27:41.926: INFO: Pod "downward-api-6c6cfc91-bac4-4221-b11f-abdfd65debe1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054179618s
Feb 10 14:27:43.938: INFO: Pod "downward-api-6c6cfc91-bac4-4221-b11f-abdfd65debe1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.065399875s
Feb 10 14:27:45.947: INFO: Pod "downward-api-6c6cfc91-bac4-4221-b11f-abdfd65debe1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.074388229s
Feb 10 14:27:47.971: INFO: Pod "downward-api-6c6cfc91-bac4-4221-b11f-abdfd65debe1": Phase="Pending", Reason="", readiness=false. Elapsed: 14.098952479s
Feb 10 14:27:49.980: INFO: Pod "downward-api-6c6cfc91-bac4-4221-b11f-abdfd65debe1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.107312245s
STEP: Saw pod success
Feb 10 14:27:49.980: INFO: Pod "downward-api-6c6cfc91-bac4-4221-b11f-abdfd65debe1" satisfied condition "success or failure"
Feb 10 14:27:49.986: INFO: Trying to get logs from node iruya-node pod downward-api-6c6cfc91-bac4-4221-b11f-abdfd65debe1 container dapi-container: 
STEP: delete the pod
Feb 10 14:27:50.042: INFO: Waiting for pod downward-api-6c6cfc91-bac4-4221-b11f-abdfd65debe1 to disappear
Feb 10 14:27:50.062: INFO: Pod downward-api-6c6cfc91-bac4-4221-b11f-abdfd65debe1 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:27:50.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4422" for this suite.
Feb 10 14:27:56.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:27:56.407: INFO: namespace downward-api-4422 deletion completed in 6.203296807s

• [SLOW TEST:22.717 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
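The "should provide pod UID as env vars" test above checks that the downward API (`fieldRef: metadata.uid`) surfaces the pod's UID to the container as an environment variable. In-cluster the kubelet injects the value; the sketch below sets it by hand purely to illustrate what the `dapi-container` observes (`POD_UID` and `read_downward_env` are illustrative names, and the UID is copied from the pod name in this log):

```python
import os

# In a real pod this is injected by the kubelet via the downward API;
# here we set it manually for the demo.
os.environ["POD_UID"] = "6c6cfc91-bac4-4221-b11f-abdfd65debe1"

def read_downward_env(name):
    """Read a downward-API-style env var, failing loudly if absent."""
    value = os.environ.get(name)
    if value is None:
        raise KeyError(f"downward API env var {name} not set")
    return value

print("POD_UID=" + read_downward_env("POD_UID"))
```

The test container simply echoes such variables and exits, which is why the pod goes straight from Pending to Succeeded in the log.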
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:27:56.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 10 14:27:56.551: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d3ff2d06-d78e-48e2-bcff-ec6120e7f0e3" in namespace "downward-api-2345" to be "success or failure"
Feb 10 14:27:56.577: INFO: Pod "downwardapi-volume-d3ff2d06-d78e-48e2-bcff-ec6120e7f0e3": Phase="Pending", Reason="", readiness=false. Elapsed: 25.609566ms
Feb 10 14:27:58.591: INFO: Pod "downwardapi-volume-d3ff2d06-d78e-48e2-bcff-ec6120e7f0e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040174766s
Feb 10 14:28:01.914: INFO: Pod "downwardapi-volume-d3ff2d06-d78e-48e2-bcff-ec6120e7f0e3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.36224971s
Feb 10 14:28:03.931: INFO: Pod "downwardapi-volume-d3ff2d06-d78e-48e2-bcff-ec6120e7f0e3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.379435521s
Feb 10 14:28:05.939: INFO: Pod "downwardapi-volume-d3ff2d06-d78e-48e2-bcff-ec6120e7f0e3": Phase="Pending", Reason="", readiness=false. Elapsed: 9.388057152s
Feb 10 14:28:07.950: INFO: Pod "downwardapi-volume-d3ff2d06-d78e-48e2-bcff-ec6120e7f0e3": Phase="Pending", Reason="", readiness=false. Elapsed: 11.398475005s
Feb 10 14:28:09.960: INFO: Pod "downwardapi-volume-d3ff2d06-d78e-48e2-bcff-ec6120e7f0e3": Phase="Pending", Reason="", readiness=false. Elapsed: 13.408241781s
Feb 10 14:28:11.970: INFO: Pod "downwardapi-volume-d3ff2d06-d78e-48e2-bcff-ec6120e7f0e3": Phase="Pending", Reason="", readiness=false. Elapsed: 15.41891688s
Feb 10 14:28:13.985: INFO: Pod "downwardapi-volume-d3ff2d06-d78e-48e2-bcff-ec6120e7f0e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.433593406s
STEP: Saw pod success
Feb 10 14:28:13.985: INFO: Pod "downwardapi-volume-d3ff2d06-d78e-48e2-bcff-ec6120e7f0e3" satisfied condition "success or failure"
Feb 10 14:28:13.989: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d3ff2d06-d78e-48e2-bcff-ec6120e7f0e3 container client-container: 
STEP: delete the pod
Feb 10 14:28:14.226: INFO: Waiting for pod downwardapi-volume-d3ff2d06-d78e-48e2-bcff-ec6120e7f0e3 to disappear
Feb 10 14:28:14.246: INFO: Pod downwardapi-volume-d3ff2d06-d78e-48e2-bcff-ec6120e7f0e3 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:28:14.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2345" for this suite.
Feb 10 14:28:20.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:28:20.410: INFO: namespace downward-api-2345 deletion completed in 6.157244957s

• [SLOW TEST:24.002 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
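The "should set DefaultMode on files" test above verifies the permission bits on downward API volume files. The `DefaultMode:*420` seen in the earlier volume dumps is decimal notation for octal 0644, which is worth confirming since the decimal form is easy to misread:

```python
import os
import stat
import tempfile

# DefaultMode 420 (decimal) is 0644 (octal): rw-r--r--.
assert 420 == 0o644

# Create a stand-in for a downward API volume file with that mode.
# chmod explicitly so the result is independent of the process umask.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "podinfo")
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    os.close(fd)
    os.chmod(path, 0o644)
    mode = stat.S_IMODE(os.stat(path).st_mode)
    print(oct(mode))
```

The e2e test does the analogous check from inside the pod by stat-ing the projected file and comparing against the requested DefaultMode.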
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:28:20.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Feb 10 14:28:20.569: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:28:20.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-696" for this suite.
Feb 10 14:28:26.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:28:26.896: INFO: namespace kubectl-696 deletion completed in 6.23984942s

• [SLOW TEST:6.486 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
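The "should support proxy with --port 0" test above exercises a standard socket convention: asking to bind port 0 makes the kernel pick a free ephemeral port, which `kubectl proxy -p 0` then reports so the test can curl it. The same semantics, demonstrated directly with a TCP listener:

```python
import socket

# Binding to port 0 delegates port selection to the OS, which is what
# `kubectl proxy -p 0` relies on.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
s.listen(1)
host, port = s.getsockname()
print("listening on", host, port)
s.close()
```

After binding, `getsockname()` reveals the actual port the kernel assigned, always a nonzero value; kubectl prints it the same way ("Starting to serve on 127.0.0.1:NNNNN") for the test to parse.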
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:28:26.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-cgjw
STEP: Creating a pod to test atomic-volume-subpath
Feb 10 14:28:27.098: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-cgjw" in namespace "subpath-3965" to be "success or failure"
Feb 10 14:28:27.114: INFO: Pod "pod-subpath-test-projected-cgjw": Phase="Pending", Reason="", readiness=false. Elapsed: 15.661443ms
Feb 10 14:28:29.121: INFO: Pod "pod-subpath-test-projected-cgjw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023068794s
Feb 10 14:28:31.135: INFO: Pod "pod-subpath-test-projected-cgjw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037230137s
Feb 10 14:28:33.972: INFO: Pod "pod-subpath-test-projected-cgjw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.873602114s
Feb 10 14:28:35.978: INFO: Pod "pod-subpath-test-projected-cgjw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.880119942s
Feb 10 14:28:37.997: INFO: Pod "pod-subpath-test-projected-cgjw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.89917075s
Feb 10 14:28:40.046: INFO: Pod "pod-subpath-test-projected-cgjw": Phase="Pending", Reason="", readiness=false. Elapsed: 12.948146908s
Feb 10 14:28:42.059: INFO: Pod "pod-subpath-test-projected-cgjw": Phase="Pending", Reason="", readiness=false. Elapsed: 14.960612814s
Feb 10 14:28:44.071: INFO: Pod "pod-subpath-test-projected-cgjw": Phase="Pending", Reason="", readiness=false. Elapsed: 16.973036233s
Feb 10 14:28:46.081: INFO: Pod "pod-subpath-test-projected-cgjw": Phase="Running", Reason="", readiness=true. Elapsed: 18.98321369s
Feb 10 14:28:48.090: INFO: Pod "pod-subpath-test-projected-cgjw": Phase="Running", Reason="", readiness=true. Elapsed: 20.992187363s
Feb 10 14:28:50.096: INFO: Pod "pod-subpath-test-projected-cgjw": Phase="Running", Reason="", readiness=true. Elapsed: 22.99775614s
Feb 10 14:28:52.101: INFO: Pod "pod-subpath-test-projected-cgjw": Phase="Running", Reason="", readiness=true. Elapsed: 25.003039675s
Feb 10 14:28:54.117: INFO: Pod "pod-subpath-test-projected-cgjw": Phase="Running", Reason="", readiness=true. Elapsed: 27.019148383s
Feb 10 14:28:56.133: INFO: Pod "pod-subpath-test-projected-cgjw": Phase="Running", Reason="", readiness=true. Elapsed: 29.035152896s
Feb 10 14:28:58.147: INFO: Pod "pod-subpath-test-projected-cgjw": Phase="Running", Reason="", readiness=true. Elapsed: 31.049144647s
Feb 10 14:29:00.156: INFO: Pod "pod-subpath-test-projected-cgjw": Phase="Running", Reason="", readiness=true. Elapsed: 33.057701438s
Feb 10 14:29:02.174: INFO: Pod "pod-subpath-test-projected-cgjw": Phase="Running", Reason="", readiness=true. Elapsed: 35.075561461s
Feb 10 14:29:04.182: INFO: Pod "pod-subpath-test-projected-cgjw": Phase="Running", Reason="", readiness=true. Elapsed: 37.084241139s
Feb 10 14:29:06.776: INFO: Pod "pod-subpath-test-projected-cgjw": Phase="Running", Reason="", readiness=true. Elapsed: 39.677853086s
Feb 10 14:29:08.785: INFO: Pod "pod-subpath-test-projected-cgjw": Phase="Running", Reason="", readiness=true. Elapsed: 41.687139854s
Feb 10 14:29:10.804: INFO: Pod "pod-subpath-test-projected-cgjw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 43.705320017s
STEP: Saw pod success
Feb 10 14:29:10.804: INFO: Pod "pod-subpath-test-projected-cgjw" satisfied condition "success or failure"
Feb 10 14:29:10.807: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-cgjw container test-container-subpath-projected-cgjw: 
STEP: delete the pod
Feb 10 14:29:10.999: INFO: Waiting for pod pod-subpath-test-projected-cgjw to disappear
Feb 10 14:29:11.017: INFO: Pod pod-subpath-test-projected-cgjw no longer exists
STEP: Deleting pod pod-subpath-test-projected-cgjw
Feb 10 14:29:11.017: INFO: Deleting pod "pod-subpath-test-projected-cgjw" in namespace "subpath-3965"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:29:11.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3965" for this suite.
Feb 10 14:29:17.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:29:17.269: INFO: namespace subpath-3965 deletion completed in 6.230375267s

• [SLOW TEST:50.373 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
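Editor's note: the "Atomic writer volumes" subpath test above relies on how the kubelet publishes projected-volume content. The sketch below models the core symlink-swap idea, which is an assumption-laden simplification of the real kubelet layout (the actual implementation uses timestamped directories and a `..data` link under the volume path; the directory names here are illustrative): new content is written to a fresh directory, then a symlink is atomically renamed over the old one, so a reader following the link (including a subPath mount) never sees a half-written state.

```python
import os
import tempfile

def atomic_update(volume_dir, files):
    """Publish a new file set the way an atomic writer does: write into a
    fresh directory, then atomically swap a '..data' symlink to it."""
    # 1. Write the new payload into a fresh hidden directory.
    new_dir = tempfile.mkdtemp(prefix="..", dir=volume_dir)
    for name, content in files.items():
        with open(os.path.join(new_dir, name), "w") as f:
            f.write(content)
    # 2. Create a temporary symlink, then rename it over '..data'.
    #    rename(2) is atomic on POSIX, so the swap is all-or-nothing.
    tmp_link = os.path.join(volume_dir, "..data_tmp")
    data_link = os.path.join(volume_dir, "..data")
    os.symlink(os.path.basename(new_dir), tmp_link)
    os.rename(tmp_link, data_link)

def read_key(volume_dir, name):
    # Consumers (and subPath mounts) resolve files through '..data'.
    with open(os.path.join(volume_dir, "..data", name)) as f:
        return f.read()

vol = tempfile.mkdtemp()
atomic_update(vol, {"content": "first payload"})
first = read_key(vol, "content")
atomic_update(vol, {"content": "second payload"})
second = read_key(vol, "content")
```

Because the rename is atomic, `read_key` observes either the complete old payload or the complete new one, which is the property the long Running phase above (the container repeatedly re-reading the file) is verifying.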
SSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:29:17.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7553.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7553.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7553.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7553.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 10 14:29:39.563: INFO: File wheezy_udp@dns-test-service-3.dns-7553.svc.cluster.local from pod  dns-7553/dns-test-2d1e71c4-49a1-42d6-b686-4a0290c4a262 contains '' instead of 'foo.example.com.'
Feb 10 14:29:39.574: INFO: File jessie_udp@dns-test-service-3.dns-7553.svc.cluster.local from pod  dns-7553/dns-test-2d1e71c4-49a1-42d6-b686-4a0290c4a262 contains '' instead of 'foo.example.com.'
Feb 10 14:29:39.574: INFO: Lookups using dns-7553/dns-test-2d1e71c4-49a1-42d6-b686-4a0290c4a262 failed for: [wheezy_udp@dns-test-service-3.dns-7553.svc.cluster.local jessie_udp@dns-test-service-3.dns-7553.svc.cluster.local]

Feb 10 14:29:44.632: INFO: DNS probes using dns-test-2d1e71c4-49a1-42d6-b686-4a0290c4a262 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7553.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7553.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7553.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7553.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 10 14:30:15.405: INFO: File wheezy_udp@dns-test-service-3.dns-7553.svc.cluster.local from pod  dns-7553/dns-test-d1af5232-b4af-4c25-b41d-043cb26bafe1 contains '' instead of 'bar.example.com.'
Feb 10 14:30:15.413: INFO: File jessie_udp@dns-test-service-3.dns-7553.svc.cluster.local from pod  dns-7553/dns-test-d1af5232-b4af-4c25-b41d-043cb26bafe1 contains '' instead of 'bar.example.com.'
Feb 10 14:30:15.413: INFO: Lookups using dns-7553/dns-test-d1af5232-b4af-4c25-b41d-043cb26bafe1 failed for: [wheezy_udp@dns-test-service-3.dns-7553.svc.cluster.local jessie_udp@dns-test-service-3.dns-7553.svc.cluster.local]

Feb 10 14:30:20.439: INFO: DNS probes using dns-test-d1af5232-b4af-4c25-b41d-043cb26bafe1 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7553.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7553.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7553.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7553.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 10 14:30:48.896: INFO: File wheezy_udp@dns-test-service-3.dns-7553.svc.cluster.local from pod  dns-7553/dns-test-5f9f0fa4-ea5f-4c15-88df-9691f48e48c7 contains '' instead of '10.96.5.87'
Feb 10 14:30:48.960: INFO: File jessie_udp@dns-test-service-3.dns-7553.svc.cluster.local from pod  dns-7553/dns-test-5f9f0fa4-ea5f-4c15-88df-9691f48e48c7 contains '' instead of '10.96.5.87'
Feb 10 14:30:48.960: INFO: Lookups using dns-7553/dns-test-5f9f0fa4-ea5f-4c15-88df-9691f48e48c7 failed for: [wheezy_udp@dns-test-service-3.dns-7553.svc.cluster.local jessie_udp@dns-test-service-3.dns-7553.svc.cluster.local]

Feb 10 14:30:54.082: INFO: File wheezy_udp@dns-test-service-3.dns-7553.svc.cluster.local from pod  dns-7553/dns-test-5f9f0fa4-ea5f-4c15-88df-9691f48e48c7 contains '' instead of '10.96.5.87'
Feb 10 14:30:54.102: INFO: Lookups using dns-7553/dns-test-5f9f0fa4-ea5f-4c15-88df-9691f48e48c7 failed for: [wheezy_udp@dns-test-service-3.dns-7553.svc.cluster.local]

Feb 10 14:30:59.042: INFO: DNS probes using dns-test-5f9f0fa4-ea5f-4c15-88df-9691f48e48c7 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:30:59.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7553" for this suite.
Feb 10 14:31:07.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:31:07.772: INFO: namespace dns-7553 deletion completed in 8.467190484s

• [SLOW TEST:110.502 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
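Editor's note: the `contains '' instead of 'foo.example.com.'` lines above are not failures; the prober tolerates DNS propagation delay by retrying until every expected record appears. A minimal sketch of that retry discipline, using a toy in-memory record store in place of cluster DNS (the record name is copied from the log; the store itself is a stand-in, not a real resolver):

```python
import time

def probe_until(lookup, expected, attempts=5, interval=0.0):
    """Retry a DNS-style lookup until it returns the expected record.
    Empty or stale answers are retried, not failed immediately."""
    for _ in range(attempts):
        if lookup() == expected:
            return True
        time.sleep(interval)
    return False

# Toy record store standing in for cluster DNS.
records = {"dns-test-service-3.dns-7553.svc.cluster.local": ""}

def lookup():
    return records["dns-test-service-3.dns-7553.svc.cluster.local"]

# The ExternalName service first resolves to one CNAME target...
records["dns-test-service-3.dns-7553.svc.cluster.local"] = "foo.example.com."
ok_foo = probe_until(lookup, "foo.example.com.")

# ...then the test changes the externalName and probes again.
records["dns-test-service-3.dns-7553.svc.cluster.local"] = "bar.example.com."
ok_bar = probe_until(lookup, "bar.example.com.")
```

The real test runs `dig +short ... CNAME` in a loop inside the probe pod and inspects the result files, but the pass condition is the same: eventually-consistent answers are accepted.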
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:31:07.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 10 14:31:07.995: INFO: Waiting up to 5m0s for pod "downwardapi-volume-264628d5-3b4c-4080-afff-2ea40a38d268" in namespace "downward-api-6763" to be "success or failure"
Feb 10 14:31:08.161: INFO: Pod "downwardapi-volume-264628d5-3b4c-4080-afff-2ea40a38d268": Phase="Pending", Reason="", readiness=false. Elapsed: 166.597583ms
Feb 10 14:31:10.175: INFO: Pod "downwardapi-volume-264628d5-3b4c-4080-afff-2ea40a38d268": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180114807s
Feb 10 14:31:12.187: INFO: Pod "downwardapi-volume-264628d5-3b4c-4080-afff-2ea40a38d268": Phase="Pending", Reason="", readiness=false. Elapsed: 4.192645999s
Feb 10 14:31:15.034: INFO: Pod "downwardapi-volume-264628d5-3b4c-4080-afff-2ea40a38d268": Phase="Pending", Reason="", readiness=false. Elapsed: 7.039034123s
Feb 10 14:31:17.050: INFO: Pod "downwardapi-volume-264628d5-3b4c-4080-afff-2ea40a38d268": Phase="Pending", Reason="", readiness=false. Elapsed: 9.054934316s
Feb 10 14:31:19.061: INFO: Pod "downwardapi-volume-264628d5-3b4c-4080-afff-2ea40a38d268": Phase="Pending", Reason="", readiness=false. Elapsed: 11.066175504s
Feb 10 14:31:21.077: INFO: Pod "downwardapi-volume-264628d5-3b4c-4080-afff-2ea40a38d268": Phase="Pending", Reason="", readiness=false. Elapsed: 13.082683408s
Feb 10 14:31:23.164: INFO: Pod "downwardapi-volume-264628d5-3b4c-4080-afff-2ea40a38d268": Phase="Pending", Reason="", readiness=false. Elapsed: 15.169848979s
Feb 10 14:31:25.756: INFO: Pod "downwardapi-volume-264628d5-3b4c-4080-afff-2ea40a38d268": Phase="Pending", Reason="", readiness=false. Elapsed: 17.761162495s
Feb 10 14:31:27.767: INFO: Pod "downwardapi-volume-264628d5-3b4c-4080-afff-2ea40a38d268": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.77238823s
STEP: Saw pod success
Feb 10 14:31:27.767: INFO: Pod "downwardapi-volume-264628d5-3b4c-4080-afff-2ea40a38d268" satisfied condition "success or failure"
Feb 10 14:31:27.789: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-264628d5-3b4c-4080-afff-2ea40a38d268 container client-container: 
STEP: delete the pod
Feb 10 14:31:28.008: INFO: Waiting for pod downwardapi-volume-264628d5-3b4c-4080-afff-2ea40a38d268 to disappear
Feb 10 14:31:28.141: INFO: Pod downwardapi-volume-264628d5-3b4c-4080-afff-2ea40a38d268 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:31:28.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6763" for this suite.
Feb 10 14:31:36.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:31:36.420: INFO: namespace downward-api-6763 deletion completed in 8.217997698s

• [SLOW TEST:28.648 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
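Editor's note: the Downward API case above checks the defaulting rule for `resourceFieldRef: limits.cpu`. A sketch of that rule, with an illustrative node-allocatable figure (the 4000m value is an assumption for the example, not taken from this cluster):

```python
def effective_cpu_limit(container_limit_millicores, node_allocatable_millicores):
    """When a container declares no CPU limit, the downward API's
    limits.cpu resolves to the node's allocatable CPU instead."""
    if container_limit_millicores is not None:
        return container_limit_millicores
    return node_allocatable_millicores

# No limit set on the container: the pod sees node allocatable.
defaulted = effective_cpu_limit(None, 4000)
# An explicit limit is passed through unchanged.
explicit = effective_cpu_limit(500, 4000)
```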
SSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:31:36.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 10 14:31:36.505: INFO: PodSpec: initContainers in spec.initContainers
Feb 10 14:33:01.211: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-7d13f791-e485-4785-bbbd-f9d411211055", GenerateName:"", Namespace:"init-container-7999", SelfLink:"/api/v1/namespaces/init-container-7999/pods/pod-init-7d13f791-e485-4785-bbbd-f9d411211055", UID:"98a3eb4c-f763-4db1-b401-6f62082295a9", ResourceVersion:"23831883", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716941896, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"505785097"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-tj2tc", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0025e8000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tj2tc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tj2tc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tj2tc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0017600d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc00208a000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001760210)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001760230)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001760238), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00176023c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716941897, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716941897, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716941897, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716941896, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc0021460a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0025405b0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002540620)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://c45d9e8ff4de6f4ac75f355bba7ac5b569102e8a64febfbf599f99b28c188f9a"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002146100), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0021460c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:33:01.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7999" for this suite.
Feb 10 14:33:25.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:33:25.710: INFO: namespace init-container-7999 deletion completed in 24.451417426s

• [SLOW TEST:109.289 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
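Editor's note: the large struct dump above shows exactly the state this test asserts: `init1` (running `/bin/false`) has `RestartCount:3`, `init2` and `run1` are still Waiting, and the pod is Pending with `ContainersNotInitialized`. A minimal model of the init-container contract being exercised (the run loop is a simplification; the real kubelet restarts the failing init container with exponential backoff under RestartPolicy=Always):

```python
def run_pod(init_results, app_containers):
    """Init containers run one at a time, in order; every one must
    succeed before the next init container or any app container starts.
    A failing init container leaves the pod Pending."""
    app_started = []
    for name, succeeded in init_results:
        if not succeeded:
            # Later init containers and all app containers are blocked:
            # "containers with incomplete status: [init1 init2]"
            return {"phase": "Pending", "failed_init": name,
                    "app_started": app_started}
    for name in app_containers:
        app_started.append(name)
    return {"phase": "Running", "failed_init": None,
            "app_started": app_started}

# Mirror the test's pod: init1 fails, so init2 and run1 never start.
status = run_pod([("init1", False), ("init2", True)], ["run1"])
```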
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:33:25.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb 10 14:33:26.196: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8574,SelfLink:/api/v1/namespaces/watch-8574/configmaps/e2e-watch-test-label-changed,UID:a10df97a-40a1-47db-be61-cbbfc521f683,ResourceVersion:23831939,Generation:0,CreationTimestamp:2020-02-10 14:33:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 10 14:33:26.197: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8574,SelfLink:/api/v1/namespaces/watch-8574/configmaps/e2e-watch-test-label-changed,UID:a10df97a-40a1-47db-be61-cbbfc521f683,ResourceVersion:23831940,Generation:0,CreationTimestamp:2020-02-10 14:33:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 10 14:33:26.197: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8574,SelfLink:/api/v1/namespaces/watch-8574/configmaps/e2e-watch-test-label-changed,UID:a10df97a-40a1-47db-be61-cbbfc521f683,ResourceVersion:23831941,Generation:0,CreationTimestamp:2020-02-10 14:33:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb 10 14:33:36.327: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8574,SelfLink:/api/v1/namespaces/watch-8574/configmaps/e2e-watch-test-label-changed,UID:a10df97a-40a1-47db-be61-cbbfc521f683,ResourceVersion:23831956,Generation:0,CreationTimestamp:2020-02-10 14:33:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 10 14:33:36.328: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8574,SelfLink:/api/v1/namespaces/watch-8574/configmaps/e2e-watch-test-label-changed,UID:a10df97a-40a1-47db-be61-cbbfc521f683,ResourceVersion:23831957,Generation:0,CreationTimestamp:2020-02-10 14:33:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb 10 14:33:36.328: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8574,SelfLink:/api/v1/namespaces/watch-8574/configmaps/e2e-watch-test-label-changed,UID:a10df97a-40a1-47db-be61-cbbfc521f683,ResourceVersion:23831958,Generation:0,CreationTimestamp:2020-02-10 14:33:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:33:36.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8574" for this suite.
Feb 10 14:33:42.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:33:42.490: INFO: namespace watch-8574 deletion completed in 6.144375898s

• [SLOW TEST:16.778 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
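The selector-watch scenario above can be reproduced outside the suite with a labelled ConfigMap. This is a hedged sketch, not the suite's own fixture code: the object name, namespace, label, and `mutation` value are copied from the log lines above; everything else is an assumption.

```yaml
# ConfigMap matching the selector the test watches. Removing the label below
# produces a DELETED notification on the selector watch, and restoring it
# produces a fresh ADDED notification, as the log shows.
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-label-changed
  namespace: watch-8574
  labels:
    watch-this-configmap: label-changed-and-restored
data:
  mutation: "3"
```

A watch restricted to the selector, e.g. `kubectl get configmap -l watch-this-configmap=label-changed-and-restored --watch`, stops receiving events for the object as soon as the label no longer matches, which is the behavior this test asserts.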
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:33:42.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 10 14:33:42.643: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.037284ms)
Feb 10 14:33:42.651: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.659747ms)
Feb 10 14:33:42.658: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.516057ms)
Feb 10 14:33:42.666: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.368689ms)
Feb 10 14:33:42.671: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.867269ms)
Feb 10 14:33:42.678: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.440234ms)
Feb 10 14:33:42.685: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.267455ms)
Feb 10 14:33:42.690: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.295889ms)
Feb 10 14:33:42.697: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.247667ms)
Feb 10 14:33:42.703: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.616805ms)
Feb 10 14:33:42.710: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.072488ms)
Feb 10 14:33:42.716: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.392637ms)
Feb 10 14:33:42.784: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 67.754025ms)
Feb 10 14:33:42.794: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.733032ms)
Feb 10 14:33:42.800: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.723794ms)
Feb 10 14:33:42.807: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.983798ms)
Feb 10 14:33:42.814: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.052523ms)
Feb 10 14:33:42.822: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.448164ms)
Feb 10 14:33:42.829: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.471369ms)
Feb 10 14:33:42.835: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.917463ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:33:42.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8168" for this suite.
Feb 10 14:33:48.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:33:48.986: INFO: namespace proxy-8168 deletion completed in 6.146562773s

• [SLOW TEST:6.495 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
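The twenty numbered requests above all hit the node's `logs` proxy subresource through the apiserver. Assuming a working kubeconfig and the node name from this run (`iruya-node`), the same endpoint can be queried directly; this is an illustrative sketch against a live cluster, not part of the suite:

```shell
# List the node's log directory via the apiserver proxy subresource.
# The apiserver forwards the request to the kubelet on iruya-node,
# which is why each request in the log returns HTTP 200 with a file listing.
kubectl get --raw "/api/v1/nodes/iruya-node/proxy/logs/"

# Fetch one file from that listing, e.g. the alternatives.log seen above.
kubectl get --raw "/api/v1/nodes/iruya-node/proxy/logs/alternatives.log"
```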
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:33:48.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb 10 14:33:49.182: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4823,SelfLink:/api/v1/namespaces/watch-4823/configmaps/e2e-watch-test-watch-closed,UID:1ffaaac4-78c1-4a61-a915-c9f710c2f115,ResourceVersion:23831991,Generation:0,CreationTimestamp:2020-02-10 14:33:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 10 14:33:49.182: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4823,SelfLink:/api/v1/namespaces/watch-4823/configmaps/e2e-watch-test-watch-closed,UID:1ffaaac4-78c1-4a61-a915-c9f710c2f115,ResourceVersion:23831992,Generation:0,CreationTimestamp:2020-02-10 14:33:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb 10 14:33:49.323: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4823,SelfLink:/api/v1/namespaces/watch-4823/configmaps/e2e-watch-test-watch-closed,UID:1ffaaac4-78c1-4a61-a915-c9f710c2f115,ResourceVersion:23831993,Generation:0,CreationTimestamp:2020-02-10 14:33:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 10 14:33:49.324: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4823,SelfLink:/api/v1/namespaces/watch-4823/configmaps/e2e-watch-test-watch-closed,UID:1ffaaac4-78c1-4a61-a915-c9f710c2f115,ResourceVersion:23831994,Generation:0,CreationTimestamp:2020-02-10 14:33:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:33:49.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4823" for this suite.
Feb 10 14:33:57.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:33:57.469: INFO: namespace watch-4823 deletion completed in 8.127584011s

• [SLOW TEST:8.482 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
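The restart step above relies on the watch API accepting an explicit `resourceVersion` to resume from. As a sketch (the namespace and the last observed resourceVersion, 23831992, are taken from the log; this assumes a live cluster and is not the suite's own client code):

```shell
# Open a new watch on configmaps in watch-4823, resuming from the last
# resourceVersion the closed watch delivered. The apiserver replays every
# later change: the MODIFIED event (mutation: 2) and the DELETED event.
kubectl get --raw "/api/v1/namespaces/watch-4823/configmaps?watch=true&resourceVersion=23831992"
```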
SSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:33:57.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Feb 10 14:33:57.658: INFO: Waiting up to 5m0s for pod "client-containers-b74d7ed0-134b-453a-af80-3cadfa5b833c" in namespace "containers-194" to be "success or failure"
Feb 10 14:33:57.677: INFO: Pod "client-containers-b74d7ed0-134b-453a-af80-3cadfa5b833c": Phase="Pending", Reason="", readiness=false. Elapsed: 18.80594ms
Feb 10 14:33:59.691: INFO: Pod "client-containers-b74d7ed0-134b-453a-af80-3cadfa5b833c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032449137s
Feb 10 14:34:01.918: INFO: Pod "client-containers-b74d7ed0-134b-453a-af80-3cadfa5b833c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.260362363s
Feb 10 14:34:03.933: INFO: Pod "client-containers-b74d7ed0-134b-453a-af80-3cadfa5b833c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.275255172s
Feb 10 14:34:05.941: INFO: Pod "client-containers-b74d7ed0-134b-453a-af80-3cadfa5b833c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.283208s
Feb 10 14:34:07.952: INFO: Pod "client-containers-b74d7ed0-134b-453a-af80-3cadfa5b833c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.293654247s
Feb 10 14:34:09.964: INFO: Pod "client-containers-b74d7ed0-134b-453a-af80-3cadfa5b833c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.305557589s
Feb 10 14:34:11.975: INFO: Pod "client-containers-b74d7ed0-134b-453a-af80-3cadfa5b833c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.316895221s
Feb 10 14:34:13.984: INFO: Pod "client-containers-b74d7ed0-134b-453a-af80-3cadfa5b833c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.325746566s
STEP: Saw pod success
Feb 10 14:34:13.984: INFO: Pod "client-containers-b74d7ed0-134b-453a-af80-3cadfa5b833c" satisfied condition "success or failure"
Feb 10 14:34:13.987: INFO: Trying to get logs from node iruya-node pod client-containers-b74d7ed0-134b-453a-af80-3cadfa5b833c container test-container: 
STEP: delete the pod
Feb 10 14:34:14.195: INFO: Waiting for pod client-containers-b74d7ed0-134b-453a-af80-3cadfa5b833c to disappear
Feb 10 14:34:14.209: INFO: Pod client-containers-b74d7ed0-134b-453a-af80-3cadfa5b833c no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:34:14.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-194" for this suite.
Feb 10 14:34:20.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:34:20.457: INFO: namespace containers-194 deletion completed in 6.239024126s

• [SLOW TEST:22.988 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
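The "override command" pod created in this test can be approximated with the manifest below. The pod and container names, image, and arguments are illustrative assumptions; only the mechanism (setting `command:` to replace the image's default entrypoint) mirrors what the test exercises.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                   # assumed image; the suite uses its own test image
    command: ["/bin/echo"]           # replaces the image's default ENTRYPOINT
    args: ["override", "command"]    # replaces the image's default CMD
```

In Kubernetes, `command:` overrides the image's ENTRYPOINT and `args:` overrides its CMD; the test verifies the override took effect by reading the container's output before the pod reaches `Succeeded`.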
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:34:20.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 10 14:34:20.647: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb 10 14:34:20.711: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb 10 14:34:25.725: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 10 14:34:33.967: INFO: Creating deployment "test-rolling-update-deployment"
Feb 10 14:34:33.981: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb 10 14:34:33.999: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb 10 14:34:36.021: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Feb 10 14:34:36.025: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716942074, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716942074, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716942074, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716942074, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 10 14:34:38.033: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716942074, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716942074, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716942074, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716942074, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 10 14:34:40.034: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716942074, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716942074, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716942074, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716942074, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 10 14:34:42.060: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716942074, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716942074, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716942074, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716942074, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 10 14:34:44.039: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716942074, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716942074, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716942074, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716942074, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 10 14:34:46.140: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716942074, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716942074, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716942074, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716942074, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 10 14:34:48.032: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 10 14:34:48.042: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-8261,SelfLink:/apis/apps/v1/namespaces/deployment-8261/deployments/test-rolling-update-deployment,UID:75655577-0c9a-4a18-9201-1f60f7cde843,ResourceVersion:23832146,Generation:1,CreationTimestamp:2020-02-10 14:34:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-10 14:34:34 +0000 UTC 2020-02-10 14:34:34 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-10 14:34:47 +0000 UTC 2020-02-10 14:34:34 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb 10 14:34:48.046: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-8261,SelfLink:/apis/apps/v1/namespaces/deployment-8261/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:ded0b292-046a-4659-a0a5-e4ed65bec83b,ResourceVersion:23832134,Generation:1,CreationTimestamp:2020-02-10 14:34:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 75655577-0c9a-4a18-9201-1f60f7cde843 0xc002c864e7 0xc002c864e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 10 14:34:48.046: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb 10 14:34:48.046: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-8261,SelfLink:/apis/apps/v1/namespaces/deployment-8261/replicasets/test-rolling-update-controller,UID:c4b681d5-5628-4a5f-8ffb-af73693626d7,ResourceVersion:23832145,Generation:2,CreationTimestamp:2020-02-10 14:34:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 75655577-0c9a-4a18-9201-1f60f7cde843 0xc002c86417 0xc002c86418}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 10 14:34:48.050: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-rb5sj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-rb5sj,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-8261,SelfLink:/api/v1/namespaces/deployment-8261/pods/test-rolling-update-deployment-79f6b9d75c-rb5sj,UID:789ba6e4-75c4-438d-ba10-483446321407,ResourceVersion:23832133,Generation:0,CreationTimestamp:2020-02-10 14:34:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c ded0b292-046a-4659-a0a5-e4ed65bec83b 0xc002c86de7 0xc002c86de8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgthv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bgthv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-bgthv true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002c86e60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c86e80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:34:34 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:34:46 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:34:46 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-10 14:34:34 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-10 14:34:34 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-10 14:34:46 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://4a3444d6723a54206e1f46390577653037f1837ceed2fd4b3cf847cfdec5453d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:34:48.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8261" for this suite.
Feb 10 14:34:56.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:34:56.296: INFO: namespace deployment-8261 deletion completed in 8.242297728s

• [SLOW TEST:35.838 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:34:56.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-8804a827-f384-452a-a2fe-68f73aab450b
STEP: Creating a pod to test consume configMaps
Feb 10 14:34:56.679: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-16937ea2-b347-4d9b-9b26-a514bdee6001" in namespace "projected-9720" to be "success or failure"
Feb 10 14:34:56.692: INFO: Pod "pod-projected-configmaps-16937ea2-b347-4d9b-9b26-a514bdee6001": Phase="Pending", Reason="", readiness=false. Elapsed: 13.514912ms
Feb 10 14:34:58.702: INFO: Pod "pod-projected-configmaps-16937ea2-b347-4d9b-9b26-a514bdee6001": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023006667s
Feb 10 14:35:00.709: INFO: Pod "pod-projected-configmaps-16937ea2-b347-4d9b-9b26-a514bdee6001": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029578199s
Feb 10 14:35:02.725: INFO: Pod "pod-projected-configmaps-16937ea2-b347-4d9b-9b26-a514bdee6001": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046483033s
Feb 10 14:35:04.739: INFO: Pod "pod-projected-configmaps-16937ea2-b347-4d9b-9b26-a514bdee6001": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060091162s
Feb 10 14:35:06.748: INFO: Pod "pod-projected-configmaps-16937ea2-b347-4d9b-9b26-a514bdee6001": Phase="Pending", Reason="", readiness=false. Elapsed: 10.068753156s
Feb 10 14:35:08.758: INFO: Pod "pod-projected-configmaps-16937ea2-b347-4d9b-9b26-a514bdee6001": Phase="Pending", Reason="", readiness=false. Elapsed: 12.079545737s
Feb 10 14:35:10.764: INFO: Pod "pod-projected-configmaps-16937ea2-b347-4d9b-9b26-a514bdee6001": Phase="Pending", Reason="", readiness=false. Elapsed: 14.085003456s
Feb 10 14:35:12.775: INFO: Pod "pod-projected-configmaps-16937ea2-b347-4d9b-9b26-a514bdee6001": Phase="Pending", Reason="", readiness=false. Elapsed: 16.095838238s
Feb 10 14:35:14.827: INFO: Pod "pod-projected-configmaps-16937ea2-b347-4d9b-9b26-a514bdee6001": Phase="Pending", Reason="", readiness=false. Elapsed: 18.148187201s
Feb 10 14:35:16.844: INFO: Pod "pod-projected-configmaps-16937ea2-b347-4d9b-9b26-a514bdee6001": Phase="Pending", Reason="", readiness=false. Elapsed: 20.164663806s
Feb 10 14:35:18.855: INFO: Pod "pod-projected-configmaps-16937ea2-b347-4d9b-9b26-a514bdee6001": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.176269912s
STEP: Saw pod success
Feb 10 14:35:18.855: INFO: Pod "pod-projected-configmaps-16937ea2-b347-4d9b-9b26-a514bdee6001" satisfied condition "success or failure"
Feb 10 14:35:18.864: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-16937ea2-b347-4d9b-9b26-a514bdee6001 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 10 14:35:19.596: INFO: Waiting for pod pod-projected-configmaps-16937ea2-b347-4d9b-9b26-a514bdee6001 to disappear
Feb 10 14:35:19.634: INFO: Pod pod-projected-configmaps-16937ea2-b347-4d9b-9b26-a514bdee6001 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:35:19.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9720" for this suite.
Feb 10 14:35:25.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:35:25.836: INFO: namespace projected-9720 deletion completed in 6.183353932s

• [SLOW TEST:29.540 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:35:25.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Feb 10 14:35:26.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb 10 14:35:26.302: INFO: stderr: ""
Feb 10 14:35:26.302: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:35:26.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4899" for this suite.
Feb 10 14:35:34.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:35:34.437: INFO: namespace kubectl-4899 deletion completed in 8.126515615s

• [SLOW TEST:8.601 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:35:34.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Feb 10 14:35:49.263: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2893 pod-service-account-52ff65fc-bb4f-4b31-80c5-d4d2c92e84d9 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Feb 10 14:35:51.938: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2893 pod-service-account-52ff65fc-bb4f-4b31-80c5-d4d2c92e84d9 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Feb 10 14:35:52.586: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2893 pod-service-account-52ff65fc-bb4f-4b31-80c5-d4d2c92e84d9 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:35:53.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2893" for this suite.
Feb 10 14:35:59.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:35:59.340: INFO: namespace svcaccounts-2893 deletion completed in 6.283647423s

• [SLOW TEST:24.903 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:35:59.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 10 14:35:59.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-1902'
Feb 10 14:35:59.722: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 10 14:35:59.722: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Feb 10 14:36:03.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-1902'
Feb 10 14:36:04.409: INFO: stderr: ""
Feb 10 14:36:04.409: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:36:04.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1902" for this suite.
Feb 10 14:36:26.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:36:26.597: INFO: namespace kubectl-1902 deletion completed in 22.180046637s

• [SLOW TEST:27.256 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:36:26.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Feb 10 14:36:26.750: INFO: Waiting up to 5m0s for pod "var-expansion-4cbc331b-ff36-4a22-ace8-60e9fdc831dd" in namespace "var-expansion-3252" to be "success or failure"
Feb 10 14:36:26.768: INFO: Pod "var-expansion-4cbc331b-ff36-4a22-ace8-60e9fdc831dd": Phase="Pending", Reason="", readiness=false. Elapsed: 17.345577ms
Feb 10 14:36:28.774: INFO: Pod "var-expansion-4cbc331b-ff36-4a22-ace8-60e9fdc831dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023388851s
Feb 10 14:36:30.781: INFO: Pod "var-expansion-4cbc331b-ff36-4a22-ace8-60e9fdc831dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030096964s
Feb 10 14:36:32.793: INFO: Pod "var-expansion-4cbc331b-ff36-4a22-ace8-60e9fdc831dd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042951635s
Feb 10 14:36:34.804: INFO: Pod "var-expansion-4cbc331b-ff36-4a22-ace8-60e9fdc831dd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053976907s
Feb 10 14:36:36.872: INFO: Pod "var-expansion-4cbc331b-ff36-4a22-ace8-60e9fdc831dd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.121391004s
Feb 10 14:36:38.884: INFO: Pod "var-expansion-4cbc331b-ff36-4a22-ace8-60e9fdc831dd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.133040874s
Feb 10 14:36:41.084: INFO: Pod "var-expansion-4cbc331b-ff36-4a22-ace8-60e9fdc831dd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.333468137s
Feb 10 14:36:43.114: INFO: Pod "var-expansion-4cbc331b-ff36-4a22-ace8-60e9fdc831dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.363490083s
STEP: Saw pod success
Feb 10 14:36:43.114: INFO: Pod "var-expansion-4cbc331b-ff36-4a22-ace8-60e9fdc831dd" satisfied condition "success or failure"
Feb 10 14:36:43.118: INFO: Trying to get logs from node iruya-node pod var-expansion-4cbc331b-ff36-4a22-ace8-60e9fdc831dd container dapi-container: 
STEP: delete the pod
Feb 10 14:36:43.243: INFO: Waiting for pod var-expansion-4cbc331b-ff36-4a22-ace8-60e9fdc831dd to disappear
Feb 10 14:36:43.264: INFO: Pod var-expansion-4cbc331b-ff36-4a22-ace8-60e9fdc831dd no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:36:43.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3252" for this suite.
Feb 10 14:36:51.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:36:51.481: INFO: namespace var-expansion-3252 deletion completed in 8.211044265s

• [SLOW TEST:24.883 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:36:51.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 10 14:36:51.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-1375'
Feb 10 14:36:51.959: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 10 14:36:51.959: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb 10 14:36:52.119: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-tsd2n]
Feb 10 14:36:52.119: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-tsd2n" in namespace "kubectl-1375" to be "running and ready"
Feb 10 14:36:52.129: INFO: Pod "e2e-test-nginx-rc-tsd2n": Phase="Pending", Reason="", readiness=false. Elapsed: 9.774397ms
Feb 10 14:36:54.135: INFO: Pod "e2e-test-nginx-rc-tsd2n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015550933s
Feb 10 14:36:56.143: INFO: Pod "e2e-test-nginx-rc-tsd2n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023942558s
Feb 10 14:36:58.152: INFO: Pod "e2e-test-nginx-rc-tsd2n": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032676139s
Feb 10 14:37:00.168: INFO: Pod "e2e-test-nginx-rc-tsd2n": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048021495s
Feb 10 14:37:02.184: INFO: Pod "e2e-test-nginx-rc-tsd2n": Phase="Pending", Reason="", readiness=false. Elapsed: 10.06404205s
Feb 10 14:37:04.195: INFO: Pod "e2e-test-nginx-rc-tsd2n": Phase="Pending", Reason="", readiness=false. Elapsed: 12.074988865s
Feb 10 14:37:06.206: INFO: Pod "e2e-test-nginx-rc-tsd2n": Phase="Pending", Reason="", readiness=false. Elapsed: 14.086413146s
Feb 10 14:37:08.218: INFO: Pod "e2e-test-nginx-rc-tsd2n": Phase="Running", Reason="", readiness=true. Elapsed: 16.098214814s
Feb 10 14:37:08.218: INFO: Pod "e2e-test-nginx-rc-tsd2n" satisfied condition "running and ready"
Feb 10 14:37:08.218: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-tsd2n]
Feb 10 14:37:08.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-1375'
Feb 10 14:37:08.430: INFO: stderr: ""
Feb 10 14:37:08.430: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Feb 10 14:37:08.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-1375'
Feb 10 14:37:08.723: INFO: stderr: ""
Feb 10 14:37:08.724: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:37:08.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1375" for this suite.
Feb 10 14:37:32.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:37:32.886: INFO: namespace kubectl-1375 deletion completed in 24.15539542s

• [SLOW TEST:41.404 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:37:32.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-5b0ad1a6-a832-44ff-b8e0-3a966540c7bc
STEP: Creating a pod to test consume secrets
Feb 10 14:37:33.126: INFO: Waiting up to 5m0s for pod "pod-secrets-de27d6d5-44c8-469f-950d-24a1501d0964" in namespace "secrets-1997" to be "success or failure"
Feb 10 14:37:33.133: INFO: Pod "pod-secrets-de27d6d5-44c8-469f-950d-24a1501d0964": Phase="Pending", Reason="", readiness=false. Elapsed: 6.96638ms
Feb 10 14:37:35.152: INFO: Pod "pod-secrets-de27d6d5-44c8-469f-950d-24a1501d0964": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025410012s
Feb 10 14:37:37.167: INFO: Pod "pod-secrets-de27d6d5-44c8-469f-950d-24a1501d0964": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040979641s
Feb 10 14:37:39.175: INFO: Pod "pod-secrets-de27d6d5-44c8-469f-950d-24a1501d0964": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0487793s
Feb 10 14:37:41.334: INFO: Pod "pod-secrets-de27d6d5-44c8-469f-950d-24a1501d0964": Phase="Pending", Reason="", readiness=false. Elapsed: 8.207756366s
Feb 10 14:37:43.345: INFO: Pod "pod-secrets-de27d6d5-44c8-469f-950d-24a1501d0964": Phase="Pending", Reason="", readiness=false. Elapsed: 10.219310196s
Feb 10 14:37:46.158: INFO: Pod "pod-secrets-de27d6d5-44c8-469f-950d-24a1501d0964": Phase="Pending", Reason="", readiness=false. Elapsed: 13.03197184s
Feb 10 14:37:48.168: INFO: Pod "pod-secrets-de27d6d5-44c8-469f-950d-24a1501d0964": Phase="Running", Reason="", readiness=true. Elapsed: 15.041542687s
Feb 10 14:37:50.181: INFO: Pod "pod-secrets-de27d6d5-44c8-469f-950d-24a1501d0964": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.054711278s
STEP: Saw pod success
Feb 10 14:37:50.181: INFO: Pod "pod-secrets-de27d6d5-44c8-469f-950d-24a1501d0964" satisfied condition "success or failure"
Feb 10 14:37:50.185: INFO: Trying to get logs from node iruya-node pod pod-secrets-de27d6d5-44c8-469f-950d-24a1501d0964 container secret-volume-test: 
STEP: delete the pod
Feb 10 14:37:50.372: INFO: Waiting for pod pod-secrets-de27d6d5-44c8-469f-950d-24a1501d0964 to disappear
Feb 10 14:37:50.397: INFO: Pod pod-secrets-de27d6d5-44c8-469f-950d-24a1501d0964 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:37:50.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1997" for this suite.
Feb 10 14:37:56.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:37:56.632: INFO: namespace secrets-1997 deletion completed in 6.224813303s

• [SLOW TEST:23.746 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:37:56.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-5c795ba0-a7b2-4712-ba4b-37460dc9bff7
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:37:56.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-956" for this suite.
Feb 10 14:38:02.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:38:02.932: INFO: namespace configmap-956 deletion completed in 6.128509961s

• [SLOW TEST:6.299 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:38:02.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:38:23.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-235" for this suite.
Feb 10 14:38:29.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:38:29.343: INFO: namespace kubelet-test-235 deletion completed in 6.151101414s

• [SLOW TEST:26.412 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:38:29.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 10 14:38:50.176: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:38:50.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1226" for this suite.
Feb 10 14:38:56.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:38:56.486: INFO: namespace container-runtime-1226 deletion completed in 6.14162159s

• [SLOW TEST:27.141 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:38:56.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 10 14:39:13.137: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:39:13.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7446" for this suite.
Feb 10 14:39:21.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:39:21.530: INFO: namespace container-runtime-7446 deletion completed in 8.205013494s

• [SLOW TEST:25.043 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:39:21.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-f511b9b8-f9d9-4d35-a1bc-dc7bdec73ef0
STEP: Creating secret with name secret-projected-all-test-volume-f24ceb93-7d57-4b94-bcfb-d58489a51d7d
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb 10 14:39:21.807: INFO: Waiting up to 5m0s for pod "projected-volume-b19800e9-f48f-4680-8658-4dc163db1236" in namespace "projected-910" to be "success or failure"
Feb 10 14:39:21.918: INFO: Pod "projected-volume-b19800e9-f48f-4680-8658-4dc163db1236": Phase="Pending", Reason="", readiness=false. Elapsed: 110.812577ms
Feb 10 14:39:23.936: INFO: Pod "projected-volume-b19800e9-f48f-4680-8658-4dc163db1236": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128681785s
Feb 10 14:39:25.952: INFO: Pod "projected-volume-b19800e9-f48f-4680-8658-4dc163db1236": Phase="Pending", Reason="", readiness=false. Elapsed: 4.145343096s
Feb 10 14:39:27.969: INFO: Pod "projected-volume-b19800e9-f48f-4680-8658-4dc163db1236": Phase="Pending", Reason="", readiness=false. Elapsed: 6.161742744s
Feb 10 14:39:30.037: INFO: Pod "projected-volume-b19800e9-f48f-4680-8658-4dc163db1236": Phase="Pending", Reason="", readiness=false. Elapsed: 8.230224903s
Feb 10 14:39:32.046: INFO: Pod "projected-volume-b19800e9-f48f-4680-8658-4dc163db1236": Phase="Pending", Reason="", readiness=false. Elapsed: 10.238502832s
Feb 10 14:39:34.067: INFO: Pod "projected-volume-b19800e9-f48f-4680-8658-4dc163db1236": Phase="Pending", Reason="", readiness=false. Elapsed: 12.259644656s
Feb 10 14:39:36.076: INFO: Pod "projected-volume-b19800e9-f48f-4680-8658-4dc163db1236": Phase="Pending", Reason="", readiness=false. Elapsed: 14.268996564s
Feb 10 14:39:38.142: INFO: Pod "projected-volume-b19800e9-f48f-4680-8658-4dc163db1236": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.334519988s
STEP: Saw pod success
Feb 10 14:39:38.142: INFO: Pod "projected-volume-b19800e9-f48f-4680-8658-4dc163db1236" satisfied condition "success or failure"
Feb 10 14:39:38.147: INFO: Trying to get logs from node iruya-node pod projected-volume-b19800e9-f48f-4680-8658-4dc163db1236 container projected-all-volume-test: 
STEP: delete the pod
Feb 10 14:39:38.358: INFO: Waiting for pod projected-volume-b19800e9-f48f-4680-8658-4dc163db1236 to disappear
Feb 10 14:39:38.373: INFO: Pod projected-volume-b19800e9-f48f-4680-8658-4dc163db1236 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:39:38.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-910" for this suite.
Feb 10 14:39:44.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:39:44.751: INFO: namespace projected-910 deletion completed in 6.369254797s

• [SLOW TEST:23.221 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:39:44.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-fc44546d-2ac6-44b4-bdec-cceedb782bf3
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:39:44.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2735" for this suite.
Feb 10 14:39:50.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:39:51.032: INFO: namespace secrets-2735 deletion completed in 6.125130009s

• [SLOW TEST:6.280 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:39:51.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-12afb443-92b4-4833-8ddc-d2a5ab1775da
STEP: Creating a pod to test consume secrets
Feb 10 14:39:51.205: INFO: Waiting up to 5m0s for pod "pod-secrets-9a84516a-edc7-4ab6-929a-a7fe49392a9e" in namespace "secrets-7854" to be "success or failure"
Feb 10 14:39:51.222: INFO: Pod "pod-secrets-9a84516a-edc7-4ab6-929a-a7fe49392a9e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.55925ms
Feb 10 14:39:53.230: INFO: Pod "pod-secrets-9a84516a-edc7-4ab6-929a-a7fe49392a9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02485118s
Feb 10 14:39:55.237: INFO: Pod "pod-secrets-9a84516a-edc7-4ab6-929a-a7fe49392a9e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03198429s
Feb 10 14:39:57.256: INFO: Pod "pod-secrets-9a84516a-edc7-4ab6-929a-a7fe49392a9e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050669615s
Feb 10 14:39:59.263: INFO: Pod "pod-secrets-9a84516a-edc7-4ab6-929a-a7fe49392a9e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057814637s
Feb 10 14:40:01.277: INFO: Pod "pod-secrets-9a84516a-edc7-4ab6-929a-a7fe49392a9e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.072162144s
Feb 10 14:40:03.286: INFO: Pod "pod-secrets-9a84516a-edc7-4ab6-929a-a7fe49392a9e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.081019386s
Feb 10 14:40:05.295: INFO: Pod "pod-secrets-9a84516a-edc7-4ab6-929a-a7fe49392a9e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.090052159s
Feb 10 14:40:07.302: INFO: Pod "pod-secrets-9a84516a-edc7-4ab6-929a-a7fe49392a9e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.09713671s
Feb 10 14:40:09.320: INFO: Pod "pod-secrets-9a84516a-edc7-4ab6-929a-a7fe49392a9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.115053282s
STEP: Saw pod success
Feb 10 14:40:09.320: INFO: Pod "pod-secrets-9a84516a-edc7-4ab6-929a-a7fe49392a9e" satisfied condition "success or failure"
Feb 10 14:40:09.334: INFO: Trying to get logs from node iruya-node pod pod-secrets-9a84516a-edc7-4ab6-929a-a7fe49392a9e container secret-volume-test: 
STEP: delete the pod
Feb 10 14:40:10.082: INFO: Waiting for pod pod-secrets-9a84516a-edc7-4ab6-929a-a7fe49392a9e to disappear
Feb 10 14:40:10.092: INFO: Pod pod-secrets-9a84516a-edc7-4ab6-929a-a7fe49392a9e no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:40:10.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7854" for this suite.
Feb 10 14:40:16.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:40:16.288: INFO: namespace secrets-7854 deletion completed in 6.117798156s

• [SLOW TEST:25.256 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:40:16.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 10 14:40:16.434: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 10 14:40:16.454: INFO: Waiting for terminating namespaces to be deleted...
Feb 10 14:40:16.456: INFO: 
Logging pods the kubelet thinks is on node iruya-node before test
Feb 10 14:40:16.470: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb 10 14:40:16.470: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 10 14:40:16.470: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb 10 14:40:16.470: INFO: 	Container weave ready: true, restart count 0
Feb 10 14:40:16.470: INFO: 	Container weave-npc ready: true, restart count 0
Feb 10 14:40:16.470: INFO: 
Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Feb 10 14:40:16.484: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb 10 14:40:16.484: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb 10 14:40:16.484: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb 10 14:40:16.484: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb 10 14:40:16.484: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 10 14:40:16.484: INFO: 	Container coredns ready: true, restart count 0
Feb 10 14:40:16.484: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb 10 14:40:16.484: INFO: 	Container etcd ready: true, restart count 0
Feb 10 14:40:16.484: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb 10 14:40:16.484: INFO: 	Container weave ready: true, restart count 0
Feb 10 14:40:16.484: INFO: 	Container weave-npc ready: true, restart count 0
Feb 10 14:40:16.484: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 10 14:40:16.484: INFO: 	Container coredns ready: true, restart count 0
Feb 10 14:40:16.484: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb 10 14:40:16.484: INFO: 	Container kube-controller-manager ready: true, restart count 21
Feb 10 14:40:16.484: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb 10 14:40:16.484: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Feb 10 14:40:16.678: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb 10 14:40:16.678: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb 10 14:40:16.678: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb 10 14:40:16.678: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Feb 10 14:40:16.678: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Feb 10 14:40:16.678: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb 10 14:40:16.678: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Feb 10 14:40:16.678: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb 10 14:40:16.678: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Feb 10 14:40:16.678: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-425fb44f-d1da-4b06-96b9-8a221e35a7e6.15f21175c6c0e1c8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5010/filler-pod-425fb44f-d1da-4b06-96b9-8a221e35a7e6 to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-425fb44f-d1da-4b06-96b9-8a221e35a7e6.15f21177a838fd3b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-425fb44f-d1da-4b06-96b9-8a221e35a7e6.15f211797267e838], Reason = [Created], Message = [Created container filler-pod-425fb44f-d1da-4b06-96b9-8a221e35a7e6]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-425fb44f-d1da-4b06-96b9-8a221e35a7e6.15f211799f24ff98], Reason = [Started], Message = [Started container filler-pod-425fb44f-d1da-4b06-96b9-8a221e35a7e6]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a9a64ffe-ffdd-4a6a-8a3f-0bc12eb4c7c4.15f21175d2066865], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5010/filler-pod-a9a64ffe-ffdd-4a6a-8a3f-0bc12eb4c7c4 to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a9a64ffe-ffdd-4a6a-8a3f-0bc12eb4c7c4.15f21177ef62ce34], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a9a64ffe-ffdd-4a6a-8a3f-0bc12eb4c7c4.15f21179bff6fcd3], Reason = [Created], Message = [Created container filler-pod-a9a64ffe-ffdd-4a6a-8a3f-0bc12eb4c7c4]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a9a64ffe-ffdd-4a6a-8a3f-0bc12eb4c7c4.15f21179e83462c2], Reason = [Started], Message = [Started container filler-pod-a9a64ffe-ffdd-4a6a-8a3f-0bc12eb4c7c4]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f2117a7dd79f6d], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:40:38.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5010" for this suite.
Feb 10 14:40:46.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:40:47.270: INFO: namespace sched-pred-5010 deletion completed in 9.140211143s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:30.982 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:40:47.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 10 14:40:47.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3300'
Feb 10 14:40:48.151: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 10 14:40:48.151: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Feb 10 14:40:50.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-3300'
Feb 10 14:40:52.048: INFO: stderr: ""
Feb 10 14:40:52.048: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:40:52.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3300" for this suite.
Feb 10 14:41:14.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:41:14.866: INFO: namespace kubectl-3300 deletion completed in 22.809645043s

• [SLOW TEST:27.595 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:41:14.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Geting the pod
STEP: Reading file content from the nginx-container
Feb 10 14:41:35.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-1983c4d0-d455-434e-89bd-684036cd4eaf -c busybox-main-container --namespace=emptydir-1624 -- cat /usr/share/volumeshare/shareddata.txt'
Feb 10 14:41:35.756: INFO: stderr: "I0210 14:41:35.396236    2833 log.go:172] (0xc000126dc0) (0xc000676be0) Create stream\nI0210 14:41:35.396313    2833 log.go:172] (0xc000126dc0) (0xc000676be0) Stream added, broadcasting: 1\nI0210 14:41:35.403824    2833 log.go:172] (0xc000126dc0) Reply frame received for 1\nI0210 14:41:35.403850    2833 log.go:172] (0xc000126dc0) (0xc0007d8000) Create stream\nI0210 14:41:35.403857    2833 log.go:172] (0xc000126dc0) (0xc0007d8000) Stream added, broadcasting: 3\nI0210 14:41:35.405098    2833 log.go:172] (0xc000126dc0) Reply frame received for 3\nI0210 14:41:35.405126    2833 log.go:172] (0xc000126dc0) (0xc0007d80a0) Create stream\nI0210 14:41:35.405134    2833 log.go:172] (0xc000126dc0) (0xc0007d80a0) Stream added, broadcasting: 5\nI0210 14:41:35.407695    2833 log.go:172] (0xc000126dc0) Reply frame received for 5\nI0210 14:41:35.586213    2833 log.go:172] (0xc000126dc0) Data frame received for 3\nI0210 14:41:35.586778    2833 log.go:172] (0xc0007d8000) (3) Data frame handling\nI0210 14:41:35.586848    2833 log.go:172] (0xc0007d8000) (3) Data frame sent\nI0210 14:41:35.748601    2833 log.go:172] (0xc000126dc0) (0xc0007d8000) Stream removed, broadcasting: 3\nI0210 14:41:35.748699    2833 log.go:172] (0xc000126dc0) Data frame received for 1\nI0210 14:41:35.748710    2833 log.go:172] (0xc000676be0) (1) Data frame handling\nI0210 14:41:35.748717    2833 log.go:172] (0xc000676be0) (1) Data frame sent\nI0210 14:41:35.748765    2833 log.go:172] (0xc000126dc0) (0xc000676be0) Stream removed, broadcasting: 1\nI0210 14:41:35.748862    2833 log.go:172] (0xc000126dc0) (0xc0007d80a0) Stream removed, broadcasting: 5\nI0210 14:41:35.748970    2833 log.go:172] (0xc000126dc0) Go away received\nI0210 14:41:35.749204    2833 log.go:172] (0xc000126dc0) (0xc000676be0) Stream removed, broadcasting: 1\nI0210 14:41:35.749234    2833 log.go:172] (0xc000126dc0) (0xc0007d8000) Stream removed, broadcasting: 3\nI0210 14:41:35.749239    2833 log.go:172] (0xc000126dc0) (0xc0007d80a0) Stream removed, broadcasting: 5\n"
Feb 10 14:41:35.757: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:41:35.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1624" for this suite.
Feb 10 14:41:41.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:41:41.965: INFO: namespace emptydir-1624 deletion completed in 6.196676237s

• [SLOW TEST:27.099 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:41:41.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Feb 10 14:41:42.123: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb 10 14:41:42.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8924'
Feb 10 14:41:42.813: INFO: stderr: ""
Feb 10 14:41:42.813: INFO: stdout: "service/redis-slave created\n"
Feb 10 14:41:42.814: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb 10 14:41:42.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8924'
Feb 10 14:41:43.444: INFO: stderr: ""
Feb 10 14:41:43.444: INFO: stdout: "service/redis-master created\n"
Feb 10 14:41:43.444: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb 10 14:41:43.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8924'
Feb 10 14:41:44.218: INFO: stderr: ""
Feb 10 14:41:44.218: INFO: stdout: "service/frontend created\n"
Feb 10 14:41:44.218: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb 10 14:41:44.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8924'
Feb 10 14:41:44.736: INFO: stderr: ""
Feb 10 14:41:44.736: INFO: stdout: "deployment.apps/frontend created\n"
Feb 10 14:41:44.736: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 10 14:41:44.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8924'
Feb 10 14:41:45.384: INFO: stderr: ""
Feb 10 14:41:45.384: INFO: stdout: "deployment.apps/redis-master created\n"
Feb 10 14:41:45.385: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb 10 14:41:45.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8924'
Feb 10 14:41:48.084: INFO: stderr: ""
Feb 10 14:41:48.084: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Feb 10 14:41:48.084: INFO: Waiting for all frontend pods to be Running.
Feb 10 14:42:28.138: INFO: Waiting for frontend to serve content.
Feb 10 14:42:28.411: INFO: Trying to add a new entry to the guestbook.
Feb 10 14:42:28.504: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Feb 10 14:42:28.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8924'
Feb 10 14:42:28.792: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 10 14:42:28.792: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb 10 14:42:28.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8924'
Feb 10 14:42:29.142: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 10 14:42:29.142: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 10 14:42:29.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8924'
Feb 10 14:42:29.415: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 10 14:42:29.415: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 10 14:42:29.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8924'
Feb 10 14:42:29.599: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 10 14:42:29.599: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 10 14:42:29.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8924'
Feb 10 14:42:29.776: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 10 14:42:29.776: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 10 14:42:29.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8924'
Feb 10 14:42:30.186: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 10 14:42:30.186: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:42:30.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8924" for this suite.
Feb 10 14:43:20.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:43:20.518: INFO: namespace kubectl-8924 deletion completed in 50.234465285s

• [SLOW TEST:98.552 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
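The guestbook manifests echoed above wire the `frontend` Service to the `frontend` Deployment purely through label selection: the Service's `spec.selector` must be a subset of the pod template's labels. A minimal sketch of that matching rule (plain Python, no cluster required; the manifests are re-typed here from the log output above):

```python
# Sketch of the label-selection rule linking the frontend Service to
# the frontend Deployment's pods. Manifests are re-typed from the log;
# Kubernetes equality-based selection requires every selector key/value
# pair to be present in the pod's labels.

frontend_service = {
    "kind": "Service",
    "metadata": {"name": "frontend"},
    "spec": {
        "selector": {"app": "guestbook", "tier": "frontend"},
        "ports": [{"port": 80}],
    },
}

# Labels from the frontend Deployment's pod template in the log.
frontend_pod_labels = {"app": "guestbook", "tier": "frontend"}


def selector_matches(selector: dict, labels: dict) -> bool:
    """True when every selector key/value pair appears in labels."""
    return all(labels.get(k) == v for k, v in selector.items())


assert selector_matches(frontend_service["spec"]["selector"],
                        frontend_pod_labels)
# A redis-master pod (role: master, tier: backend) would not match:
assert not selector_matches(frontend_service["spec"]["selector"],
                            {"app": "redis", "role": "master",
                             "tier": "backend"})
```

Extra labels on the pod (such as `role` on the redis pods) are fine; only the selector's keys are checked, which is why the redis-master Service above can select on three labels while the frontend Service selects on two.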
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:43:20.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 10 14:43:21.092: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"f98792e8-b532-4bf2-a055-198f64bae7eb", Controller:(*bool)(0xc002e7e0d2), BlockOwnerDeletion:(*bool)(0xc002e7e0d3)}}
Feb 10 14:43:21.165: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"e4db42a5-af5d-41f2-9bf5-55ca62d6bc82", Controller:(*bool)(0xc0019ff21a), BlockOwnerDeletion:(*bool)(0xc0019ff21b)}}
Feb 10 14:43:21.335: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"633b3763-f5bd-4c32-a01e-14c2980dcd5c", Controller:(*bool)(0xc0024d19f2), BlockOwnerDeletion:(*bool)(0xc0024d19f3)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:43:31.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6813" for this suite.
Feb 10 14:43:37.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:43:37.534: INFO: namespace gc-6813 deletion completed in 6.162695772s

• [SLOW TEST:17.015 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
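The garbage-collector test above deliberately builds a cycle of owner references: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2 (the UIDs in the log are per-run). A collector that only deleted an object after its owner was gone would deadlock on such a cycle, which is exactly what the test asserts does not happen. A small sketch of the cycle the log describes (pod names taken from the log; the traversal logic is illustrative, not the actual GC algorithm):

```python
# The ownership graph printed by the test: each pod's sole owner
# reference points at the next pod, closing a 3-cycle.
owners = {"pod1": "pod3", "pod2": "pod1", "pod3": "pod2"}


def in_cycle(name: str, owners: dict) -> bool:
    """Follow owner references from `name`; True if we return to it."""
    seen, cur = set(), name
    while cur in owners and cur not in seen:
        seen.add(cur)
        cur = owners[cur]
    return cur == name


# Every pod in the test sits on the cycle.
assert all(in_cycle(pod, owners) for pod in owners)

# An acyclic chain (hypothetical) terminates at a root instead.
assert not in_cycle("child", {"child": "root"})
```

The real garbage collector avoids the deadlock by operating on the dependency graph as a whole rather than waiting on individual owners, so all three pods are collected despite none having a live root.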
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:43:37.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-aed602d3-bb2a-452c-b736-2d18e1700f17
STEP: Creating a pod to test consume configMaps
Feb 10 14:43:37.883: INFO: Waiting up to 5m0s for pod "pod-configmaps-9168cd1f-5478-430a-996a-346bf1cca07b" in namespace "configmap-2198" to be "success or failure"
Feb 10 14:43:37.899: INFO: Pod "pod-configmaps-9168cd1f-5478-430a-996a-346bf1cca07b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.139007ms
Feb 10 14:43:39.913: INFO: Pod "pod-configmaps-9168cd1f-5478-430a-996a-346bf1cca07b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029149818s
Feb 10 14:43:41.927: INFO: Pod "pod-configmaps-9168cd1f-5478-430a-996a-346bf1cca07b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043718357s
Feb 10 14:43:43.942: INFO: Pod "pod-configmaps-9168cd1f-5478-430a-996a-346bf1cca07b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058716349s
Feb 10 14:43:45.950: INFO: Pod "pod-configmaps-9168cd1f-5478-430a-996a-346bf1cca07b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066457553s
Feb 10 14:43:47.960: INFO: Pod "pod-configmaps-9168cd1f-5478-430a-996a-346bf1cca07b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.076422708s
Feb 10 14:43:49.966: INFO: Pod "pod-configmaps-9168cd1f-5478-430a-996a-346bf1cca07b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.082246679s
Feb 10 14:43:51.981: INFO: Pod "pod-configmaps-9168cd1f-5478-430a-996a-346bf1cca07b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.097326556s
Feb 10 14:43:53.992: INFO: Pod "pod-configmaps-9168cd1f-5478-430a-996a-346bf1cca07b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.108265919s
STEP: Saw pod success
Feb 10 14:43:53.992: INFO: Pod "pod-configmaps-9168cd1f-5478-430a-996a-346bf1cca07b" satisfied condition "success or failure"
Feb 10 14:43:54.006: INFO: Trying to get logs from node iruya-node pod pod-configmaps-9168cd1f-5478-430a-996a-346bf1cca07b container configmap-volume-test: 
STEP: delete the pod
Feb 10 14:43:54.096: INFO: Waiting for pod pod-configmaps-9168cd1f-5478-430a-996a-346bf1cca07b to disappear
Feb 10 14:43:54.099: INFO: Pod pod-configmaps-9168cd1f-5478-430a-996a-346bf1cca07b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:43:54.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2198" for this suite.
Feb 10 14:44:00.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:44:00.349: INFO: namespace configmap-2198 deletion completed in 6.244349051s

• [SLOW TEST:22.815 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
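The repeated `Waiting up to 5m0s for pod … Elapsed: …` lines above are the framework polling the pod's phase (Pending, then Succeeded) until a condition holds or a timeout expires. A generic sketch of that poll-with-timeout pattern (the function name and intervals are illustrative, not the e2e framework's actual API):

```python
import time


def wait_for(condition, timeout=300.0, interval=2.0):
    """Poll `condition` every `interval` seconds until it returns True
    or `timeout` seconds have elapsed. Returns the final outcome."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        if condition():
            return True
        time.sleep(interval)
    return False


# Example: a condition that becomes true on the third poll,
# mimicking a pod that moves Pending -> Pending -> Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
assert wait_for(lambda: next(phases) == "Succeeded",
                timeout=5.0, interval=0.01)
```

The log's roughly two-second spacing between `Elapsed` lines suggests a similar fixed polling interval; the 5m0s figure is the per-pod timeout after which the test would fail instead.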
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:44:00.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-9652
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9652 to expose endpoints map[]
Feb 10 14:44:02.923: INFO: Get endpoints failed (15.271262ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Feb 10 14:44:03.938: INFO: successfully validated that service endpoint-test2 in namespace services-9652 exposes endpoints map[] (1.02985857s elapsed)
STEP: Creating pod pod1 in namespace services-9652
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9652 to expose endpoints map[pod1:[80]]
Feb 10 14:44:08.032: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.079959003s elapsed, will retry)
Feb 10 14:44:13.227: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (9.275305184s elapsed, will retry)
Feb 10 14:44:17.274: INFO: successfully validated that service endpoint-test2 in namespace services-9652 exposes endpoints map[pod1:[80]] (13.321453669s elapsed)
STEP: Creating pod pod2 in namespace services-9652
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9652 to expose endpoints map[pod1:[80] pod2:[80]]
Feb 10 14:44:22.839: INFO: Unexpected endpoints: found map[91afdeb2-8a2d-455e-b7cc-926a53d50c19:[80]], expected map[pod1:[80] pod2:[80]] (5.559827126s elapsed, will retry)
Feb 10 14:44:31.472: INFO: successfully validated that service endpoint-test2 in namespace services-9652 exposes endpoints map[pod1:[80] pod2:[80]] (14.192915767s elapsed)
STEP: Deleting pod pod1 in namespace services-9652
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9652 to expose endpoints map[pod2:[80]]
Feb 10 14:44:32.569: INFO: successfully validated that service endpoint-test2 in namespace services-9652 exposes endpoints map[pod2:[80]] (1.088142472s elapsed)
STEP: Deleting pod pod2 in namespace services-9652
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9652 to expose endpoints map[]
Feb 10 14:44:36.038: INFO: successfully validated that service endpoint-test2 in namespace services-9652 exposes endpoints map[] (3.462425595s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:44:37.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9652" for this suite.
Feb 10 14:45:01.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:45:02.040: INFO: namespace services-9652 deletion completed in 24.137231838s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:61.690 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
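The Services test above validates a sequence of expected endpoint maps as pods are created and deleted: `map[]`, then `map[pod1:[80]]`, then `map[pod1:[80] pod2:[80]]`, then `map[pod2:[80]]`, then back to `map[]`. A sketch replaying those transitions as plain bookkeeping (the helper names are illustrative; the real endpoints controller reacts to pod readiness, which is why the log shows retries before each map converges):

```python
# Replay of the endpoint-map transitions the test log validates for
# service endpoint-test2. Each running, ready pod matching the service
# selector contributes its container port to the map.
endpoints: dict = {}


def pod_ready(name: str, port: int) -> None:
    endpoints[name] = [port]


def pod_deleted(name: str) -> None:
    endpoints.pop(name, None)


assert endpoints == {}                                  # map[]
pod_ready("pod1", 80)
assert endpoints == {"pod1": [80]}                      # map[pod1:[80]]
pod_ready("pod2", 80)
assert endpoints == {"pod1": [80], "pod2": [80]}        # both pods
pod_deleted("pod1")
assert endpoints == {"pod2": [80]}                      # map[pod2:[80]]
pod_deleted("pod2")
assert endpoints == {}                                  # map[] again
```

The "Unexpected endpoints … will retry" lines in the log are the interval between a pod being created and it becoming ready: the controller only adds ready pods, so the test polls until the observed map matches the expected one.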
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:45:02.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-dcbc4111-1300-4bee-9267-b6ddbf1663dd
STEP: Creating a pod to test consume secrets
Feb 10 14:45:02.491: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-85d13939-9825-4803-b6f2-353cea6962d8" in namespace "projected-3094" to be "success or failure"
Feb 10 14:45:02.504: INFO: Pod "pod-projected-secrets-85d13939-9825-4803-b6f2-353cea6962d8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.330752ms
Feb 10 14:45:04.515: INFO: Pod "pod-projected-secrets-85d13939-9825-4803-b6f2-353cea6962d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023991574s
Feb 10 14:45:06.533: INFO: Pod "pod-projected-secrets-85d13939-9825-4803-b6f2-353cea6962d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041464886s
Feb 10 14:45:08.546: INFO: Pod "pod-projected-secrets-85d13939-9825-4803-b6f2-353cea6962d8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054872902s
Feb 10 14:45:10.554: INFO: Pod "pod-projected-secrets-85d13939-9825-4803-b6f2-353cea6962d8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062673962s
Feb 10 14:45:12.567: INFO: Pod "pod-projected-secrets-85d13939-9825-4803-b6f2-353cea6962d8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.07586156s
Feb 10 14:45:15.566: INFO: Pod "pod-projected-secrets-85d13939-9825-4803-b6f2-353cea6962d8": Phase="Pending", Reason="", readiness=false. Elapsed: 13.07511495s
Feb 10 14:45:17.579: INFO: Pod "pod-projected-secrets-85d13939-9825-4803-b6f2-353cea6962d8": Phase="Pending", Reason="", readiness=false. Elapsed: 15.087805697s
Feb 10 14:45:19.588: INFO: Pod "pod-projected-secrets-85d13939-9825-4803-b6f2-353cea6962d8": Phase="Pending", Reason="", readiness=false. Elapsed: 17.096724807s
Feb 10 14:45:21.596: INFO: Pod "pod-projected-secrets-85d13939-9825-4803-b6f2-353cea6962d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.105072227s
STEP: Saw pod success
Feb 10 14:45:21.596: INFO: Pod "pod-projected-secrets-85d13939-9825-4803-b6f2-353cea6962d8" satisfied condition "success or failure"
Feb 10 14:45:21.600: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-85d13939-9825-4803-b6f2-353cea6962d8 container projected-secret-volume-test: 
STEP: delete the pod
Feb 10 14:45:21.896: INFO: Waiting for pod pod-projected-secrets-85d13939-9825-4803-b6f2-353cea6962d8 to disappear
Feb 10 14:45:21.932: INFO: Pod pod-projected-secrets-85d13939-9825-4803-b6f2-353cea6962d8 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:45:21.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3094" for this suite.
Feb 10 14:45:28.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:45:28.344: INFO: namespace projected-3094 deletion completed in 6.243453011s

• [SLOW TEST:26.304 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:45:28.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 10 14:45:43.383: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:45:43.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-285" for this suite.
Feb 10 14:45:49.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:45:49.799: INFO: namespace container-runtime-285 deletion completed in 6.339810997s

• [SLOW TEST:21.455 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
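The `FallbackToLogsOnError` test above checks that when a container fails without writing to its termination-message file, the kubelet falls back to the tail of the container's log, which is why the log line reports the expected message `DONE` matching the container's output. A simplified sketch of that selection rule (function and parameter names are hypothetical; the real kubelet also truncates the log tail to a size limit):

```python
def termination_message(policy: str, message_file: str,
                        exit_code: int, log_tail: str) -> str:
    """Simplified model of how the kubelet picks a termination message:
    the message file wins if non-empty; otherwise, under
    FallbackToLogsOnError, a failed container's log tail is used."""
    if message_file:
        return message_file
    if policy == "FallbackToLogsOnError" and exit_code != 0:
        return log_tail
    return ""


# The test's container exits non-zero with an empty message file and
# "DONE" in its log, so the log tail becomes the message.
assert termination_message("FallbackToLogsOnError", "", 1, "DONE") == "DONE"

# Under the default policy ("File"), no fallback occurs.
assert termination_message("File", "", 1, "DONE") == ""
```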
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:45:49.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-79b7b415-fa6b-43f9-8780-11c1e50d6a35
STEP: Creating a pod to test consume secrets
Feb 10 14:45:50.064: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-441a134a-09ee-4424-96e4-7b25fe54baa1" in namespace "projected-2744" to be "success or failure"
Feb 10 14:45:50.081: INFO: Pod "pod-projected-secrets-441a134a-09ee-4424-96e4-7b25fe54baa1": Phase="Pending", Reason="", readiness=false. Elapsed: 16.60316ms
Feb 10 14:45:52.088: INFO: Pod "pod-projected-secrets-441a134a-09ee-4424-96e4-7b25fe54baa1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023943312s
Feb 10 14:45:54.098: INFO: Pod "pod-projected-secrets-441a134a-09ee-4424-96e4-7b25fe54baa1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0337174s
Feb 10 14:45:56.110: INFO: Pod "pod-projected-secrets-441a134a-09ee-4424-96e4-7b25fe54baa1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046322241s
Feb 10 14:45:58.120: INFO: Pod "pod-projected-secrets-441a134a-09ee-4424-96e4-7b25fe54baa1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056205428s
Feb 10 14:46:00.129: INFO: Pod "pod-projected-secrets-441a134a-09ee-4424-96e4-7b25fe54baa1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.065117306s
Feb 10 14:46:02.139: INFO: Pod "pod-projected-secrets-441a134a-09ee-4424-96e4-7b25fe54baa1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.075376151s
Feb 10 14:46:04.145: INFO: Pod "pod-projected-secrets-441a134a-09ee-4424-96e4-7b25fe54baa1": Phase="Pending", Reason="", readiness=false. Elapsed: 14.081374876s
Feb 10 14:46:06.157: INFO: Pod "pod-projected-secrets-441a134a-09ee-4424-96e4-7b25fe54baa1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.093003388s
STEP: Saw pod success
Feb 10 14:46:06.157: INFO: Pod "pod-projected-secrets-441a134a-09ee-4424-96e4-7b25fe54baa1" satisfied condition "success or failure"
Feb 10 14:46:06.162: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-441a134a-09ee-4424-96e4-7b25fe54baa1 container secret-volume-test: 
STEP: delete the pod
Feb 10 14:46:06.381: INFO: Waiting for pod pod-projected-secrets-441a134a-09ee-4424-96e4-7b25fe54baa1 to disappear
Feb 10 14:46:06.390: INFO: Pod pod-projected-secrets-441a134a-09ee-4424-96e4-7b25fe54baa1 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:46:06.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2744" for this suite.
Feb 10 14:46:14.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:46:14.746: INFO: namespace projected-2744 deletion completed in 8.319224666s

• [SLOW TEST:24.945 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:46:14.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-85cfd077-5e2b-4530-8041-4c614a466405
STEP: Creating a pod to test consume secrets
Feb 10 14:46:15.172: INFO: Waiting up to 5m0s for pod "pod-secrets-7650f69e-c300-4a35-a0a0-e180387d7577" in namespace "secrets-2617" to be "success or failure"
Feb 10 14:46:15.350: INFO: Pod "pod-secrets-7650f69e-c300-4a35-a0a0-e180387d7577": Phase="Pending", Reason="", readiness=false. Elapsed: 178.243719ms
Feb 10 14:46:17.361: INFO: Pod "pod-secrets-7650f69e-c300-4a35-a0a0-e180387d7577": Phase="Pending", Reason="", readiness=false. Elapsed: 2.189133143s
Feb 10 14:46:19.925: INFO: Pod "pod-secrets-7650f69e-c300-4a35-a0a0-e180387d7577": Phase="Pending", Reason="", readiness=false. Elapsed: 4.752870493s
Feb 10 14:46:21.933: INFO: Pod "pod-secrets-7650f69e-c300-4a35-a0a0-e180387d7577": Phase="Pending", Reason="", readiness=false. Elapsed: 6.761016237s
Feb 10 14:46:23.948: INFO: Pod "pod-secrets-7650f69e-c300-4a35-a0a0-e180387d7577": Phase="Pending", Reason="", readiness=false. Elapsed: 8.776320291s
Feb 10 14:46:25.967: INFO: Pod "pod-secrets-7650f69e-c300-4a35-a0a0-e180387d7577": Phase="Pending", Reason="", readiness=false. Elapsed: 10.794676514s
Feb 10 14:46:27.982: INFO: Pod "pod-secrets-7650f69e-c300-4a35-a0a0-e180387d7577": Phase="Pending", Reason="", readiness=false. Elapsed: 12.810454494s
Feb 10 14:46:29.999: INFO: Pod "pod-secrets-7650f69e-c300-4a35-a0a0-e180387d7577": Phase="Pending", Reason="", readiness=false. Elapsed: 14.827058787s
Feb 10 14:46:32.012: INFO: Pod "pod-secrets-7650f69e-c300-4a35-a0a0-e180387d7577": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.839699083s
STEP: Saw pod success
Feb 10 14:46:32.012: INFO: Pod "pod-secrets-7650f69e-c300-4a35-a0a0-e180387d7577" satisfied condition "success or failure"
Feb 10 14:46:32.019: INFO: Trying to get logs from node iruya-node pod pod-secrets-7650f69e-c300-4a35-a0a0-e180387d7577 container secret-volume-test: 
STEP: delete the pod
Feb 10 14:46:32.249: INFO: Waiting for pod pod-secrets-7650f69e-c300-4a35-a0a0-e180387d7577 to disappear
Feb 10 14:46:32.265: INFO: Pod pod-secrets-7650f69e-c300-4a35-a0a0-e180387d7577 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:46:32.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2617" for this suite.
Feb 10 14:46:38.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:46:38.523: INFO: namespace secrets-2617 deletion completed in 6.248520438s
STEP: Destroying namespace "secret-namespace-2904" for this suite.
Feb 10 14:46:44.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:46:44.679: INFO: namespace secret-namespace-2904 deletion completed in 6.156119666s

• [SLOW TEST:29.932 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:46:44.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-grtc
STEP: Creating a pod to test atomic-volume-subpath
Feb 10 14:46:44.921: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-grtc" in namespace "subpath-5250" to be "success or failure"
Feb 10 14:46:44.974: INFO: Pod "pod-subpath-test-downwardapi-grtc": Phase="Pending", Reason="", readiness=false. Elapsed: 53.169107ms
Feb 10 14:46:47.006: INFO: Pod "pod-subpath-test-downwardapi-grtc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084906467s
Feb 10 14:46:49.016: INFO: Pod "pod-subpath-test-downwardapi-grtc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094825513s
Feb 10 14:46:52.237: INFO: Pod "pod-subpath-test-downwardapi-grtc": Phase="Pending", Reason="", readiness=false. Elapsed: 7.316184372s
Feb 10 14:46:54.244: INFO: Pod "pod-subpath-test-downwardapi-grtc": Phase="Pending", Reason="", readiness=false. Elapsed: 9.322813832s
Feb 10 14:46:56.572: INFO: Pod "pod-subpath-test-downwardapi-grtc": Phase="Pending", Reason="", readiness=false. Elapsed: 11.65092944s
Feb 10 14:46:58.600: INFO: Pod "pod-subpath-test-downwardapi-grtc": Phase="Pending", Reason="", readiness=false. Elapsed: 13.678290294s
Feb 10 14:47:00.615: INFO: Pod "pod-subpath-test-downwardapi-grtc": Phase="Pending", Reason="", readiness=false. Elapsed: 15.693930594s
Feb 10 14:47:02.748: INFO: Pod "pod-subpath-test-downwardapi-grtc": Phase="Running", Reason="", readiness=true. Elapsed: 17.826746671s
Feb 10 14:47:04.811: INFO: Pod "pod-subpath-test-downwardapi-grtc": Phase="Running", Reason="", readiness=true. Elapsed: 19.889640808s
Feb 10 14:47:07.004: INFO: Pod "pod-subpath-test-downwardapi-grtc": Phase="Running", Reason="", readiness=true. Elapsed: 22.082871716s
Feb 10 14:47:09.016: INFO: Pod "pod-subpath-test-downwardapi-grtc": Phase="Running", Reason="", readiness=true. Elapsed: 24.094812455s
Feb 10 14:47:11.027: INFO: Pod "pod-subpath-test-downwardapi-grtc": Phase="Running", Reason="", readiness=true. Elapsed: 26.106065083s
Feb 10 14:47:13.037: INFO: Pod "pod-subpath-test-downwardapi-grtc": Phase="Running", Reason="", readiness=true. Elapsed: 28.11542042s
Feb 10 14:47:15.048: INFO: Pod "pod-subpath-test-downwardapi-grtc": Phase="Running", Reason="", readiness=true. Elapsed: 30.126885151s
Feb 10 14:47:17.057: INFO: Pod "pod-subpath-test-downwardapi-grtc": Phase="Running", Reason="", readiness=true. Elapsed: 32.13574567s
Feb 10 14:47:19.070: INFO: Pod "pod-subpath-test-downwardapi-grtc": Phase="Running", Reason="", readiness=true. Elapsed: 34.148742916s
Feb 10 14:47:21.087: INFO: Pod "pod-subpath-test-downwardapi-grtc": Phase="Running", Reason="", readiness=true. Elapsed: 36.165443879s
Feb 10 14:47:24.311: INFO: Pod "pod-subpath-test-downwardapi-grtc": Phase="Running", Reason="", readiness=true. Elapsed: 39.389676916s
Feb 10 14:47:26.321: INFO: Pod "pod-subpath-test-downwardapi-grtc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 41.399256496s
STEP: Saw pod success
Feb 10 14:47:26.321: INFO: Pod "pod-subpath-test-downwardapi-grtc" satisfied condition "success or failure"
Feb 10 14:47:26.328: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-grtc container test-container-subpath-downwardapi-grtc: 
STEP: delete the pod
Feb 10 14:47:26.977: INFO: Waiting for pod pod-subpath-test-downwardapi-grtc to disappear
Feb 10 14:47:26.988: INFO: Pod pod-subpath-test-downwardapi-grtc no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-grtc
Feb 10 14:47:26.988: INFO: Deleting pod "pod-subpath-test-downwardapi-grtc" in namespace "subpath-5250"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:47:26.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5250" for this suite.
Feb 10 14:47:33.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:47:33.109: INFO: namespace subpath-5250 deletion completed in 6.113876018s

• [SLOW TEST:48.429 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:47:33.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Feb 10 14:47:33.353: INFO: Waiting up to 5m0s for pod "client-containers-34e0d264-ff27-4ccd-9d4d-d06b7d8fbabc" in namespace "containers-8154" to be "success or failure"
Feb 10 14:47:33.479: INFO: Pod "client-containers-34e0d264-ff27-4ccd-9d4d-d06b7d8fbabc": Phase="Pending", Reason="", readiness=false. Elapsed: 125.12841ms
Feb 10 14:47:35.487: INFO: Pod "client-containers-34e0d264-ff27-4ccd-9d4d-d06b7d8fbabc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133647126s
Feb 10 14:47:37.498: INFO: Pod "client-containers-34e0d264-ff27-4ccd-9d4d-d06b7d8fbabc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144559303s
Feb 10 14:47:40.249: INFO: Pod "client-containers-34e0d264-ff27-4ccd-9d4d-d06b7d8fbabc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.8958404s
Feb 10 14:47:42.260: INFO: Pod "client-containers-34e0d264-ff27-4ccd-9d4d-d06b7d8fbabc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.90673547s
Feb 10 14:47:44.269: INFO: Pod "client-containers-34e0d264-ff27-4ccd-9d4d-d06b7d8fbabc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.915115453s
Feb 10 14:47:46.287: INFO: Pod "client-containers-34e0d264-ff27-4ccd-9d4d-d06b7d8fbabc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.933324919s
Feb 10 14:47:48.297: INFO: Pod "client-containers-34e0d264-ff27-4ccd-9d4d-d06b7d8fbabc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.943615282s
Feb 10 14:47:50.311: INFO: Pod "client-containers-34e0d264-ff27-4ccd-9d4d-d06b7d8fbabc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.957646577s
STEP: Saw pod success
Feb 10 14:47:50.311: INFO: Pod "client-containers-34e0d264-ff27-4ccd-9d4d-d06b7d8fbabc" satisfied condition "success or failure"
Feb 10 14:47:50.315: INFO: Trying to get logs from node iruya-node pod client-containers-34e0d264-ff27-4ccd-9d4d-d06b7d8fbabc container test-container: 
STEP: delete the pod
Feb 10 14:47:50.489: INFO: Waiting for pod client-containers-34e0d264-ff27-4ccd-9d4d-d06b7d8fbabc to disappear
Feb 10 14:47:50.504: INFO: Pod client-containers-34e0d264-ff27-4ccd-9d4d-d06b7d8fbabc no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:47:50.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8154" for this suite.
Feb 10 14:47:58.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:47:58.805: INFO: namespace containers-8154 deletion completed in 8.292732815s

• [SLOW TEST:25.695 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:47:58.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb 10 14:48:02.264: INFO: Pod name wrapped-volume-race-5e818996-5795-4c80-9a86-62f614d0f46b: Found 0 pods out of 5
Feb 10 14:48:07.313: INFO: Pod name wrapped-volume-race-5e818996-5795-4c80-9a86-62f614d0f46b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-5e818996-5795-4c80-9a86-62f614d0f46b in namespace emptydir-wrapper-5170, will wait for the garbage collector to delete the pods
Feb 10 14:49:11.644: INFO: Deleting ReplicationController wrapped-volume-race-5e818996-5795-4c80-9a86-62f614d0f46b took: 24.6006ms
Feb 10 14:49:12.146: INFO: Terminating ReplicationController wrapped-volume-race-5e818996-5795-4c80-9a86-62f614d0f46b pods took: 501.910805ms
STEP: Creating RC which spawns configmap-volume pods
Feb 10 14:50:08.438: INFO: Pod name wrapped-volume-race-f4c64b0f-2f10-46d2-bcae-0b2f94b9eee6: Found 0 pods out of 5
Feb 10 14:50:13.658: INFO: Pod name wrapped-volume-race-f4c64b0f-2f10-46d2-bcae-0b2f94b9eee6: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-f4c64b0f-2f10-46d2-bcae-0b2f94b9eee6 in namespace emptydir-wrapper-5170, will wait for the garbage collector to delete the pods
Feb 10 14:51:09.904: INFO: Deleting ReplicationController wrapped-volume-race-f4c64b0f-2f10-46d2-bcae-0b2f94b9eee6 took: 19.460353ms
Feb 10 14:51:10.404: INFO: Terminating ReplicationController wrapped-volume-race-f4c64b0f-2f10-46d2-bcae-0b2f94b9eee6 pods took: 500.70524ms
STEP: Creating RC which spawns configmap-volume pods
Feb 10 14:52:07.837: INFO: Pod name wrapped-volume-race-f436562c-802a-40fb-bf9f-9d99fa00b99c: Found 0 pods out of 5
Feb 10 14:52:12.857: INFO: Pod name wrapped-volume-race-f436562c-802a-40fb-bf9f-9d99fa00b99c: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-f436562c-802a-40fb-bf9f-9d99fa00b99c in namespace emptydir-wrapper-5170, will wait for the garbage collector to delete the pods
Feb 10 14:53:15.024: INFO: Deleting ReplicationController wrapped-volume-race-f436562c-802a-40fb-bf9f-9d99fa00b99c took: 21.794948ms
Feb 10 14:53:15.525: INFO: Terminating ReplicationController wrapped-volume-race-f436562c-802a-40fb-bf9f-9d99fa00b99c pods took: 500.706869ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:54:08.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-5170" for this suite.
Feb 10 14:54:22.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:54:22.903: INFO: namespace emptydir-wrapper-5170 deletion completed in 14.178145934s

• [SLOW TEST:384.099 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:54:22.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 10 14:54:54.438: INFO: Successfully updated pod "pod-update-activedeadlineseconds-a1fa5cfd-5d14-43b8-ac85-76a1d290a064"
Feb 10 14:54:54.438: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-a1fa5cfd-5d14-43b8-ac85-76a1d290a064" in namespace "pods-8138" to be "terminated due to deadline exceeded"
Feb 10 14:54:54.445: INFO: Pod "pod-update-activedeadlineseconds-a1fa5cfd-5d14-43b8-ac85-76a1d290a064": Phase="Running", Reason="", readiness=true. Elapsed: 6.948743ms
Feb 10 14:54:56.457: INFO: Pod "pod-update-activedeadlineseconds-a1fa5cfd-5d14-43b8-ac85-76a1d290a064": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.018928154s
Feb 10 14:54:56.457: INFO: Pod "pod-update-activedeadlineseconds-a1fa5cfd-5d14-43b8-ac85-76a1d290a064" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:54:56.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8138" for this suite.
Feb 10 14:55:04.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:55:04.660: INFO: namespace pods-8138 deletion completed in 8.196046001s

• [SLOW TEST:41.756 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:55:04.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-9845
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 10 14:55:04.836: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 10 14:55:58.289: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9845 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 10 14:55:58.289: INFO: >>> kubeConfig: /root/.kube/config
I0210 14:55:58.380241       8 log.go:172] (0xc000d20840) (0xc002305180) Create stream
I0210 14:55:58.380342       8 log.go:172] (0xc000d20840) (0xc002305180) Stream added, broadcasting: 1
I0210 14:55:58.391733       8 log.go:172] (0xc000d20840) Reply frame received for 1
I0210 14:55:58.391782       8 log.go:172] (0xc000d20840) (0xc0013c1680) Create stream
I0210 14:55:58.391795       8 log.go:172] (0xc000d20840) (0xc0013c1680) Stream added, broadcasting: 3
I0210 14:55:58.393855       8 log.go:172] (0xc000d20840) Reply frame received for 3
I0210 14:55:58.393886       8 log.go:172] (0xc000d20840) (0xc0018b03c0) Create stream
I0210 14:55:58.393897       8 log.go:172] (0xc000d20840) (0xc0018b03c0) Stream added, broadcasting: 5
I0210 14:55:58.401061       8 log.go:172] (0xc000d20840) Reply frame received for 5
I0210 14:55:59.658230       8 log.go:172] (0xc000d20840) Data frame received for 3
I0210 14:55:59.658304       8 log.go:172] (0xc0013c1680) (3) Data frame handling
I0210 14:55:59.658390       8 log.go:172] (0xc0013c1680) (3) Data frame sent
I0210 14:55:59.826759       8 log.go:172] (0xc000d20840) Data frame received for 1
I0210 14:55:59.826867       8 log.go:172] (0xc002305180) (1) Data frame handling
I0210 14:55:59.826896       8 log.go:172] (0xc002305180) (1) Data frame sent
I0210 14:55:59.826923       8 log.go:172] (0xc000d20840) (0xc002305180) Stream removed, broadcasting: 1
I0210 14:55:59.827634       8 log.go:172] (0xc000d20840) (0xc0013c1680) Stream removed, broadcasting: 3
I0210 14:55:59.827814       8 log.go:172] (0xc000d20840) (0xc0018b03c0) Stream removed, broadcasting: 5
I0210 14:55:59.827951       8 log.go:172] (0xc000d20840) (0xc002305180) Stream removed, broadcasting: 1
I0210 14:55:59.828017       8 log.go:172] (0xc000d20840) (0xc0013c1680) Stream removed, broadcasting: 3
I0210 14:55:59.828066       8 log.go:172] (0xc000d20840) (0xc0018b03c0) Stream removed, broadcasting: 5
Feb 10 14:55:59.828: INFO: Found all expected endpoints: [netserver-0]
I0210 14:55:59.828559       8 log.go:172] (0xc000d20840) Go away received
Feb 10 14:55:59.839: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9845 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 10 14:55:59.839: INFO: >>> kubeConfig: /root/.kube/config
I0210 14:55:59.928884       8 log.go:172] (0xc000e002c0) (0xc001091d60) Create stream
I0210 14:55:59.928966       8 log.go:172] (0xc000e002c0) (0xc001091d60) Stream added, broadcasting: 1
I0210 14:55:59.936883       8 log.go:172] (0xc000e002c0) Reply frame received for 1
I0210 14:55:59.936995       8 log.go:172] (0xc000e002c0) (0xc001091f40) Create stream
I0210 14:55:59.937011       8 log.go:172] (0xc000e002c0) (0xc001091f40) Stream added, broadcasting: 3
I0210 14:55:59.940520       8 log.go:172] (0xc000e002c0) Reply frame received for 3
I0210 14:55:59.940559       8 log.go:172] (0xc000e002c0) (0xc0014b2280) Create stream
I0210 14:55:59.940574       8 log.go:172] (0xc000e002c0) (0xc0014b2280) Stream added, broadcasting: 5
I0210 14:55:59.941962       8 log.go:172] (0xc000e002c0) Reply frame received for 5
I0210 14:56:01.114067       8 log.go:172] (0xc000e002c0) Data frame received for 3
I0210 14:56:01.114130       8 log.go:172] (0xc001091f40) (3) Data frame handling
I0210 14:56:01.114168       8 log.go:172] (0xc001091f40) (3) Data frame sent
I0210 14:56:01.293852       8 log.go:172] (0xc000e002c0) Data frame received for 1
I0210 14:56:01.294048       8 log.go:172] (0xc000e002c0) (0xc001091f40) Stream removed, broadcasting: 3
I0210 14:56:01.294144       8 log.go:172] (0xc001091d60) (1) Data frame handling
I0210 14:56:01.294172       8 log.go:172] (0xc001091d60) (1) Data frame sent
I0210 14:56:01.294187       8 log.go:172] (0xc000e002c0) (0xc001091d60) Stream removed, broadcasting: 1
I0210 14:56:01.294541       8 log.go:172] (0xc000e002c0) (0xc0014b2280) Stream removed, broadcasting: 5
I0210 14:56:01.294731       8 log.go:172] (0xc000e002c0) (0xc001091d60) Stream removed, broadcasting: 1
I0210 14:56:01.294743       8 log.go:172] (0xc000e002c0) (0xc001091f40) Stream removed, broadcasting: 3
I0210 14:56:01.294757       8 log.go:172] (0xc000e002c0) (0xc0014b2280) Stream removed, broadcasting: 5
I0210 14:56:01.295433       8 log.go:172] (0xc000e002c0) Go away received
Feb 10 14:56:01.295: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 14:56:01.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9845" for this suite.
Feb 10 14:56:29.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 14:56:29.570: INFO: namespace pod-network-test-9845 deletion completed in 28.264007597s

• [SLOW TEST:84.910 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 14:56:29.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-096a2b62-1cdb-45db-b218-4bfbedb83591 in namespace container-probe-1989
Feb 10 14:56:51.764: INFO: Started pod test-webserver-096a2b62-1cdb-45db-b218-4bfbedb83591 in namespace container-probe-1989
STEP: checking the pod's current state and verifying that restartCount is present
Feb 10 14:56:51.768: INFO: Initial restart count of pod test-webserver-096a2b62-1cdb-45db-b218-4bfbedb83591 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 15:00:52.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1989" for this suite.
Feb 10 15:00:58.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 15:00:59.093: INFO: namespace container-probe-1989 deletion completed in 6.250517707s

• [SLOW TEST:269.523 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 15:00:59.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 15:01:04.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4632" for this suite.
Feb 10 15:01:10.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 15:01:10.869: INFO: namespace watch-4632 deletion completed in 6.214209938s

• [SLOW TEST:11.775 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 15:01:10.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-7ec25c28-598b-4fc0-995e-5bc5831b8b5e
STEP: Creating a pod to test consume configMaps
Feb 10 15:01:11.926: INFO: Waiting up to 5m0s for pod "pod-configmaps-d9a2120a-9d63-4ac8-8ee5-04b64b045f0f" in namespace "configmap-4893" to be "success or failure"
Feb 10 15:01:11.936: INFO: Pod "pod-configmaps-d9a2120a-9d63-4ac8-8ee5-04b64b045f0f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.418168ms
Feb 10 15:01:13.946: INFO: Pod "pod-configmaps-d9a2120a-9d63-4ac8-8ee5-04b64b045f0f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019651725s
Feb 10 15:01:15.952: INFO: Pod "pod-configmaps-d9a2120a-9d63-4ac8-8ee5-04b64b045f0f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026120004s
Feb 10 15:01:17.993: INFO: Pod "pod-configmaps-d9a2120a-9d63-4ac8-8ee5-04b64b045f0f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067232026s
Feb 10 15:01:20.010: INFO: Pod "pod-configmaps-d9a2120a-9d63-4ac8-8ee5-04b64b045f0f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.083453825s
Feb 10 15:01:22.030: INFO: Pod "pod-configmaps-d9a2120a-9d63-4ac8-8ee5-04b64b045f0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.104312859s
STEP: Saw pod success
Feb 10 15:01:22.031: INFO: Pod "pod-configmaps-d9a2120a-9d63-4ac8-8ee5-04b64b045f0f" satisfied condition "success or failure"
Feb 10 15:01:22.040: INFO: Trying to get logs from node iruya-node pod pod-configmaps-d9a2120a-9d63-4ac8-8ee5-04b64b045f0f container configmap-volume-test: 
STEP: delete the pod
Feb 10 15:01:22.297: INFO: Waiting for pod pod-configmaps-d9a2120a-9d63-4ac8-8ee5-04b64b045f0f to disappear
Feb 10 15:01:22.311: INFO: Pod pod-configmaps-d9a2120a-9d63-4ac8-8ee5-04b64b045f0f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 15:01:22.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4893" for this suite.
Feb 10 15:01:28.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 15:01:28.498: INFO: namespace configmap-4893 deletion completed in 6.177349826s

• [SLOW TEST:17.629 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 15:01:28.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 15:01:28.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2220" for this suite.
Feb 10 15:01:34.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 15:01:34.774: INFO: namespace services-2220 deletion completed in 6.186171574s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.275 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 15:01:34.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-07f747b5-5313-49b0-99a6-da96c35bca7b
STEP: Creating a pod to test consume secrets
Feb 10 15:01:34.952: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-850d97fe-fca1-42a2-92fe-b06b44786c53" in namespace "projected-4733" to be "success or failure"
Feb 10 15:01:35.016: INFO: Pod "pod-projected-secrets-850d97fe-fca1-42a2-92fe-b06b44786c53": Phase="Pending", Reason="", readiness=false. Elapsed: 64.039428ms
Feb 10 15:01:37.027: INFO: Pod "pod-projected-secrets-850d97fe-fca1-42a2-92fe-b06b44786c53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0754573s
Feb 10 15:01:39.062: INFO: Pod "pod-projected-secrets-850d97fe-fca1-42a2-92fe-b06b44786c53": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109889344s
Feb 10 15:01:41.069: INFO: Pod "pod-projected-secrets-850d97fe-fca1-42a2-92fe-b06b44786c53": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117141954s
Feb 10 15:01:43.086: INFO: Pod "pod-projected-secrets-850d97fe-fca1-42a2-92fe-b06b44786c53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.133771802s
STEP: Saw pod success
Feb 10 15:01:43.086: INFO: Pod "pod-projected-secrets-850d97fe-fca1-42a2-92fe-b06b44786c53" satisfied condition "success or failure"
Feb 10 15:01:43.100: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-850d97fe-fca1-42a2-92fe-b06b44786c53 container projected-secret-volume-test: 
STEP: delete the pod
Feb 10 15:01:43.186: INFO: Waiting for pod pod-projected-secrets-850d97fe-fca1-42a2-92fe-b06b44786c53 to disappear
Feb 10 15:01:43.199: INFO: Pod pod-projected-secrets-850d97fe-fca1-42a2-92fe-b06b44786c53 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 15:01:43.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4733" for this suite.
Feb 10 15:01:49.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 15:01:49.353: INFO: namespace projected-4733 deletion completed in 6.148156327s

• [SLOW TEST:14.578 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 15:01:49.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-846
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb 10 15:01:49.476: INFO: Found 0 stateful pods, waiting for 3
Feb 10 15:01:59.489: INFO: Found 2 stateful pods, waiting for 3
Feb 10 15:02:09.485: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 10 15:02:09.485: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 10 15:02:09.485: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 10 15:02:19.486: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 10 15:02:19.486: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 10 15:02:19.486: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 10 15:02:19.526: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb 10 15:02:29.631: INFO: Updating stateful set ss2
Feb 10 15:02:29.668: INFO: Waiting for Pod statefulset-846/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Feb 10 15:02:40.123: INFO: Found 2 stateful pods, waiting for 3
Feb 10 15:02:50.147: INFO: Found 2 stateful pods, waiting for 3
Feb 10 15:03:00.137: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 10 15:03:00.137: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 10 15:03:00.137: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb 10 15:03:00.164: INFO: Updating stateful set ss2
Feb 10 15:03:00.265: INFO: Waiting for Pod statefulset-846/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 10 15:03:10.279: INFO: Waiting for Pod statefulset-846/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 10 15:03:20.305: INFO: Updating stateful set ss2
Feb 10 15:03:20.325: INFO: Waiting for StatefulSet statefulset-846/ss2 to complete update
Feb 10 15:03:20.325: INFO: Waiting for Pod statefulset-846/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 10 15:03:30.343: INFO: Waiting for StatefulSet statefulset-846/ss2 to complete update
Feb 10 15:03:30.343: INFO: Waiting for Pod statefulset-846/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 10 15:03:40.345: INFO: Waiting for StatefulSet statefulset-846/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 10 15:03:50.338: INFO: Deleting all statefulset in ns statefulset-846
Feb 10 15:03:50.342: INFO: Scaling statefulset ss2 to 0
Feb 10 15:04:30.374: INFO: Waiting for statefulset status.replicas updated to 0
Feb 10 15:04:30.379: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 15:04:30.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-846" for this suite.
Feb 10 15:04:38.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 15:04:38.605: INFO: namespace statefulset-846 deletion completed in 8.17923955s

• [SLOW TEST:169.252 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 15:04:38.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Feb 10 15:04:38.731: INFO: Waiting up to 5m0s for pod "client-containers-6fa21542-9c7a-4e9e-aa1b-1a7797c8e33c" in namespace "containers-3667" to be "success or failure"
Feb 10 15:04:38.741: INFO: Pod "client-containers-6fa21542-9c7a-4e9e-aa1b-1a7797c8e33c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.323606ms
Feb 10 15:04:40.753: INFO: Pod "client-containers-6fa21542-9c7a-4e9e-aa1b-1a7797c8e33c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021905843s
Feb 10 15:04:42.764: INFO: Pod "client-containers-6fa21542-9c7a-4e9e-aa1b-1a7797c8e33c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033741532s
Feb 10 15:04:44.780: INFO: Pod "client-containers-6fa21542-9c7a-4e9e-aa1b-1a7797c8e33c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049423378s
Feb 10 15:04:46.790: INFO: Pod "client-containers-6fa21542-9c7a-4e9e-aa1b-1a7797c8e33c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059755384s
Feb 10 15:04:48.800: INFO: Pod "client-containers-6fa21542-9c7a-4e9e-aa1b-1a7797c8e33c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068938112s
STEP: Saw pod success
Feb 10 15:04:48.800: INFO: Pod "client-containers-6fa21542-9c7a-4e9e-aa1b-1a7797c8e33c" satisfied condition "success or failure"
Feb 10 15:04:48.803: INFO: Trying to get logs from node iruya-node pod client-containers-6fa21542-9c7a-4e9e-aa1b-1a7797c8e33c container test-container: 
STEP: delete the pod
Feb 10 15:04:48.965: INFO: Waiting for pod client-containers-6fa21542-9c7a-4e9e-aa1b-1a7797c8e33c to disappear
Feb 10 15:04:48.977: INFO: Pod client-containers-6fa21542-9c7a-4e9e-aa1b-1a7797c8e33c no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 15:04:48.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3667" for this suite.
Feb 10 15:04:55.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 15:04:55.183: INFO: namespace containers-3667 deletion completed in 6.202465223s

• [SLOW TEST:16.578 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 15:04:55.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-7621/configmap-test-2787594a-5c57-42ae-ae95-0332be4abbd8
STEP: Creating a pod to test consume configMaps
Feb 10 15:04:55.446: INFO: Waiting up to 5m0s for pod "pod-configmaps-18b70179-f467-4c12-b344-1001819d64c8" in namespace "configmap-7621" to be "success or failure"
Feb 10 15:04:55.462: INFO: Pod "pod-configmaps-18b70179-f467-4c12-b344-1001819d64c8": Phase="Pending", Reason="", readiness=false. Elapsed: 15.904652ms
Feb 10 15:04:57.469: INFO: Pod "pod-configmaps-18b70179-f467-4c12-b344-1001819d64c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022673538s
Feb 10 15:04:59.562: INFO: Pod "pod-configmaps-18b70179-f467-4c12-b344-1001819d64c8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115755783s
Feb 10 15:05:01.576: INFO: Pod "pod-configmaps-18b70179-f467-4c12-b344-1001819d64c8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129674434s
Feb 10 15:05:03.582: INFO: Pod "pod-configmaps-18b70179-f467-4c12-b344-1001819d64c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.135700741s
STEP: Saw pod success
Feb 10 15:05:03.582: INFO: Pod "pod-configmaps-18b70179-f467-4c12-b344-1001819d64c8" satisfied condition "success or failure"
Feb 10 15:05:03.585: INFO: Trying to get logs from node iruya-node pod pod-configmaps-18b70179-f467-4c12-b344-1001819d64c8 container env-test: 
STEP: delete the pod
Feb 10 15:05:03.743: INFO: Waiting for pod pod-configmaps-18b70179-f467-4c12-b344-1001819d64c8 to disappear
Feb 10 15:05:03.756: INFO: Pod pod-configmaps-18b70179-f467-4c12-b344-1001819d64c8 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 15:05:03.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7621" for this suite.
Feb 10 15:05:09.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 15:05:09.933: INFO: namespace configmap-7621 deletion completed in 6.171205608s

• [SLOW TEST:14.748 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 15:05:09.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 10 15:05:09.982: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 15:05:23.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5940" for this suite.
Feb 10 15:05:29.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 15:05:29.256: INFO: namespace init-container-5940 deletion completed in 6.142960603s

• [SLOW TEST:19.322 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 15:05:29.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 10 15:05:29.328: INFO: Creating ReplicaSet my-hostname-basic-b0a2a1bd-e82e-46a9-a332-1572fb0544c6
Feb 10 15:05:29.338: INFO: Pod name my-hostname-basic-b0a2a1bd-e82e-46a9-a332-1572fb0544c6: Found 0 pods out of 1
Feb 10 15:05:34.354: INFO: Pod name my-hostname-basic-b0a2a1bd-e82e-46a9-a332-1572fb0544c6: Found 1 pods out of 1
Feb 10 15:05:34.354: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-b0a2a1bd-e82e-46a9-a332-1572fb0544c6" is running
Feb 10 15:05:38.367: INFO: Pod "my-hostname-basic-b0a2a1bd-e82e-46a9-a332-1572fb0544c6-2ws52" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-10 15:05:29 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-10 15:05:29 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-b0a2a1bd-e82e-46a9-a332-1572fb0544c6]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-10 15:05:29 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-b0a2a1bd-e82e-46a9-a332-1572fb0544c6]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-10 15:05:29 +0000 UTC Reason: Message:}])
Feb 10 15:05:38.367: INFO: Trying to dial the pod
Feb 10 15:05:43.715: INFO: Controller my-hostname-basic-b0a2a1bd-e82e-46a9-a332-1572fb0544c6: Got expected result from replica 1 [my-hostname-basic-b0a2a1bd-e82e-46a9-a332-1572fb0544c6-2ws52]: "my-hostname-basic-b0a2a1bd-e82e-46a9-a332-1572fb0544c6-2ws52", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 15:05:43.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6914" for this suite.
Feb 10 15:05:49.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 15:05:49.928: INFO: namespace replicaset-6914 deletion completed in 6.203763733s

• [SLOW TEST:20.672 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 15:05:49.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-26fabcfe-2aed-4bda-8d76-84af19f90ea6
STEP: Creating secret with name s-test-opt-upd-6c56a634-bb62-435b-93d1-fb9a58961b2c
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-26fabcfe-2aed-4bda-8d76-84af19f90ea6
STEP: Updating secret s-test-opt-upd-6c56a634-bb62-435b-93d1-fb9a58961b2c
STEP: Creating secret with name s-test-opt-create-b143e4e2-d662-4648-9eb2-a4decefe1272
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 15:06:04.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4460" for this suite.
Feb 10 15:06:44.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 15:06:44.525: INFO: namespace secrets-4460 deletion completed in 40.157185341s

• [SLOW TEST:54.597 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 15:06:44.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 10 15:06:44.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-9998'
Feb 10 15:06:46.928: INFO: stderr: ""
Feb 10 15:06:46.928: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Feb 10 15:06:47.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-9998'
Feb 10 15:06:49.233: INFO: stderr: ""
Feb 10 15:06:49.233: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 15:06:49.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9998" for this suite.
Feb 10 15:06:55.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 15:06:55.395: INFO: namespace kubectl-9998 deletion completed in 6.153298764s

• [SLOW TEST:10.869 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 15:06:55.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Feb 10 15:06:55.510: INFO: Waiting up to 5m0s for pod "var-expansion-2453b6ef-8814-4afd-aa2f-4a382a42bdc4" in namespace "var-expansion-5105" to be "success or failure"
Feb 10 15:06:55.521: INFO: Pod "var-expansion-2453b6ef-8814-4afd-aa2f-4a382a42bdc4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.900772ms
Feb 10 15:06:57.529: INFO: Pod "var-expansion-2453b6ef-8814-4afd-aa2f-4a382a42bdc4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019011331s
Feb 10 15:06:59.537: INFO: Pod "var-expansion-2453b6ef-8814-4afd-aa2f-4a382a42bdc4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026924329s
Feb 10 15:07:01.546: INFO: Pod "var-expansion-2453b6ef-8814-4afd-aa2f-4a382a42bdc4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036085492s
Feb 10 15:07:03.553: INFO: Pod "var-expansion-2453b6ef-8814-4afd-aa2f-4a382a42bdc4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04330833s
Feb 10 15:07:05.561: INFO: Pod "var-expansion-2453b6ef-8814-4afd-aa2f-4a382a42bdc4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.051181107s
STEP: Saw pod success
Feb 10 15:07:05.561: INFO: Pod "var-expansion-2453b6ef-8814-4afd-aa2f-4a382a42bdc4" satisfied condition "success or failure"
Feb 10 15:07:05.572: INFO: Trying to get logs from node iruya-node pod var-expansion-2453b6ef-8814-4afd-aa2f-4a382a42bdc4 container dapi-container: 
STEP: delete the pod
Feb 10 15:07:05.634: INFO: Waiting for pod var-expansion-2453b6ef-8814-4afd-aa2f-4a382a42bdc4 to disappear
Feb 10 15:07:05.640: INFO: Pod var-expansion-2453b6ef-8814-4afd-aa2f-4a382a42bdc4 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 15:07:05.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5105" for this suite.
Feb 10 15:07:11.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 15:07:11.977: INFO: namespace var-expansion-5105 deletion completed in 6.26291244s

• [SLOW TEST:16.582 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 15:07:11.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 10 15:07:12.134: INFO: Waiting up to 5m0s for pod "pod-09839dd8-26bd-453b-8959-318569fb6cb3" in namespace "emptydir-5513" to be "success or failure"
Feb 10 15:07:12.156: INFO: Pod "pod-09839dd8-26bd-453b-8959-318569fb6cb3": Phase="Pending", Reason="", readiness=false. Elapsed: 22.335406ms
Feb 10 15:07:14.169: INFO: Pod "pod-09839dd8-26bd-453b-8959-318569fb6cb3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034954205s
Feb 10 15:07:16.175: INFO: Pod "pod-09839dd8-26bd-453b-8959-318569fb6cb3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040888551s
Feb 10 15:07:18.182: INFO: Pod "pod-09839dd8-26bd-453b-8959-318569fb6cb3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048427599s
Feb 10 15:07:20.192: INFO: Pod "pod-09839dd8-26bd-453b-8959-318569fb6cb3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058284028s
Feb 10 15:07:22.198: INFO: Pod "pod-09839dd8-26bd-453b-8959-318569fb6cb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.064302001s
STEP: Saw pod success
Feb 10 15:07:22.198: INFO: Pod "pod-09839dd8-26bd-453b-8959-318569fb6cb3" satisfied condition "success or failure"
Feb 10 15:07:22.201: INFO: Trying to get logs from node iruya-node pod pod-09839dd8-26bd-453b-8959-318569fb6cb3 container test-container: 
STEP: delete the pod
Feb 10 15:07:22.282: INFO: Waiting for pod pod-09839dd8-26bd-453b-8959-318569fb6cb3 to disappear
Feb 10 15:07:22.293: INFO: Pod pod-09839dd8-26bd-453b-8959-318569fb6cb3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 15:07:22.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5513" for this suite.
Feb 10 15:07:28.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 15:07:28.475: INFO: namespace emptydir-5513 deletion completed in 6.174535165s

• [SLOW TEST:16.497 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 15:07:28.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0210 15:07:38.747102       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 10 15:07:38.747: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 15:07:38.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5525" for this suite.
Feb 10 15:07:44.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 15:07:44.873: INFO: namespace gc-5525 deletion completed in 6.120030838s

• [SLOW TEST:16.399 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 15:07:44.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 10 15:07:45.023: INFO: Waiting up to 5m0s for pod "pod-51b36af0-e048-4b68-b952-7f455ac96349" in namespace "emptydir-4084" to be "success or failure"
Feb 10 15:07:45.053: INFO: Pod "pod-51b36af0-e048-4b68-b952-7f455ac96349": Phase="Pending", Reason="", readiness=false. Elapsed: 29.316396ms
Feb 10 15:07:47.061: INFO: Pod "pod-51b36af0-e048-4b68-b952-7f455ac96349": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037311629s
Feb 10 15:07:49.069: INFO: Pod "pod-51b36af0-e048-4b68-b952-7f455ac96349": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045905706s
Feb 10 15:07:51.075: INFO: Pod "pod-51b36af0-e048-4b68-b952-7f455ac96349": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051222305s
Feb 10 15:07:53.080: INFO: Pod "pod-51b36af0-e048-4b68-b952-7f455ac96349": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.056790645s
STEP: Saw pod success
Feb 10 15:07:53.080: INFO: Pod "pod-51b36af0-e048-4b68-b952-7f455ac96349" satisfied condition "success or failure"
Feb 10 15:07:53.082: INFO: Trying to get logs from node iruya-node pod pod-51b36af0-e048-4b68-b952-7f455ac96349 container test-container: 
STEP: delete the pod
Feb 10 15:07:53.140: INFO: Waiting for pod pod-51b36af0-e048-4b68-b952-7f455ac96349 to disappear
Feb 10 15:07:53.149: INFO: Pod pod-51b36af0-e048-4b68-b952-7f455ac96349 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 15:07:53.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4084" for this suite.
Feb 10 15:07:59.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 15:07:59.314: INFO: namespace emptydir-4084 deletion completed in 6.160227062s

• [SLOW TEST:14.440 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 15:07:59.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 15:08:09.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8136" for this suite.
Feb 10 15:09:01.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 15:09:01.683: INFO: namespace kubelet-test-8136 deletion completed in 52.1903134s

• [SLOW TEST:62.367 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 15:09:01.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Feb 10 15:09:01.882: INFO: Waiting up to 5m0s for pod "var-expansion-931857f5-f4e4-494c-9bd6-b52bc2b9fca6" in namespace "var-expansion-4859" to be "success or failure"
Feb 10 15:09:01.921: INFO: Pod "var-expansion-931857f5-f4e4-494c-9bd6-b52bc2b9fca6": Phase="Pending", Reason="", readiness=false. Elapsed: 39.555426ms
Feb 10 15:09:03.935: INFO: Pod "var-expansion-931857f5-f4e4-494c-9bd6-b52bc2b9fca6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053308913s
Feb 10 15:09:05.946: INFO: Pod "var-expansion-931857f5-f4e4-494c-9bd6-b52bc2b9fca6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064077904s
Feb 10 15:09:07.956: INFO: Pod "var-expansion-931857f5-f4e4-494c-9bd6-b52bc2b9fca6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073726412s
Feb 10 15:09:09.969: INFO: Pod "var-expansion-931857f5-f4e4-494c-9bd6-b52bc2b9fca6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.087564206s
STEP: Saw pod success
Feb 10 15:09:09.969: INFO: Pod "var-expansion-931857f5-f4e4-494c-9bd6-b52bc2b9fca6" satisfied condition "success or failure"
Feb 10 15:09:09.974: INFO: Trying to get logs from node iruya-node pod var-expansion-931857f5-f4e4-494c-9bd6-b52bc2b9fca6 container dapi-container: 
STEP: delete the pod
Feb 10 15:09:10.120: INFO: Waiting for pod var-expansion-931857f5-f4e4-494c-9bd6-b52bc2b9fca6 to disappear
Feb 10 15:09:10.129: INFO: Pod var-expansion-931857f5-f4e4-494c-9bd6-b52bc2b9fca6 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 15:09:10.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4859" for this suite.
Feb 10 15:09:18.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 15:09:18.290: INFO: namespace var-expansion-4859 deletion completed in 8.152763381s

• [SLOW TEST:16.607 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 15:09:18.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 10 15:09:18.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-4391'
Feb 10 15:09:18.585: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 10 15:09:18.586: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Feb 10 15:09:18.603: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Feb 10 15:09:18.630: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb 10 15:09:18.646: INFO: scanned /root for discovery docs: 
Feb 10 15:09:18.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-4391'
Feb 10 15:09:40.934: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 10 15:09:40.934: INFO: stdout: "Created e2e-test-nginx-rc-039445fe928dafcb3539958744377b0e\nScaling up e2e-test-nginx-rc-039445fe928dafcb3539958744377b0e from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-039445fe928dafcb3539958744377b0e up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-039445fe928dafcb3539958744377b0e to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Feb 10 15:09:40.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-4391'
Feb 10 15:09:41.106: INFO: stderr: ""
Feb 10 15:09:41.106: INFO: stdout: "e2e-test-nginx-rc-039445fe928dafcb3539958744377b0e-b6nvb e2e-test-nginx-rc-cfc44 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb 10 15:09:46.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-4391'
Feb 10 15:09:46.277: INFO: stderr: ""
Feb 10 15:09:46.277: INFO: stdout: "e2e-test-nginx-rc-039445fe928dafcb3539958744377b0e-b6nvb e2e-test-nginx-rc-cfc44 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb 10 15:09:51.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-4391'
Feb 10 15:09:51.431: INFO: stderr: ""
Feb 10 15:09:51.431: INFO: stdout: "e2e-test-nginx-rc-039445fe928dafcb3539958744377b0e-b6nvb "
Feb 10 15:09:51.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-039445fe928dafcb3539958744377b0e-b6nvb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4391'
Feb 10 15:09:51.632: INFO: stderr: ""
Feb 10 15:09:51.632: INFO: stdout: "true"
Feb 10 15:09:51.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-039445fe928dafcb3539958744377b0e-b6nvb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4391'
Feb 10 15:09:51.720: INFO: stderr: ""
Feb 10 15:09:51.720: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Feb 10 15:09:51.720: INFO: e2e-test-nginx-rc-039445fe928dafcb3539958744377b0e-b6nvb is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Feb 10 15:09:51.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-4391'
Feb 10 15:09:51.893: INFO: stderr: ""
Feb 10 15:09:51.893: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 15:09:51.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4391" for this suite.
Feb 10 15:10:14.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 15:10:14.128: INFO: namespace kubectl-4391 deletion completed in 22.224620632s

• [SLOW TEST:55.836 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 15:10:14.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-2863
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2863 to expose endpoints map[]
Feb 10 15:10:14.383: INFO: Get endpoints failed (15.401516ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb 10 15:10:15.391: INFO: successfully validated that service multi-endpoint-test in namespace services-2863 exposes endpoints map[] (1.023227483s elapsed)
STEP: Creating pod pod1 in namespace services-2863
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2863 to expose endpoints map[pod1:[100]]
Feb 10 15:10:19.612: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.205691659s elapsed, will retry)
Feb 10 15:10:25.193: INFO: successfully validated that service multi-endpoint-test in namespace services-2863 exposes endpoints map[pod1:[100]] (9.786723873s elapsed)
STEP: Creating pod pod2 in namespace services-2863
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2863 to expose endpoints map[pod1:[100] pod2:[101]]
Feb 10 15:10:29.659: INFO: Unexpected endpoints: found map[f7ec9f23-7dd3-42d4-bd0a-71ffecb3c766:[100]], expected map[pod1:[100] pod2:[101]] (4.454342658s elapsed, will retry)
Feb 10 15:10:33.107: INFO: successfully validated that service multi-endpoint-test in namespace services-2863 exposes endpoints map[pod1:[100] pod2:[101]] (7.901727095s elapsed)
STEP: Deleting pod pod1 in namespace services-2863
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2863 to expose endpoints map[pod2:[101]]
Feb 10 15:10:34.216: INFO: successfully validated that service multi-endpoint-test in namespace services-2863 exposes endpoints map[pod2:[101]] (1.100469214s elapsed)
STEP: Deleting pod pod2 in namespace services-2863
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2863 to expose endpoints map[]
Feb 10 15:10:35.243: INFO: successfully validated that service multi-endpoint-test in namespace services-2863 exposes endpoints map[] (1.020661744s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 15:10:36.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2863" for this suite.
Feb 10 15:10:59.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 15:10:59.372: INFO: namespace services-2863 deletion completed in 22.391951076s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:45.244 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 15:10:59.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-9l5q
STEP: Creating a pod to test atomic-volume-subpath
Feb 10 15:10:59.508: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-9l5q" in namespace "subpath-3059" to be "success or failure"
Feb 10 15:10:59.517: INFO: Pod "pod-subpath-test-secret-9l5q": Phase="Pending", Reason="", readiness=false. Elapsed: 8.75221ms
Feb 10 15:11:01.527: INFO: Pod "pod-subpath-test-secret-9l5q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01922579s
Feb 10 15:11:03.535: INFO: Pod "pod-subpath-test-secret-9l5q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027107721s
Feb 10 15:11:05.545: INFO: Pod "pod-subpath-test-secret-9l5q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037447577s
Feb 10 15:11:07.554: INFO: Pod "pod-subpath-test-secret-9l5q": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045766442s
Feb 10 15:11:09.562: INFO: Pod "pod-subpath-test-secret-9l5q": Phase="Running", Reason="", readiness=true. Elapsed: 10.054619256s
Feb 10 15:11:11.578: INFO: Pod "pod-subpath-test-secret-9l5q": Phase="Running", Reason="", readiness=true. Elapsed: 12.070575255s
Feb 10 15:11:13.589: INFO: Pod "pod-subpath-test-secret-9l5q": Phase="Running", Reason="", readiness=true. Elapsed: 14.08068568s
Feb 10 15:11:15.598: INFO: Pod "pod-subpath-test-secret-9l5q": Phase="Running", Reason="", readiness=true. Elapsed: 16.090152275s
Feb 10 15:11:17.607: INFO: Pod "pod-subpath-test-secret-9l5q": Phase="Running", Reason="", readiness=true. Elapsed: 18.099032786s
Feb 10 15:11:19.618: INFO: Pod "pod-subpath-test-secret-9l5q": Phase="Running", Reason="", readiness=true. Elapsed: 20.109723658s
Feb 10 15:11:21.644: INFO: Pod "pod-subpath-test-secret-9l5q": Phase="Running", Reason="", readiness=true. Elapsed: 22.136166206s
Feb 10 15:11:23.655: INFO: Pod "pod-subpath-test-secret-9l5q": Phase="Running", Reason="", readiness=true. Elapsed: 24.147334072s
Feb 10 15:11:25.664: INFO: Pod "pod-subpath-test-secret-9l5q": Phase="Running", Reason="", readiness=true. Elapsed: 26.155714114s
Feb 10 15:11:27.670: INFO: Pod "pod-subpath-test-secret-9l5q": Phase="Running", Reason="", readiness=true. Elapsed: 28.162100675s
Feb 10 15:11:29.741: INFO: Pod "pod-subpath-test-secret-9l5q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.233225678s
STEP: Saw pod success
Feb 10 15:11:29.741: INFO: Pod "pod-subpath-test-secret-9l5q" satisfied condition "success or failure"
Feb 10 15:11:29.749: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-9l5q container test-container-subpath-secret-9l5q: 
STEP: delete the pod
Feb 10 15:11:29.829: INFO: Waiting for pod pod-subpath-test-secret-9l5q to disappear
Feb 10 15:11:29.836: INFO: Pod pod-subpath-test-secret-9l5q no longer exists
STEP: Deleting pod pod-subpath-test-secret-9l5q
Feb 10 15:11:29.836: INFO: Deleting pod "pod-subpath-test-secret-9l5q" in namespace "subpath-3059"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 15:11:29.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3059" for this suite.
Feb 10 15:11:35.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 15:11:36.050: INFO: namespace subpath-3059 deletion completed in 6.154950122s

• [SLOW TEST:36.677 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 15:11:36.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb 10 15:11:42.701: INFO: 0 pods remaining
Feb 10 15:11:42.701: INFO: 0 pods has nil DeletionTimestamp
Feb 10 15:11:42.701: INFO: 
STEP: Gathering metrics
W0210 15:11:43.324389       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 10 15:11:43.324: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 15:11:43.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2497" for this suite.
Feb 10 15:11:55.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 15:11:55.764: INFO: namespace gc-2497 deletion completed in 12.431550959s

• [SLOW TEST:19.712 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 15:11:55.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Feb 10 15:11:55.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb 10 15:11:56.130: INFO: stderr: ""
Feb 10 15:11:56.131: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 15:11:56.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6102" for this suite.
Feb 10 15:12:02.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 15:12:02.274: INFO: namespace kubectl-6102 deletion completed in 6.135204995s

• [SLOW TEST:6.510 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
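The `cluster-info` stdout captured above is wrapped in ANSI color escapes (`\x1b[0;32m…\x1b[0m`). When post-processing such logs, the escapes can be stripped with a small regex; a sketch, using a sample string abbreviated from the log output above:

```python
import re

# CSI color sequences such as "\x1b[0;32m" (set color) and "\x1b[0m" (reset).
ANSI_RE = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(text: str) -> str:
    """Remove ANSI color codes so the plain text can be matched or asserted."""
    return ANSI_RE.sub("", text)

sample = ("\x1b[0;32mKubernetes master\x1b[0m is running at "
          "\x1b[0;33mhttps://172.24.4.57:6443\x1b[0m")
print(strip_ansi(sample))
# Kubernetes master is running at https://172.24.4.57:6443
```

The e2e framework itself validates the colored output directly; stripping is only needed when grepping these logs by hand.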
SSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 15:12:02.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Feb 10 15:12:02.372: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-8249" to be "success or failure"
Feb 10 15:12:02.411: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 39.317926ms
Feb 10 15:12:04.422: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050042209s
Feb 10 15:12:06.430: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058552511s
Feb 10 15:12:08.442: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06986527s
Feb 10 15:12:10.449: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077603312s
Feb 10 15:12:12.462: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.090530792s
Feb 10 15:12:14.474: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.102139657s
STEP: Saw pod success
Feb 10 15:12:14.474: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb 10 15:12:14.479: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb 10 15:12:14.543: INFO: Waiting for pod pod-host-path-test to disappear
Feb 10 15:12:14.552: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 15:12:14.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-8249" for this suite.
Feb 10 15:12:20.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 15:12:20.683: INFO: namespace hostpath-8249 deletion completed in 6.123749723s

• [SLOW TEST:18.408 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
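The HostPath test above creates a pod (`pod-host-path-test`, container `test-container-1`, both visible in the log) whose containers inspect the mounted path's file mode. A minimal sketch of such a pod spec; the image, command, and paths are illustrative, not the exact manifest the framework generates:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-test    # pod name matches the log; the rest is a sketch
spec:
  restartPolicy: Never
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp              # directory on the node to expose into the pod
  containers:
  - name: test-container-1    # container name matches the log
    image: busybox            # illustrative; the e2e suite uses a mounttest image
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
```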
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 15:12:20.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb 10 15:12:20.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8440'
Feb 10 15:12:21.431: INFO: stderr: ""
Feb 10 15:12:21.431: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 10 15:12:22.443: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 15:12:22.443: INFO: Found 0 / 1
Feb 10 15:12:23.440: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 15:12:23.440: INFO: Found 0 / 1
Feb 10 15:12:28.150: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 15:12:28.151: INFO: Found 0 / 1
Feb 10 15:12:28.441: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 15:12:28.441: INFO: Found 0 / 1
Feb 10 15:12:29.439: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 15:12:29.439: INFO: Found 0 / 1
Feb 10 15:12:30.438: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 15:12:30.438: INFO: Found 0 / 1
Feb 10 15:12:31.439: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 15:12:31.439: INFO: Found 0 / 1
Feb 10 15:12:32.438: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 15:12:32.438: INFO: Found 1 / 1
Feb 10 15:12:32.438: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb 10 15:12:32.442: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 15:12:32.442: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 10 15:12:32.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-jq6jn --namespace=kubectl-8440 -p {"metadata":{"annotations":{"x":"y"}}}'
Feb 10 15:12:32.607: INFO: stderr: ""
Feb 10 15:12:32.608: INFO: stdout: "pod/redis-master-jq6jn patched\n"
STEP: checking annotations
Feb 10 15:12:32.664: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 15:12:32.664: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 15:12:32.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8440" for this suite.
Feb 10 15:12:58.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 15:12:58.920: INFO: namespace kubectl-8440 deletion completed in 26.249269918s

• [SLOW TEST:38.237 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
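The patch step above can be reproduced by hand. A sketch using the command visible in the log, plus a jsonpath read-back to confirm the annotation (the pod name and namespace are the ones from this particular run and will differ elsewhere):

```shell
# Add the annotation x=y via a strategic-merge patch, as the test does.
kubectl --kubeconfig=/root/.kube/config patch pod redis-master-jq6jn \
  --namespace=kubectl-8440 -p '{"metadata":{"annotations":{"x":"y"}}}'

# Read it back; prints "y" once the patch has applied.
kubectl --kubeconfig=/root/.kube/config get pod redis-master-jq6jn \
  --namespace=kubectl-8440 -o jsonpath='{.metadata.annotations.x}'
```

Both commands require a live cluster, so this is a usage sketch rather than a runnable example.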
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 15:12:58.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-5361, will wait for the garbage collector to delete the pods
Feb 10 15:13:09.401: INFO: Deleting Job.batch foo took: 27.889055ms
Feb 10 15:13:09.702: INFO: Terminating Job.batch foo pods took: 300.500634ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 15:13:56.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5361" for this suite.
Feb 10 15:14:04.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 15:14:04.789: INFO: namespace job-5361 deletion completed in 8.16948181s

• [SLOW TEST:65.867 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 15:14:04.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0210 15:14:45.455817       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 10 15:14:45.455: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 15:14:45.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8278" for this suite.
Feb 10 15:15:05.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 15:15:05.626: INFO: namespace gc-8278 deletion completed in 20.164995603s

• [SLOW TEST:60.837 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
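Orphaning, as exercised above, means deleting the RC with `propagationPolicy: Orphan` so its pods survive the owner's deletion. A hedged kubectl-level sketch (the RC name and label are illustrative; note the flag spelling changed across kubectl versions):

```shell
# In kubectl of this era (v1.15), --cascade=false orphans dependents;
# newer kubectl spells this --cascade=orphan.
kubectl delete rc simpletest.rc --cascade=false   # RC name is illustrative

# The pods the RC created remain and now have no ownerReferences:
kubectl get pods -l name=simpletest                # label is illustrative
```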
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 15:15:05.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 10 15:15:05.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9104'
Feb 10 15:15:06.383: INFO: stderr: ""
Feb 10 15:15:06.383: INFO: stdout: "replicationcontroller/redis-master created\n"
Feb 10 15:15:06.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9104'
Feb 10 15:15:06.996: INFO: stderr: ""
Feb 10 15:15:06.997: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 10 15:15:08.008: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 15:15:08.008: INFO: Found 0 / 1
Feb 10 15:15:09.012: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 15:15:09.012: INFO: Found 0 / 1
Feb 10 15:15:10.016: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 15:15:10.016: INFO: Found 0 / 1
Feb 10 15:15:11.003: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 15:15:11.003: INFO: Found 0 / 1
Feb 10 15:15:12.042: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 15:15:12.042: INFO: Found 0 / 1
Feb 10 15:15:13.008: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 15:15:13.008: INFO: Found 0 / 1
Feb 10 15:15:14.005: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 15:15:14.005: INFO: Found 0 / 1
Feb 10 15:15:15.004: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 15:15:15.004: INFO: Found 0 / 1
Feb 10 15:15:16.005: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 15:15:16.005: INFO: Found 1 / 1
Feb 10 15:15:16.005: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 10 15:15:16.008: INFO: Selector matched 1 pods for map[app:redis]
Feb 10 15:15:16.008: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 10 15:15:16.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-gr6xq --namespace=kubectl-9104'
Feb 10 15:15:16.183: INFO: stderr: ""
Feb 10 15:15:16.183: INFO: stdout: "Name:           redis-master-gr6xq\nNamespace:      kubectl-9104\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Mon, 10 Feb 2020 15:15:06 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    \nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://ba2b5c473ba4a55cccd11cb848fd97c0355b1016e716042a0a3adf6b4f762ca9\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Mon, 10 Feb 2020 15:15:14 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-6qbrb (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-6qbrb:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-6qbrb\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  10s   default-scheduler    Successfully assigned kubectl-9104/redis-master-gr6xq to iruya-node\n  Normal  Pulled     6s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    3s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    2s    kubelet, iruya-node  Started container redis-master\n"
Feb 10 15:15:16.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-9104'
Feb 10 15:15:16.303: INFO: stderr: ""
Feb 10 15:15:16.303: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-9104\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  10s   replication-controller  Created pod: redis-master-gr6xq\n"
Feb 10 15:15:16.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-9104'
Feb 10 15:15:16.453: INFO: stderr: ""
Feb 10 15:15:16.453: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-9104\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.110.177.167\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            \n"
Feb 10 15:15:16.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Feb 10 15:15:16.590: INFO: stderr: ""
Feb 10 15:15:16.590: INFO: stdout: "Name:               iruya-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             \nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Mon, 10 Feb 2020 15:15:07 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Mon, 10 Feb 2020 15:15:07 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Mon, 10 Feb 2020 15:15:07 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Mon, 10 Feb 2020 15:15:07 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         190d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         121d\n  kubectl-9104               redis-master-gr6xq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              \n"
Feb 10 15:15:16.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-9104'
Feb 10 15:15:16.669: INFO: stderr: ""
Feb 10 15:15:16.669: INFO: stdout: "Name:         kubectl-9104\nLabels:       e2e-framework=kubectl\n              e2e-run=e052abc8-8136-43af-a99d-65861881ef71\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 15:15:16.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9104" for this suite.
Feb 10 15:15:38.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 15:15:38.851: INFO: namespace kubectl-9104 deletion completed in 22.175489693s

• [SLOW TEST:33.224 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 15:15:38.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6508
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb 10 15:15:39.035: INFO: Found 0 stateful pods, waiting for 3
Feb 10 15:15:49.050: INFO: Found 2 stateful pods, waiting for 3
Feb 10 15:15:59.042: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 10 15:15:59.042: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 10 15:15:59.042: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 10 15:16:09.050: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 10 15:16:09.050: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 10 15:16:09.050: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb 10 15:16:09.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6508 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 10 15:16:09.449: INFO: stderr: "I0210 15:16:09.247921    3495 log.go:172] (0xc0009460b0) (0xc0009a06e0) Create stream\nI0210 15:16:09.248037    3495 log.go:172] (0xc0009460b0) (0xc0009a06e0) Stream added, broadcasting: 1\nI0210 15:16:09.253157    3495 log.go:172] (0xc0009460b0) Reply frame received for 1\nI0210 15:16:09.253199    3495 log.go:172] (0xc0009460b0) (0xc000574140) Create stream\nI0210 15:16:09.253215    3495 log.go:172] (0xc0009460b0) (0xc000574140) Stream added, broadcasting: 3\nI0210 15:16:09.255071    3495 log.go:172] (0xc0009460b0) Reply frame received for 3\nI0210 15:16:09.255136    3495 log.go:172] (0xc0009460b0) (0xc0006f8000) Create stream\nI0210 15:16:09.255153    3495 log.go:172] (0xc0009460b0) (0xc0006f8000) Stream added, broadcasting: 5\nI0210 15:16:09.256264    3495 log.go:172] (0xc0009460b0) Reply frame received for 5\nI0210 15:16:09.347736    3495 log.go:172] (0xc0009460b0) Data frame received for 5\nI0210 15:16:09.347772    3495 log.go:172] (0xc0006f8000) (5) Data frame handling\nI0210 15:16:09.347794    3495 log.go:172] (0xc0006f8000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0210 15:16:09.376640    3495 log.go:172] (0xc0009460b0) Data frame received for 3\nI0210 15:16:09.376752    3495 log.go:172] (0xc000574140) (3) Data frame handling\nI0210 15:16:09.376770    3495 log.go:172] (0xc000574140) (3) Data frame sent\nI0210 15:16:09.443937    3495 log.go:172] (0xc0009460b0) Data frame received for 1\nI0210 15:16:09.443989    3495 log.go:172] (0xc0009a06e0) (1) Data frame handling\nI0210 15:16:09.444010    3495 log.go:172] (0xc0009a06e0) (1) Data frame sent\nI0210 15:16:09.444023    3495 log.go:172] (0xc0009460b0) (0xc0009a06e0) Stream removed, broadcasting: 1\nI0210 15:16:09.444611    3495 log.go:172] (0xc0009460b0) (0xc000574140) Stream removed, broadcasting: 3\nI0210 15:16:09.444743    3495 log.go:172] (0xc0009460b0) (0xc0006f8000) Stream removed, broadcasting: 5\nI0210 15:16:09.444778    3495 log.go:172] (0xc0009460b0) Go away received\nI0210 15:16:09.445098    3495 log.go:172] (0xc0009460b0) (0xc0009a06e0) Stream removed, broadcasting: 1\nI0210 15:16:09.445148    3495 log.go:172] (0xc0009460b0) (0xc000574140) Stream removed, broadcasting: 3\nI0210 15:16:09.445161    3495 log.go:172] (0xc0009460b0) (0xc0006f8000) Stream removed, broadcasting: 5\n"
Feb 10 15:16:09.450: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 10 15:16:09.450: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 10 15:16:19.546: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb 10 15:16:29.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6508 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 10 15:16:30.250: INFO: stderr: "I0210 15:16:29.963904    3516 log.go:172] (0xc000116790) (0xc0008960a0) Create stream\nI0210 15:16:29.964078    3516 log.go:172] (0xc000116790) (0xc0008960a0) Stream added, broadcasting: 1\nI0210 15:16:29.969683    3516 log.go:172] (0xc000116790) Reply frame received for 1\nI0210 15:16:29.969755    3516 log.go:172] (0xc000116790) (0xc000586320) Create stream\nI0210 15:16:29.969777    3516 log.go:172] (0xc000116790) (0xc000586320) Stream added, broadcasting: 3\nI0210 15:16:29.971456    3516 log.go:172] (0xc000116790) Reply frame received for 3\nI0210 15:16:29.971498    3516 log.go:172] (0xc000116790) (0xc000322000) Create stream\nI0210 15:16:29.971508    3516 log.go:172] (0xc000116790) (0xc000322000) Stream added, broadcasting: 5\nI0210 15:16:29.972466    3516 log.go:172] (0xc000116790) Reply frame received for 5\nI0210 15:16:30.102249    3516 log.go:172] (0xc000116790) Data frame received for 5\nI0210 15:16:30.102351    3516 log.go:172] (0xc000322000) (5) Data frame handling\nI0210 15:16:30.102383    3516 log.go:172] (0xc000322000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0210 15:16:30.104819    3516 log.go:172] (0xc000116790) Data frame received for 3\nI0210 15:16:30.105067    3516 log.go:172] (0xc000586320) (3) Data frame handling\nI0210 15:16:30.105111    3516 log.go:172] (0xc000586320) (3) Data frame sent\nI0210 15:16:30.237773    3516 log.go:172] (0xc000116790) Data frame received for 1\nI0210 15:16:30.238011    3516 log.go:172] (0xc000116790) (0xc000586320) Stream removed, broadcasting: 3\nI0210 15:16:30.238080    3516 log.go:172] (0xc0008960a0) (1) Data frame handling\nI0210 15:16:30.238108    3516 log.go:172] (0xc0008960a0) (1) Data frame sent\nI0210 15:16:30.238256    3516 log.go:172] (0xc000116790) (0xc000322000) Stream removed, broadcasting: 5\nI0210 15:16:30.238428    3516 log.go:172] (0xc000116790) (0xc0008960a0) Stream removed, broadcasting: 1\nI0210 15:16:30.238527    3516 log.go:172] (0xc000116790) Go away received\nI0210 15:16:30.240149    3516 log.go:172] (0xc000116790) (0xc0008960a0) Stream removed, broadcasting: 1\nI0210 15:16:30.240229    3516 log.go:172] (0xc000116790) (0xc000586320) Stream removed, broadcasting: 3\nI0210 15:16:30.240236    3516 log.go:172] (0xc000116790) (0xc000322000) Stream removed, broadcasting: 5\n"
Feb 10 15:16:30.250: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 10 15:16:30.250: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 10 15:16:40.289: INFO: Waiting for StatefulSet statefulset-6508/ss2 to complete update
Feb 10 15:16:40.289: INFO: Waiting for Pod statefulset-6508/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 10 15:16:40.289: INFO: Waiting for Pod statefulset-6508/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 10 15:16:40.289: INFO: Waiting for Pod statefulset-6508/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 10 15:16:50.318: INFO: Waiting for StatefulSet statefulset-6508/ss2 to complete update
Feb 10 15:16:50.319: INFO: Waiting for Pod statefulset-6508/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 10 15:16:50.319: INFO: Waiting for Pod statefulset-6508/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 10 15:17:00.325: INFO: Waiting for StatefulSet statefulset-6508/ss2 to complete update
Feb 10 15:17:00.325: INFO: Waiting for Pod statefulset-6508/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 10 15:17:00.325: INFO: Waiting for Pod statefulset-6508/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 10 15:17:10.389: INFO: Waiting for StatefulSet statefulset-6508/ss2 to complete update
Feb 10 15:17:10.389: INFO: Waiting for Pod statefulset-6508/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 10 15:17:20.315: INFO: Waiting for StatefulSet statefulset-6508/ss2 to complete update
Feb 10 15:17:20.315: INFO: Waiting for Pod statefulset-6508/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 10 15:17:30.341: INFO: Waiting for StatefulSet statefulset-6508/ss2 to complete update
STEP: Rolling back to a previous revision
Feb 10 15:17:40.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6508 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 10 15:17:42.972: INFO: stderr: "I0210 15:17:42.684271    3531 log.go:172] (0xc00060e370) (0xc00060c0a0) Create stream\nI0210 15:17:42.684353    3531 log.go:172] (0xc00060e370) (0xc00060c0a0) Stream added, broadcasting: 1\nI0210 15:17:42.687516    3531 log.go:172] (0xc00060e370) Reply frame received for 1\nI0210 15:17:42.687562    3531 log.go:172] (0xc00060e370) (0xc0005c4280) Create stream\nI0210 15:17:42.687576    3531 log.go:172] (0xc00060e370) (0xc0005c4280) Stream added, broadcasting: 3\nI0210 15:17:42.688640    3531 log.go:172] (0xc00060e370) Reply frame received for 3\nI0210 15:17:42.688678    3531 log.go:172] (0xc00060e370) (0xc0002de000) Create stream\nI0210 15:17:42.688685    3531 log.go:172] (0xc00060e370) (0xc0002de000) Stream added, broadcasting: 5\nI0210 15:17:42.689483    3531 log.go:172] (0xc00060e370) Reply frame received for 5\nI0210 15:17:42.777936    3531 log.go:172] (0xc00060e370) Data frame received for 5\nI0210 15:17:42.778078    3531 log.go:172] (0xc0002de000) (5) Data frame handling\nI0210 15:17:42.778115    3531 log.go:172] (0xc0002de000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0210 15:17:42.837472    3531 log.go:172] (0xc00060e370) Data frame received for 3\nI0210 15:17:42.837747    3531 log.go:172] (0xc0005c4280) (3) Data frame handling\nI0210 15:17:42.837779    3531 log.go:172] (0xc0005c4280) (3) Data frame sent\nI0210 15:17:42.961677    3531 log.go:172] (0xc00060e370) (0xc0005c4280) Stream removed, broadcasting: 3\nI0210 15:17:42.961827    3531 log.go:172] (0xc00060e370) Data frame received for 1\nI0210 15:17:42.961863    3531 log.go:172] (0xc00060c0a0) (1) Data frame handling\nI0210 15:17:42.961905    3531 log.go:172] (0xc00060c0a0) (1) Data frame sent\nI0210 15:17:42.961960    3531 log.go:172] (0xc00060e370) (0xc00060c0a0) Stream removed, broadcasting: 1\nI0210 15:17:42.962144    3531 log.go:172] (0xc00060e370) (0xc0002de000) Stream removed, broadcasting: 5\nI0210 15:17:42.962410    3531 log.go:172] (0xc00060e370) (0xc00060c0a0) Stream removed, broadcasting: 1\nI0210 15:17:42.962440    3531 log.go:172] (0xc00060e370) (0xc0005c4280) Stream removed, broadcasting: 3\nI0210 15:17:42.962467    3531 log.go:172] (0xc00060e370) (0xc0002de000) Stream removed, broadcasting: 5\n"
Feb 10 15:17:42.972: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 10 15:17:42.972: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 10 15:17:53.030: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb 10 15:18:03.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6508 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 10 15:18:03.569: INFO: stderr: "I0210 15:18:03.342616    3561 log.go:172] (0xc000116dc0) (0xc000424640) Create stream\nI0210 15:18:03.342742    3561 log.go:172] (0xc000116dc0) (0xc000424640) Stream added, broadcasting: 1\nI0210 15:18:03.356451    3561 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0210 15:18:03.356656    3561 log.go:172] (0xc000116dc0) (0xc0005f2280) Create stream\nI0210 15:18:03.356688    3561 log.go:172] (0xc000116dc0) (0xc0005f2280) Stream added, broadcasting: 3\nI0210 15:18:03.358671    3561 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0210 15:18:03.358725    3561 log.go:172] (0xc000116dc0) (0xc000898000) Create stream\nI0210 15:18:03.358752    3561 log.go:172] (0xc000116dc0) (0xc000898000) Stream added, broadcasting: 5\nI0210 15:18:03.360443    3561 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0210 15:18:03.498229    3561 log.go:172] (0xc000116dc0) Data frame received for 3\nI0210 15:18:03.498375    3561 log.go:172] (0xc0005f2280) (3) Data frame handling\nI0210 15:18:03.498393    3561 log.go:172] (0xc0005f2280) (3) Data frame sent\nI0210 15:18:03.498578    3561 log.go:172] (0xc000116dc0) Data frame received for 5\nI0210 15:18:03.498600    3561 log.go:172] (0xc000898000) (5) Data frame handling\nI0210 15:18:03.498621    3561 log.go:172] (0xc000898000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0210 15:18:03.564284    3561 log.go:172] (0xc000116dc0) (0xc0005f2280) Stream removed, broadcasting: 3\nI0210 15:18:03.564321    3561 log.go:172] (0xc000116dc0) Data frame received for 1\nI0210 15:18:03.564339    3561 log.go:172] (0xc000116dc0) (0xc000898000) Stream removed, broadcasting: 5\nI0210 15:18:03.564364    3561 log.go:172] (0xc000424640) (1) Data frame handling\nI0210 15:18:03.564381    3561 log.go:172] (0xc000424640) (1) Data frame sent\nI0210 15:18:03.564388    3561 log.go:172] (0xc000116dc0) (0xc000424640) Stream removed, broadcasting: 1\nI0210 15:18:03.564404    3561 log.go:172] (0xc000116dc0) Go away received\nI0210 15:18:03.564821    3561 log.go:172] (0xc000116dc0) (0xc000424640) Stream removed, broadcasting: 1\nI0210 15:18:03.564838    3561 log.go:172] (0xc000116dc0) (0xc0005f2280) Stream removed, broadcasting: 3\nI0210 15:18:03.564846    3561 log.go:172] (0xc000116dc0) (0xc000898000) Stream removed, broadcasting: 5\n"
Feb 10 15:18:03.569: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 10 15:18:03.569: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

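[Editor's note] The `mv` invocations logged above are the spec's gating trick: the nginx pods' readiness probe serves `/usr/share/nginx/html/index.html`, so moving that file out of the web root marks a pod NotReady and halts the rolling update at that ordinal, while moving it back lets the rollout proceed. A minimal local simulation of the mechanism (no cluster needed; all paths and names here are illustrative, not taken from the test):

```shell
# Simulate the readiness gate: "Ready" only while index.html is in the web root.
webroot=$(mktemp -d)
stash=$(mktemp -d)
echo ok > "$webroot/index.html"

ready() { [ -f "$webroot/index.html" ] && echo Ready || echo NotReady; }

ready                                # file is served -> Ready
mv "$webroot/index.html" "$stash/"   # probe fails -> rollout would pause here
ready                                # NotReady
mv "$stash/index.html" "$webroot/"   # restore -> rollout would resume
ready                                # Ready
```

Against a live cluster, the equivalent is exactly what the log shows: `kubectl exec` into each pod, in reverse ordinal order, running `mv -v ... || true`.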
Feb 10 15:18:13.652: INFO: Waiting for StatefulSet statefulset-6508/ss2 to complete update
Feb 10 15:18:13.652: INFO: Waiting for Pod statefulset-6508/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 10 15:18:13.652: INFO: Waiting for Pod statefulset-6508/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 10 15:18:23.722: INFO: Waiting for StatefulSet statefulset-6508/ss2 to complete update
Feb 10 15:18:23.722: INFO: Waiting for Pod statefulset-6508/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 10 15:18:23.722: INFO: Waiting for Pod statefulset-6508/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 10 15:18:33.718: INFO: Waiting for StatefulSet statefulset-6508/ss2 to complete update
Feb 10 15:18:33.718: INFO: Waiting for Pod statefulset-6508/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 10 15:18:43.667: INFO: Waiting for StatefulSet statefulset-6508/ss2 to complete update
Feb 10 15:18:43.667: INFO: Waiting for Pod statefulset-6508/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 10 15:18:53.667: INFO: Waiting for StatefulSet statefulset-6508/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 10 15:19:03.670: INFO: Deleting all statefulset in ns statefulset-6508
Feb 10 15:19:03.674: INFO: Scaling statefulset ss2 to 0
Feb 10 15:19:34.294: INFO: Waiting for statefulset status.replicas updated to 0
Feb 10 15:19:34.299: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 15:19:34.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6508" for this suite.
Feb 10 15:19:42.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 15:19:42.488: INFO: namespace statefulset-6508 deletion completed in 8.14725756s

• [SLOW TEST:243.635 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 15:19:42.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-026c1796-1091-4868-ae27-1ee4a6a71afa
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-026c1796-1091-4868-ae27-1ee4a6a71afa
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 15:20:56.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3925" for this suite.
Feb 10 15:21:18.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 15:21:18.323: INFO: namespace configmap-3925 deletion completed in 22.211280314s

• [SLOW TEST:95.835 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
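[Editor's note] The ConfigMap-volume specs that follow rely on how the kubelet projects ConfigMap data into a mounted volume: the visible keys are symlinks through an internal `..data` link to a timestamped payload directory, so publishing an update is a single symlink swap and readers never see a half-written file. That is why the tests can simply poll the mounted path until the new value appears. A hedged sketch of that layout (directory names are illustrative):

```shell
# Mimic the kubelet atomic-writer layout for a projected volume.
vol=$(mktemp -d)
mkdir "$vol/..ts_1"
echo value-1 > "$vol/..ts_1/data-1"
ln -s ..ts_1 "$vol/..data"           # current payload directory
ln -s ..data/data-1 "$vol/data-1"    # user-visible key

cat "$vol/data-1"                    # value-1

mkdir "$vol/..ts_2"
echo value-2 > "$vol/..ts_2/data-1"
ln -sfn ..ts_2 "$vol/..data"         # one symlink swap publishes the update

cat "$vol/data-1"                    # value-2
```

On a real cluster the propagation is eventually consistent (it waits on the kubelet's sync loop), which is why the spec's "waiting to observe update in volume" step can take on the order of a minute.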
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 10 15:21:18.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-f857cbbc-3a00-4df9-930a-3af10f4b5293
STEP: Creating configMap with name cm-test-opt-upd-aa2aea7f-3163-439e-a27b-60ad297fb5e9
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-f857cbbc-3a00-4df9-930a-3af10f4b5293
STEP: Updating configmap cm-test-opt-upd-aa2aea7f-3163-439e-a27b-60ad297fb5e9
STEP: Creating configMap with name cm-test-opt-create-110e5904-8e42-4cb5-8a07-74839e003bef
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 10 15:21:33.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3282" for this suite.
Feb 10 15:21:57.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 10 15:21:57.353: INFO: namespace configmap-3282 deletion completed in 24.127916865s

• [SLOW TEST:39.030 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSFeb 10 15:21:57.353: INFO: Running AfterSuite actions on all nodes
Feb 10 15:21:57.353: INFO: Running AfterSuite actions on node 1
Feb 10 15:21:57.353: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 8756.382 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS