I0207 12:56:13.976513 8 e2e.go:243] Starting e2e run "ab51569a-a689-419f-97fc-36fb3979a759" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1581080172 - Will randomize all specs
Will run 215 of 4412 specs

Feb  7 12:56:14.202: INFO: >>> kubeConfig: /root/.kube/config
Feb  7 12:56:14.205: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb  7 12:56:14.230: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb  7 12:56:14.265: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb  7 12:56:14.265: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb  7 12:56:14.265: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb  7 12:56:14.276: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb  7 12:56:14.276: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb  7 12:56:14.276: INFO: e2e test version: v1.15.7
Feb  7 12:56:14.282: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 12:56:14.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Feb  7 12:56:14.432: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Feb  7 12:56:14.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4814 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb  7 12:56:26.962: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0207 12:56:25.806482 30 log.go:172] (0xc0006bef20) (0xc0004f23c0) Create stream\nI0207 12:56:25.806658 30 log.go:172] (0xc0006bef20) (0xc0004f23c0) Stream added, broadcasting: 1\nI0207 12:56:25.813333 30 log.go:172] (0xc0006bef20) Reply frame received for 1\nI0207 12:56:25.813406 30 log.go:172] (0xc0006bef20) (0xc0000d4280) Create stream\nI0207 12:56:25.813432 30 log.go:172] (0xc0006bef20) (0xc0000d4280) Stream added, broadcasting: 3\nI0207 12:56:25.814915 30 log.go:172] (0xc0006bef20) Reply frame received for 3\nI0207 12:56:25.814945 30 log.go:172] (0xc0006bef20) (0xc0004f2460) Create stream\nI0207 12:56:25.814956 30 log.go:172] (0xc0006bef20) (0xc0004f2460) Stream added, broadcasting: 5\nI0207 12:56:25.817722 30 log.go:172] (0xc0006bef20) Reply frame received for 5\nI0207 12:56:25.817784 30 log.go:172] (0xc0006bef20) (0xc0004f2500) Create stream\nI0207 12:56:25.817796 30 log.go:172] (0xc0006bef20) (0xc0004f2500) Stream added, broadcasting: 7\nI0207 12:56:25.819893 30 log.go:172] (0xc0006bef20) Reply frame received for 7\nI0207 12:56:25.820305 30 log.go:172] (0xc0000d4280) (3) Writing data frame\nI0207 12:56:25.820478 30 log.go:172] (0xc0000d4280) (3) Writing data frame\nI0207 12:56:25.832566 30 log.go:172] (0xc0006bef20) Data frame received for 5\nI0207 12:56:25.832652 30 log.go:172] (0xc0004f2460) (5) Data frame handling\nI0207 12:56:25.832676 30 log.go:172] (0xc0004f2460) (5) Data frame sent\nI0207 12:56:25.836728 30 log.go:172] (0xc0006bef20) Data frame received for 5\nI0207 12:56:25.836778 30 log.go:172] (0xc0004f2460) (5) Data frame handling\nI0207 12:56:25.836797 30 log.go:172] (0xc0004f2460) (5) Data frame sent\nI0207 12:56:26.909487 30 log.go:172] (0xc0006bef20) (0xc0000d4280) Stream removed, broadcasting: 3\nI0207 12:56:26.909862 30 log.go:172] (0xc0006bef20) Data frame received for 1\nI0207 12:56:26.909919 30 log.go:172] (0xc0004f23c0) (1) Data frame handling\nI0207 12:56:26.909999 30 log.go:172] (0xc0004f23c0) (1) Data frame sent\nI0207 12:56:26.910139 30 log.go:172] (0xc0006bef20) (0xc0004f2460) Stream removed, broadcasting: 5\nI0207 12:56:26.910313 30 log.go:172] (0xc0006bef20) (0xc0004f2500) Stream removed, broadcasting: 7\nI0207 12:56:26.910430 30 log.go:172] (0xc0006bef20) (0xc0004f23c0) Stream removed, broadcasting: 1\nI0207 12:56:26.910487 30 log.go:172] (0xc0006bef20) Go away received\nI0207 12:56:26.910761 30 log.go:172] (0xc0006bef20) (0xc0004f23c0) Stream removed, broadcasting: 1\nI0207 12:56:26.910783 30 log.go:172] (0xc0006bef20) (0xc0000d4280) Stream removed, broadcasting: 3\nI0207 12:56:26.910800 30 log.go:172] (0xc0006bef20) (0xc0004f2460) Stream removed, broadcasting: 5\nI0207 12:56:26.910819 30 log.go:172] (0xc0006bef20) (0xc0004f2500) Stream removed, broadcasting: 7\n"
Feb  7 12:56:26.962: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 12:56:28.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4814" for this suite.
Feb  7 12:56:35.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:56:35.185: INFO: namespace kubectl-4814 deletion completed in 6.192336795s

• [SLOW TEST:20.903 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
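(Reference: the deprecated --generator=job/v1 invocation above can be replayed by hand against a cluster of the same vintage roughly as follows; the namespace name is illustrative, and later kubectl releases drop this generator in favor of kubectl create job.)

# Stream stdin into a one-shot Job and delete it on exit (kubectl ~1.15 syntax).
echo 'abcd1234' | kubectl --namespace=demo-ns run e2e-test-rm-busybox-job \
    --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 \
    --restart=OnFailure --attach=true --stdin -- sh -c 'cat && echo "stdin closed"'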
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 12:56:35.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb  7 12:56:43.441: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-87b0425e-304d-43c1-9540-5a6c9d093ad4,GenerateName:,Namespace:events-7492,SelfLink:/api/v1/namespaces/events-7492/pods/send-events-87b0425e-304d-43c1-9540-5a6c9d093ad4,UID:9dcf15f4-f774-4fab-b50a-8eadb1247a4a,ResourceVersion:23440129,Generation:0,CreationTimestamp:2020-02-07 12:56:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 306454580,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bf8rw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bf8rw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-bf8rw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002728340} {node.kubernetes.io/unreachable Exists NoExecute 0xc002728360}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:56:35 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:56:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:56:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:56:35 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-07 12:56:35 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-07 12:56:42 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://788cfcf1d83bb6fc0a2d7552fb8f100e68d68cf75ecafe7560d06c7a60b0de16}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
STEP: checking for scheduler event about the pod
Feb  7 12:56:45.450: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb  7 12:56:47.461: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 12:56:47.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7492" for this suite.
Feb  7 12:57:27.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:57:27.646: INFO: namespace events-7492 deletion completed in 40.166474192s

• [SLOW TEST:52.461 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
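(Reference: the scheduler and kubelet events the test asserts on can be listed by hand roughly as follows; the namespace and pod name are taken from the log, and the field selector is one way to scope the query to that pod.)

# Expect a Scheduled event from default-scheduler plus image/container events from the kubelet.
kubectl get events --namespace=events-7492 \
    --field-selector involvedObject.name=send-events-87b0425e-304d-43c1-9540-5a6c9d093ad4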
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 12:57:27.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb  7 12:57:27.911: INFO: Waiting up to 5m0s for pod "downward-api-4e9f861f-91fb-417c-8786-e333060502de" in namespace "downward-api-900" to be "success or failure"
Feb  7 12:57:27.946: INFO: Pod "downward-api-4e9f861f-91fb-417c-8786-e333060502de": Phase="Pending", Reason="", readiness=false. Elapsed: 35.180285ms
Feb  7 12:57:29.953: INFO: Pod "downward-api-4e9f861f-91fb-417c-8786-e333060502de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041701718s
Feb  7 12:57:31.961: INFO: Pod "downward-api-4e9f861f-91fb-417c-8786-e333060502de": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049953745s
Feb  7 12:57:33.971: INFO: Pod "downward-api-4e9f861f-91fb-417c-8786-e333060502de": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060485811s
Feb  7 12:57:35.984: INFO: Pod "downward-api-4e9f861f-91fb-417c-8786-e333060502de": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073149165s
Feb  7 12:57:37.989: INFO: Pod "downward-api-4e9f861f-91fb-417c-8786-e333060502de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.078275745s
STEP: Saw pod success
Feb  7 12:57:37.989: INFO: Pod "downward-api-4e9f861f-91fb-417c-8786-e333060502de" satisfied condition "success or failure"
Feb  7 12:57:37.991: INFO: Trying to get logs from node iruya-node pod downward-api-4e9f861f-91fb-417c-8786-e333060502de container dapi-container: 
STEP: delete the pod
Feb  7 12:57:38.044: INFO: Waiting for pod downward-api-4e9f861f-91fb-417c-8786-e333060502de to disappear
Feb  7 12:57:38.059: INFO: Pod downward-api-4e9f861f-91fb-417c-8786-e333060502de no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 12:57:38.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-900" for this suite.
Feb  7 12:57:44.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:57:44.194: INFO: namespace downward-api-900 deletion completed in 6.127941957s

• [SLOW TEST:16.548 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
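(Reference: a minimal sketch of the kind of pod this test creates, exposing the pod's own UID to the container through a downward-API env var; all names are illustrative.)

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo        # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
EOF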
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 12:57:44.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Feb  7 12:57:56.578: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 12:57:57.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9844" for this suite.
Feb  7 12:58:19.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:58:19.847: INFO: namespace replicaset-9844 deletion completed in 22.220888436s

• [SLOW TEST:35.652 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
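(Reference: the adoption/release mechanics above can be observed by hand; a bare pod whose labels match a ReplicaSet's selector gains an ownerReference to it, and relabeling the pod releases it again. Pod name from the log; the replacement label value is illustrative.)

# Adoption: the formerly orphan pod now carries an ownerReference to the ReplicaSet.
kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences[*].name}'
# Release: change the matched label; the ReplicaSet then creates a replacement pod.
kubectl label pod pod-adoption-release name=no-longer-matching --overwrite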
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 12:58:19.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-2d9ed2a7-101f-491c-a100-61c26a866fe7
STEP: Creating a pod to test consume configMaps
Feb  7 12:58:20.088: INFO: Waiting up to 5m0s for pod "pod-configmaps-33a163e7-8586-43cf-8a52-4e8b60956121" in namespace "configmap-8984" to be "success or failure"
Feb  7 12:58:20.111: INFO: Pod "pod-configmaps-33a163e7-8586-43cf-8a52-4e8b60956121": Phase="Pending", Reason="", readiness=false. Elapsed: 22.970185ms
Feb  7 12:58:22.130: INFO: Pod "pod-configmaps-33a163e7-8586-43cf-8a52-4e8b60956121": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041613807s
Feb  7 12:58:24.154: INFO: Pod "pod-configmaps-33a163e7-8586-43cf-8a52-4e8b60956121": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065706059s
Feb  7 12:58:26.182: INFO: Pod "pod-configmaps-33a163e7-8586-43cf-8a52-4e8b60956121": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093674721s
Feb  7 12:58:28.192: INFO: Pod "pod-configmaps-33a163e7-8586-43cf-8a52-4e8b60956121": Phase="Pending", Reason="", readiness=false. Elapsed: 8.104368825s
Feb  7 12:58:30.201: INFO: Pod "pod-configmaps-33a163e7-8586-43cf-8a52-4e8b60956121": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.11289906s
STEP: Saw pod success
Feb  7 12:58:30.201: INFO: Pod "pod-configmaps-33a163e7-8586-43cf-8a52-4e8b60956121" satisfied condition "success or failure"
Feb  7 12:58:30.207: INFO: Trying to get logs from node iruya-node pod pod-configmaps-33a163e7-8586-43cf-8a52-4e8b60956121 container configmap-volume-test: 
STEP: delete the pod
Feb  7 12:58:30.253: INFO: Waiting for pod pod-configmaps-33a163e7-8586-43cf-8a52-4e8b60956121 to disappear
Feb  7 12:58:30.258: INFO: Pod pod-configmaps-33a163e7-8586-43cf-8a52-4e8b60956121 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 12:58:30.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8984" for this suite.
Feb  7 12:58:36.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:58:36.420: INFO: namespace configmap-8984 deletion completed in 6.157346543s

• [SLOW TEST:16.573 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
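(Reference: a minimal sketch of the two knobs this test exercises, an item mapping that remaps a ConfigMap key to a custom path and a per-item mode; all names are illustrative.)

kubectl create configmap demo-cm --from-literal=data-2=value-2
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -lR /etc/cm && cat /etc/cm/path/to/data-2"]
    volumeMounts:
    - name: cm-volume
      mountPath: /etc/cm
  volumes:
  - name: cm-volume
    configMap:
      name: demo-cm
      items:
      - key: data-2
        path: path/to/data-2
        mode: 0400            # per-item mode, the "Item mode set" part
EOF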
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 12:58:36.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-072980b7-8e62-4ca3-b32e-d41698a984ba
STEP: Creating a pod to test consume secrets
Feb  7 12:58:36.561: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fa181b09-a845-4fc1-a4d2-6b572f6c22af" in namespace "projected-705" to be "success or failure"
Feb  7 12:58:36.584: INFO: Pod "pod-projected-secrets-fa181b09-a845-4fc1-a4d2-6b572f6c22af": Phase="Pending", Reason="", readiness=false. Elapsed: 23.647199ms
Feb  7 12:58:38.598: INFO: Pod "pod-projected-secrets-fa181b09-a845-4fc1-a4d2-6b572f6c22af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036943185s
Feb  7 12:58:40.607: INFO: Pod "pod-projected-secrets-fa181b09-a845-4fc1-a4d2-6b572f6c22af": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046518406s
Feb  7 12:58:42.625: INFO: Pod "pod-projected-secrets-fa181b09-a845-4fc1-a4d2-6b572f6c22af": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0646764s
Feb  7 12:58:44.644: INFO: Pod "pod-projected-secrets-fa181b09-a845-4fc1-a4d2-6b572f6c22af": Phase="Pending", Reason="", readiness=false. Elapsed: 8.083780903s
Feb  7 12:58:46.652: INFO: Pod "pod-projected-secrets-fa181b09-a845-4fc1-a4d2-6b572f6c22af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.091449894s
STEP: Saw pod success
Feb  7 12:58:46.652: INFO: Pod "pod-projected-secrets-fa181b09-a845-4fc1-a4d2-6b572f6c22af" satisfied condition "success or failure"
Feb  7 12:58:46.656: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-fa181b09-a845-4fc1-a4d2-6b572f6c22af container projected-secret-volume-test: 
STEP: delete the pod
Feb  7 12:58:47.126: INFO: Waiting for pod pod-projected-secrets-fa181b09-a845-4fc1-a4d2-6b572f6c22af to disappear
Feb  7 12:58:47.150: INFO: Pod pod-projected-secrets-fa181b09-a845-4fc1-a4d2-6b572f6c22af no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 12:58:47.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-705" for this suite.
Feb  7 12:58:53.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:58:53.364: INFO: namespace projected-705 deletion completed in 6.206196251s

• [SLOW TEST:16.944 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
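(Reference: the pieces under test here are a projected secret volume with defaultMode plus a non-root securityContext with fsGroup; a sketch with illustrative names and values follows.)

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000      # non-root
    fsGroup: 1001        # group ownership applied to the volume files
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -ln /etc/projected"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0440
      sources:
      - secret:
          name: demo-secret
EOF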
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 12:58:53.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Feb  7 12:58:53.554: INFO: Waiting up to 5m0s for pod "client-containers-5281aba3-fa20-405f-ace3-ab8468e992ea" in namespace "containers-5329" to be "success or failure"
Feb  7 12:58:53.563: INFO: Pod "client-containers-5281aba3-fa20-405f-ace3-ab8468e992ea": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048523ms
Feb  7 12:58:55.570: INFO: Pod "client-containers-5281aba3-fa20-405f-ace3-ab8468e992ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015630496s
Feb  7 12:58:57.586: INFO: Pod "client-containers-5281aba3-fa20-405f-ace3-ab8468e992ea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031278385s
Feb  7 12:58:59.593: INFO: Pod "client-containers-5281aba3-fa20-405f-ace3-ab8468e992ea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038559937s
Feb  7 12:59:01.604: INFO: Pod "client-containers-5281aba3-fa20-405f-ace3-ab8468e992ea": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048789344s
Feb  7 12:59:03.613: INFO: Pod "client-containers-5281aba3-fa20-405f-ace3-ab8468e992ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.058150044s
STEP: Saw pod success
Feb  7 12:59:03.613: INFO: Pod "client-containers-5281aba3-fa20-405f-ace3-ab8468e992ea" satisfied condition "success or failure"
Feb  7 12:59:03.616: INFO: Trying to get logs from node iruya-node pod client-containers-5281aba3-fa20-405f-ace3-ab8468e992ea container test-container: 
STEP: delete the pod
Feb  7 12:59:03.666: INFO: Waiting for pod client-containers-5281aba3-fa20-405f-ace3-ab8468e992ea to disappear
Feb  7 12:59:03.771: INFO: Pod client-containers-5281aba3-fa20-405f-ace3-ab8468e992ea no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 12:59:03.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5329" for this suite.
Feb  7 12:59:09.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:59:09.942: INFO: namespace containers-5329 deletion completed in 6.152176167s

• [SLOW TEST:16.578 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
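(Reference: the override being tested is .spec.containers[].command taking precedence over the image's ENTRYPOINT; by hand it amounts to the following, with an illustrative pod name.)

kubectl run entrypoint-demo --image=docker.io/library/busybox:1.29 \
    --restart=Never --command -- echo 'entrypoint overridden'
kubectl logs entrypoint-demo   # prints "entrypoint overridden" once the pod has run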
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 12:59:09.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 12:59:10.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8145" for this suite.
Feb  7 12:59:32.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:59:32.218: INFO: namespace pods-8145 deletion completed in 22.143354376s

• [SLOW TEST:22.274 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
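(Reference: the QoS class is derived by the apiserver from container requests/limits and surfaced in pod status; the assertion in this test boils down to a check like the following, with an illustrative pod name. Requests equal to limits on every container yields Guaranteed.)

kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'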
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 12:59:32.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-682d23e2-26f6-4fbf-be15-559029c52068
STEP: Creating a pod to test consume secrets
Feb  7 12:59:32.327: INFO: Waiting up to 5m0s for pod "pod-secrets-89d54590-365c-40d8-b742-44df076dfcdc" in namespace "secrets-5230" to be "success or failure"
Feb  7 12:59:32.332: INFO: Pod "pod-secrets-89d54590-365c-40d8-b742-44df076dfcdc": Phase="Pending", Reason="", readiness=false. Elapsed: 5.00311ms
Feb  7 12:59:34.341: INFO: Pod "pod-secrets-89d54590-365c-40d8-b742-44df076dfcdc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014518589s
Feb  7 12:59:36.351: INFO: Pod "pod-secrets-89d54590-365c-40d8-b742-44df076dfcdc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024151186s
Feb  7 12:59:38.357: INFO: Pod "pod-secrets-89d54590-365c-40d8-b742-44df076dfcdc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029930779s
Feb  7 12:59:40.365: INFO: Pod "pod-secrets-89d54590-365c-40d8-b742-44df076dfcdc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.038600848s
Feb  7 12:59:42.373: INFO: Pod "pod-secrets-89d54590-365c-40d8-b742-44df076dfcdc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.046404904s
STEP: Saw pod success
Feb  7 12:59:42.373: INFO: Pod "pod-secrets-89d54590-365c-40d8-b742-44df076dfcdc" satisfied condition "success or failure"
Feb  7 12:59:42.377: INFO: Trying to get logs from node iruya-node pod pod-secrets-89d54590-365c-40d8-b742-44df076dfcdc container secret-volume-test: 
STEP: delete the pod
Feb  7 12:59:42.459: INFO: Waiting for pod pod-secrets-89d54590-365c-40d8-b742-44df076dfcdc to disappear
Feb  7 12:59:42.471: INFO: Pod pod-secrets-89d54590-365c-40d8-b742-44df076dfcdc no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 12:59:42.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5230" for this suite.
Feb  7 12:59:48.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:59:48.866: INFO: namespace secrets-5230 deletion completed in 6.388194011s

• [SLOW TEST:16.648 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
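(Reference: defaultMode on a secret volume sets the permission bits of every projected file; one way to check the result from inside such a pod, with an illustrative pod name and mount path.)

# With defaultMode: 0400 the projected file should report mode 400.
kubectl exec secret-volume-demo -- stat -c '%a %n' /etc/secret-volume/data-1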
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 12:59:48.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb  7 12:59:49.026: INFO: Waiting up to 5m0s for pod "pod-e86749b3-a2c9-4f54-8727-b99fbd965e89" in namespace "emptydir-3598" to be "success or failure"
Feb  7 12:59:49.061: INFO: Pod "pod-e86749b3-a2c9-4f54-8727-b99fbd965e89": Phase="Pending", Reason="", readiness=false. Elapsed: 34.405908ms
Feb  7 12:59:51.069: INFO: Pod "pod-e86749b3-a2c9-4f54-8727-b99fbd965e89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042386267s
Feb  7 12:59:53.096: INFO: Pod "pod-e86749b3-a2c9-4f54-8727-b99fbd965e89": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069435011s
Feb  7 12:59:55.267: INFO: Pod "pod-e86749b3-a2c9-4f54-8727-b99fbd965e89": Phase="Pending", Reason="", readiness=false. Elapsed: 6.240461773s
Feb  7 12:59:57.279: INFO: Pod "pod-e86749b3-a2c9-4f54-8727-b99fbd965e89": Phase="Pending", Reason="", readiness=false. Elapsed: 8.251982165s
Feb  7 12:59:59.302: INFO: Pod "pod-e86749b3-a2c9-4f54-8727-b99fbd965e89": Phase="Pending", Reason="", readiness=false. Elapsed: 10.2754874s
Feb  7 13:00:01.315: INFO: Pod "pod-e86749b3-a2c9-4f54-8727-b99fbd965e89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.288833837s
STEP: Saw pod success
Feb  7 13:00:01.316: INFO: Pod "pod-e86749b3-a2c9-4f54-8727-b99fbd965e89" satisfied condition "success or failure"
Feb  7 13:00:01.326: INFO: Trying to get logs from node iruya-node pod pod-e86749b3-a2c9-4f54-8727-b99fbd965e89 container test-container: 
STEP: delete the pod
Feb  7 13:00:01.383: INFO: Waiting for pod pod-e86749b3-a2c9-4f54-8727-b99fbd965e89 to disappear
Feb  7 13:00:01.391: INFO: Pod pod-e86749b3-a2c9-4f54-8727-b99fbd965e89 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:00:01.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3598" for this suite.
Feb  7 13:00:07.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:00:07.567: INFO: namespace emptydir-3598 deletion completed in 6.169343718s

• [SLOW TEST:18.701 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:00:07.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  7 13:00:07.758: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 13.942507ms)
Feb  7 13:00:07.762: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.270937ms)
Feb  7 13:00:07.766: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.284973ms)
Feb  7 13:00:07.771: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.735255ms)
Feb  7 13:00:07.783: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 11.846742ms)
Feb  7 13:00:07.788: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.609656ms)
Feb  7 13:00:07.792: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.934235ms)
Feb  7 13:00:07.805: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 12.259177ms)
Feb  7 13:00:07.832: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 26.683262ms)
Feb  7 13:00:07.841: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.377828ms)
Feb  7 13:00:07.846: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.922684ms)
Feb  7 13:00:07.851: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.029967ms)
Feb  7 13:00:07.855: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.247538ms)
Feb  7 13:00:07.860: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.661109ms)
Feb  7 13:00:07.879: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 18.973421ms)
Feb  7 13:00:07.887: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.940919ms)
Feb  7 13:00:07.892: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.796512ms)
Feb  7 13:00:07.897: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.465473ms)
Feb  7 13:00:07.903: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.381289ms)
Feb  7 13:00:07.911: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.173426ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:00:07.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5843" for this suite.
Feb  7 13:00:14.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:00:14.115: INFO: namespace proxy-5843 deletion completed in 6.198014664s

• [SLOW TEST:6.548 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
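(Reference: the twenty requests above go through the apiserver's node proxy subresource with an explicit kubelet port; the same log listing can be fetched by hand, using the node name from the log and credentials permitted to proxy to nodes.)

kubectl get --raw "/api/v1/nodes/iruya-node:10250/proxy/logs/"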
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:00:14.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  7 13:00:14.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-3987'
Feb  7 13:00:14.311: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  7 13:00:14.311: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Feb  7 13:00:14.341: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Feb  7 13:00:14.366: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb  7 13:00:14.472: INFO: scanned /root for discovery docs: 
Feb  7 13:00:14.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-3987'
Feb  7 13:00:37.681: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb  7 13:00:37.681: INFO: stdout: "Created e2e-test-nginx-rc-5bacceb4ffe4aae7f7c48e8153de228c\nScaling up e2e-test-nginx-rc-5bacceb4ffe4aae7f7c48e8153de228c from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-5bacceb4ffe4aae7f7c48e8153de228c up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-5bacceb4ffe4aae7f7c48e8153de228c to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Feb  7 13:00:37.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3987'
Feb  7 13:00:37.832: INFO: stderr: ""
Feb  7 13:00:37.832: INFO: stdout: "e2e-test-nginx-rc-5bacceb4ffe4aae7f7c48e8153de228c-fx9r6 "
Feb  7 13:00:37.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-5bacceb4ffe4aae7f7c48e8153de228c-fx9r6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3987'
Feb  7 13:00:37.973: INFO: stderr: ""
Feb  7 13:00:37.973: INFO: stdout: "true"
Feb  7 13:00:37.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-5bacceb4ffe4aae7f7c48e8153de228c-fx9r6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3987'
Feb  7 13:00:38.103: INFO: stderr: ""
Feb  7 13:00:38.103: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Feb  7 13:00:38.103: INFO: e2e-test-nginx-rc-5bacceb4ffe4aae7f7c48e8153de228c-fx9r6 is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Feb  7 13:00:38.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-3987'
Feb  7 13:00:38.195: INFO: stderr: ""
Feb  7 13:00:38.195: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:00:38.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3987" for this suite.
Feb  7 13:00:52.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:00:52.346: INFO: namespace kubectl-3987 deletion completed in 14.145656131s

• [SLOW TEST:38.230 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
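(Reference: the deprecated flow above, condensed; the generator and rolling-update flags match the kubectl 1.15 syntax shown in the log, rolling-update only ever applied to ReplicationControllers, and for Deployments the modern equivalent is kubectl rollout.)

kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
kubectl rolling-update e2e-test-nginx-rc --update-period=1s \
    --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent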
S
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:00:52.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  7 13:00:52.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:01:03.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3142" for this suite.
Feb  7 13:01:49.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:01:49.381: INFO: namespace pods-3142 deletion completed in 46.284285243s

• [SLOW TEST:57.035 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
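(Reference: the test drives the pod exec subresource over a websocket; kubectl exec exercises the same v1 API path, negotiating the streaming protocol for you, so an equivalent by-hand check looks like this with an illustrative pod name.)

# Same endpoint the test hits: /api/v1/namespaces/<ns>/pods/<pod>/exec
kubectl exec exec-demo -- /bin/sh -c 'echo remote execution works'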
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:01:49.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-05c07a4e-bd4f-41ff-9414-27282a9e2fff
STEP: Creating a pod to test consume secrets
Feb  7 13:01:49.497: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-36589036-669a-4251-b8ac-ee3382bc9b50" in namespace "projected-7331" to be "success or failure"
Feb  7 13:01:49.514: INFO: Pod "pod-projected-secrets-36589036-669a-4251-b8ac-ee3382bc9b50": Phase="Pending", Reason="", readiness=false. Elapsed: 17.289323ms
Feb  7 13:01:51.520: INFO: Pod "pod-projected-secrets-36589036-669a-4251-b8ac-ee3382bc9b50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023744579s
Feb  7 13:01:53.526: INFO: Pod "pod-projected-secrets-36589036-669a-4251-b8ac-ee3382bc9b50": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029784478s
Feb  7 13:01:55.534: INFO: Pod "pod-projected-secrets-36589036-669a-4251-b8ac-ee3382bc9b50": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037443719s
Feb  7 13:01:57.541: INFO: Pod "pod-projected-secrets-36589036-669a-4251-b8ac-ee3382bc9b50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.044703266s
STEP: Saw pod success
Feb  7 13:01:57.541: INFO: Pod "pod-projected-secrets-36589036-669a-4251-b8ac-ee3382bc9b50" satisfied condition "success or failure"
Feb  7 13:01:57.544: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-36589036-669a-4251-b8ac-ee3382bc9b50 container projected-secret-volume-test: 
STEP: delete the pod
Feb  7 13:01:57.625: INFO: Waiting for pod pod-projected-secrets-36589036-669a-4251-b8ac-ee3382bc9b50 to disappear
Feb  7 13:01:57.654: INFO: Pod pod-projected-secrets-36589036-669a-4251-b8ac-ee3382bc9b50 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:01:57.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7331" for this suite.
Feb  7 13:02:03.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:02:03.891: INFO: namespace projected-7331 deletion completed in 6.231850385s

• [SLOW TEST:14.510 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
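(Reference: the "with mappings" variant differs from a plain secret volume only in the items list, which remaps a secret key to a custom path inside the mount; the field in question can be inspected directly.)

kubectl explain pod.spec.volumes.projected.sources.secret.items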
SSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:02:03.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-3542
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3542 to expose endpoints map[]
Feb  7 13:02:04.179: INFO: Get endpoints failed (20.615583ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Feb  7 13:02:05.187: INFO: successfully validated that service endpoint-test2 in namespace services-3542 exposes endpoints map[] (1.028018289s elapsed)
STEP: Creating pod pod1 in namespace services-3542
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3542 to expose endpoints map[pod1:[80]]
Feb  7 13:02:09.433: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.227232s elapsed, will retry)
Feb  7 13:02:12.556: INFO: successfully validated that service endpoint-test2 in namespace services-3542 exposes endpoints map[pod1:[80]] (7.350930522s elapsed)
STEP: Creating pod pod2 in namespace services-3542
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3542 to expose endpoints map[pod1:[80] pod2:[80]]
Feb  7 13:02:17.896: INFO: Unexpected endpoints: found map[f00935c0-e21d-42a6-9b8d-125ba655a180:[80]], expected map[pod1:[80] pod2:[80]] (5.325984149s elapsed, will retry)
Feb  7 13:02:21.517: INFO: successfully validated that service endpoint-test2 in namespace services-3542 exposes endpoints map[pod1:[80] pod2:[80]] (8.947184998s elapsed)
STEP: Deleting pod pod1 in namespace services-3542
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3542 to expose endpoints map[pod2:[80]]
Feb  7 13:02:22.611: INFO: successfully validated that service endpoint-test2 in namespace services-3542 exposes endpoints map[pod2:[80]] (1.087678153s elapsed)
STEP: Deleting pod pod2 in namespace services-3542
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3542 to expose endpoints map[]
Feb  7 13:02:24.779: INFO: successfully validated that service endpoint-test2 in namespace services-3542 exposes endpoints map[] (2.150543888s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:02:25.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3542" for this suite.
Feb  7 13:02:47.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:02:47.283: INFO: namespace services-3542 deletion completed in 22.140877443s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:43.392 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
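(Reference: the convergence being validated, endpoints tracking pod creation and deletion, is visible by hand with a watch on the endpoints object; service name from the log, namespace illustrative.)

kubectl get endpoints endpoint-test2 --namespace=demo-ns --watch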
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:02:47.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb  7 13:02:47.336: INFO: Waiting up to 5m0s for pod "pod-d7e7c918-551e-4c10-94df-4c4ec8ad8fef" in namespace "emptydir-9031" to be "success or failure"
Feb  7 13:02:47.387: INFO: Pod "pod-d7e7c918-551e-4c10-94df-4c4ec8ad8fef": Phase="Pending", Reason="", readiness=false. Elapsed: 51.362612ms
Feb  7 13:02:49.394: INFO: Pod "pod-d7e7c918-551e-4c10-94df-4c4ec8ad8fef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058051628s
Feb  7 13:02:51.403: INFO: Pod "pod-d7e7c918-551e-4c10-94df-4c4ec8ad8fef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066764147s
Feb  7 13:02:53.409: INFO: Pod "pod-d7e7c918-551e-4c10-94df-4c4ec8ad8fef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073360427s
Feb  7 13:02:55.419: INFO: Pod "pod-d7e7c918-551e-4c10-94df-4c4ec8ad8fef": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082903252s
Feb  7 13:02:57.427: INFO: Pod "pod-d7e7c918-551e-4c10-94df-4c4ec8ad8fef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.090625919s
STEP: Saw pod success
Feb  7 13:02:57.427: INFO: Pod "pod-d7e7c918-551e-4c10-94df-4c4ec8ad8fef" satisfied condition "success or failure"
Feb  7 13:02:57.431: INFO: Trying to get logs from node iruya-node pod pod-d7e7c918-551e-4c10-94df-4c4ec8ad8fef container test-container: 
STEP: delete the pod
Feb  7 13:02:57.492: INFO: Waiting for pod pod-d7e7c918-551e-4c10-94df-4c4ec8ad8fef to disappear
Feb  7 13:02:57.498: INFO: Pod pod-d7e7c918-551e-4c10-94df-4c4ec8ad8fef no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:02:57.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9031" for this suite.
Feb  7 13:03:03.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:03:03.667: INFO: namespace emptydir-9031 deletion completed in 6.161197232s

• [SLOW TEST:16.383 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
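[Editor's note] A minimal sketch of the kind of pod this test creates (names hypothetical): an emptyDir with the default medium, mounted into a busybox container that checks the 0777 mode and writability, then exits so the pod reaches "success or failure" as above.

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-demo            # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "stat -c '%a' /mnt && touch /mnt/probe"]
        volumeMounts:
        - name: scratch
          mountPath: /mnt
      volumes:
      - name: scratch
        emptyDir: {}                 # default medium, i.e. node-local storage
    EOF
    kubectl get pod emptydir-demo -o jsonpath='{.status.phase}'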
SSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:03:03.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:03:57.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7875" for this suite.
Feb  7 13:04:03.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:04:03.370: INFO: namespace container-runtime-7875 deletion completed in 6.274709518s

• [SLOW TEST:59.703 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
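[Editor's note] The three containers above exercise the three restart policies (rpa/rpof/rpn read naturally as RestartPolicy Always, OnFailure, Never). A minimal sketch of the OnFailure case, with hypothetical names:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: terminate-demo           # hypothetical name
    spec:
      restartPolicy: OnFailure
      containers:
      - name: main
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "exit 1"]   # non-zero exit, so the kubelet restarts it
    EOF
    # RestartCount climbs on every failed run, which is the 'RestartCount'
    # expectation the STEPs above check.
    kubectl get pod terminate-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'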
SSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:04:03.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb  7 13:04:03.480: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  7 13:04:03.501: INFO: Waiting for terminating namespaces to be deleted...
Feb  7 13:04:03.539: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Feb  7 13:04:03.552: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Feb  7 13:04:03.552: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  7 13:04:03.552: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb  7 13:04:03.552: INFO: 	Container weave ready: true, restart count 0
Feb  7 13:04:03.552: INFO: 	Container weave-npc ready: true, restart count 0
Feb  7 13:04:03.552: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Feb  7 13:04:03.573: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Feb  7 13:04:03.573: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb  7 13:04:03.573: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Feb  7 13:04:03.573: INFO: 	Container coredns ready: true, restart count 0
Feb  7 13:04:03.573: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Feb  7 13:04:03.573: INFO: 	Container etcd ready: true, restart count 0
Feb  7 13:04:03.573: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb  7 13:04:03.573: INFO: 	Container weave ready: true, restart count 0
Feb  7 13:04:03.573: INFO: 	Container weave-npc ready: true, restart count 0
Feb  7 13:04:03.573: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Feb  7 13:04:03.573: INFO: 	Container coredns ready: true, restart count 0
Feb  7 13:04:03.573: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Feb  7 13:04:03.573: INFO: 	Container kube-controller-manager ready: true, restart count 20
Feb  7 13:04:03.573: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Feb  7 13:04:03.573: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  7 13:04:03.573: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Feb  7 13:04:03.573: INFO: 	Container kube-apiserver ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Feb  7 13:04:03.659: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb  7 13:04:03.659: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb  7 13:04:03.659: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb  7 13:04:03.659: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Feb  7 13:04:03.659: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Feb  7 13:04:03.659: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb  7 13:04:03.659: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Feb  7 13:04:03.659: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb  7 13:04:03.659: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Feb  7 13:04:03.659: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a92f84a9-bfe7-4313-9d8f-8e5610f7b370.15f12077f85eb31d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8925/filler-pod-a92f84a9-bfe7-4313-9d8f-8e5610f7b370 to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a92f84a9-bfe7-4313-9d8f-8e5610f7b370.15f1207941790f0e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a92f84a9-bfe7-4313-9d8f-8e5610f7b370.15f1207a5d957d09], Reason = [Created], Message = [Created container filler-pod-a92f84a9-bfe7-4313-9d8f-8e5610f7b370]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a92f84a9-bfe7-4313-9d8f-8e5610f7b370.15f1207a83595f1d], Reason = [Started], Message = [Started container filler-pod-a92f84a9-bfe7-4313-9d8f-8e5610f7b370]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ff361a2a-0387-46b6-a096-d47c114eac5d.15f12077f71fd144], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8925/filler-pod-ff361a2a-0387-46b6-a096-d47c114eac5d to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ff361a2a-0387-46b6-a096-d47c114eac5d.15f12079456dae28], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ff361a2a-0387-46b6-a096-d47c114eac5d.15f1207a6cd26569], Reason = [Created], Message = [Created container filler-pod-ff361a2a-0387-46b6-a096-d47c114eac5d]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ff361a2a-0387-46b6-a096-d47c114eac5d.15f1207a87a17aeb], Reason = [Started], Message = [Started container filler-pod-ff361a2a-0387-46b6-a096-d47c114eac5d]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f1207ac5754751], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:04:17.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8925" for this suite.
Feb  7 13:04:24.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:04:24.652: INFO: namespace sched-pred-8925 deletion completed in 7.623796128s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:21.281 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
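[Editor's note] The predicate being validated is plain CPU accounting: the filler pods consume most of each node's allocatable CPU, so the final pod's request fits nowhere and it stays Pending with FailedScheduling. A sketch of checking the same arithmetic by hand:

    # Compare allocatable CPU with what is already requested on the node.
    kubectl describe node iruya-node | grep -A 8 'Allocated resources'
    # Any pod whose CPU request exceeds the remaining headroom on every node
    # produces the FailedScheduling event seen above.
    kubectl get events --field-selector reason=FailedScheduling -n sched-pred-8925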
SSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:04:24.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb  7 13:04:24.847: INFO: PodSpec: initContainers in spec.initContainers
Feb  7 13:05:25.380: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-73384f1a-876f-456c-9239-5c700a89b1f4", GenerateName:"", Namespace:"init-container-7741", SelfLink:"/api/v1/namespaces/init-container-7741/pods/pod-init-73384f1a-876f-456c-9239-5c700a89b1f4", UID:"cec1fe0f-c6ff-4b3f-8148-b473cf8bbf14", ResourceVersion:"23441400", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716677464, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"847447214"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-zglxm", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0026798c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-zglxm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-zglxm", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-zglxm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0004a9a08), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00246aa20), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0004a9a90)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0004a9b20)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0004a9b28), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0004a9b2c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677465, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, 
v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677465, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677465, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677464, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc000c525e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00279e540)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00279e5b0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://ce02fa0637e8ce6560b88a0ed29a2b91a0072534b28628443fad279018f19cf7"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000c52800), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000c52720), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:05:25.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7741" for this suite.
Feb  7 13:05:47.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:05:47.514: INFO: namespace init-container-7741 deletion completed in 22.107223868s

• [SLOW TEST:82.862 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
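[Editor's note] The pod dump above boils down to a small spec: init1 runs /bin/false and can never succeed, so init2 and the app container run1 never start while the kubelet, under RestartPolicy Always, keeps restarting init1. A sketch of the equivalent manifest (name hypothetical, containers as in the dump):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-init-demo            # hypothetical name
    spec:
      restartPolicy: Always
      initContainers:
      - name: init1
        image: docker.io/library/busybox:1.29
        command: ["/bin/false"]      # always fails, blocking everything after it
      - name: init2
        image: docker.io/library/busybox:1.29
        command: ["/bin/true"]
      containers:
      - name: run1
        image: k8s.gcr.io/pause:3.1
    EOF
    # init1's RestartCount grows while run1 stays Waiting, matching the dump.
    kubectl get pod pod-init-demo -o jsonpath='{.status.initContainerStatuses[0].restartCount}'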
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:05:47.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  7 13:05:47.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-9987'
Feb  7 13:05:47.714: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  7 13:05:47.714: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Feb  7 13:05:51.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-9987'
Feb  7 13:05:51.908: INFO: stderr: ""
Feb  7 13:05:51.908: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:05:51.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9987" for this suite.
Feb  7 13:06:13.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:06:14.078: INFO: namespace kubectl-9987 deletion completed in 22.161022988s

• [SLOW TEST:26.564 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
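[Editor's note] Per the deprecation warning in the stderr above, the generator-based form is on its way out; kubectl create is the suggested replacement. The non-deprecated equivalent of this test's command would be roughly:

    # kubectl create deployment labels its pods app=<name> rather than run=<name>.
    kubectl create deployment e2e-test-nginx-deployment \
        --image=docker.io/library/nginx:1.14-alpine -n kubectl-9987
    kubectl get pods -n kubectl-9987 -l app=e2e-test-nginx-deployment
    kubectl delete deployment e2e-test-nginx-deployment -n kubectl-9987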
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:06:14.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6885
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-6885
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-6885
Feb  7 13:06:14.222: INFO: Found 0 stateful pods, waiting for 1
Feb  7 13:06:24.229: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb  7 13:06:24.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6885 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  7 13:06:26.313: INFO: stderr: "I0207 13:06:25.943319     211 log.go:172] (0xc0006d04d0) (0xc0006ec640) Create stream\nI0207 13:06:25.943432     211 log.go:172] (0xc0006d04d0) (0xc0006ec640) Stream added, broadcasting: 1\nI0207 13:06:25.952216     211 log.go:172] (0xc0006d04d0) Reply frame received for 1\nI0207 13:06:25.952269     211 log.go:172] (0xc0006d04d0) (0xc0005b6140) Create stream\nI0207 13:06:25.952280     211 log.go:172] (0xc0006d04d0) (0xc0005b6140) Stream added, broadcasting: 3\nI0207 13:06:25.953834     211 log.go:172] (0xc0006d04d0) Reply frame received for 3\nI0207 13:06:25.953881     211 log.go:172] (0xc0006d04d0) (0xc00096a000) Create stream\nI0207 13:06:25.953903     211 log.go:172] (0xc0006d04d0) (0xc00096a000) Stream added, broadcasting: 5\nI0207 13:06:25.955322     211 log.go:172] (0xc0006d04d0) Reply frame received for 5\nI0207 13:06:26.069365     211 log.go:172] (0xc0006d04d0) Data frame received for 5\nI0207 13:06:26.069456     211 log.go:172] (0xc00096a000) (5) Data frame handling\nI0207 13:06:26.069483     211 log.go:172] (0xc00096a000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0207 13:06:26.136899     211 log.go:172] (0xc0006d04d0) Data frame received for 3\nI0207 13:06:26.136983     211 log.go:172] (0xc0005b6140) (3) Data frame handling\nI0207 13:06:26.137000     211 log.go:172] (0xc0005b6140) (3) Data frame sent\nI0207 13:06:26.297413     211 log.go:172] (0xc0006d04d0) Data frame received for 1\nI0207 13:06:26.297548     211 log.go:172] (0xc0006d04d0) (0xc0005b6140) Stream removed, broadcasting: 3\nI0207 13:06:26.297580     211 log.go:172] (0xc0006ec640) (1) Data frame handling\nI0207 13:06:26.297592     211 log.go:172] (0xc0006ec640) (1) Data frame sent\nI0207 13:06:26.297719     211 log.go:172] (0xc0006d04d0) (0xc00096a000) Stream removed, broadcasting: 5\nI0207 13:06:26.297764     211 log.go:172] (0xc0006d04d0) (0xc0006ec640) Stream removed, broadcasting: 1\nI0207 13:06:26.297789     211 log.go:172] (0xc0006d04d0) Go away received\nI0207 13:06:26.298581     211 log.go:172] (0xc0006d04d0) (0xc0006ec640) Stream removed, broadcasting: 1\nI0207 13:06:26.298953     211 log.go:172] (0xc0006d04d0) (0xc0005b6140) Stream removed, broadcasting: 3\nI0207 13:06:26.299063     211 log.go:172] (0xc0006d04d0) (0xc00096a000) Stream removed, broadcasting: 5\n"
Feb  7 13:06:26.314: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  7 13:06:26.314: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  7 13:06:26.330: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb  7 13:06:36.346: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  7 13:06:36.347: INFO: Waiting for statefulset status.replicas updated to 0
Feb  7 13:06:36.895: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.9999998s
Feb  7 13:06:37.905: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.469921911s
Feb  7 13:06:38.933: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.459836785s
Feb  7 13:06:39.941: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.432090337s
Feb  7 13:06:40.961: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.423606815s
Feb  7 13:06:41.969: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.403931042s
Feb  7 13:06:42.976: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.395539653s
Feb  7 13:06:43.998: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.388162869s
Feb  7 13:06:45.009: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.36637228s
Feb  7 13:06:46.020: INFO: Verifying statefulset ss doesn't scale past 1 for another 355.401818ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace statefulset-6885
Feb  7 13:06:47.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6885 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:06:47.495: INFO: stderr: "I0207 13:06:47.196722     242 log.go:172] (0xc000a28370) (0xc0009005a0) Create stream\nI0207 13:06:47.196838     242 log.go:172] (0xc000a28370) (0xc0009005a0) Stream added, broadcasting: 1\nI0207 13:06:47.201475     242 log.go:172] (0xc000a28370) Reply frame received for 1\nI0207 13:06:47.201510     242 log.go:172] (0xc000a28370) (0xc000702000) Create stream\nI0207 13:06:47.201519     242 log.go:172] (0xc000a28370) (0xc000702000) Stream added, broadcasting: 3\nI0207 13:06:47.202798     242 log.go:172] (0xc000a28370) Reply frame received for 3\nI0207 13:06:47.202836     242 log.go:172] (0xc000a28370) (0xc0007521e0) Create stream\nI0207 13:06:47.202844     242 log.go:172] (0xc000a28370) (0xc0007521e0) Stream added, broadcasting: 5\nI0207 13:06:47.204294     242 log.go:172] (0xc000a28370) Reply frame received for 5\nI0207 13:06:47.327494     242 log.go:172] (0xc000a28370) Data frame received for 5\nI0207 13:06:47.327551     242 log.go:172] (0xc0007521e0) (5) Data frame handling\nI0207 13:06:47.327582     242 log.go:172] (0xc0007521e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0207 13:06:47.337292     242 log.go:172] (0xc000a28370) Data frame received for 3\nI0207 13:06:47.337306     242 log.go:172] (0xc000702000) (3) Data frame handling\nI0207 13:06:47.337322     242 log.go:172] (0xc000702000) (3) Data frame sent\nI0207 13:06:47.478809     242 log.go:172] (0xc000a28370) (0xc000702000) Stream removed, broadcasting: 3\nI0207 13:06:47.478934     242 log.go:172] (0xc000a28370) Data frame received for 1\nI0207 13:06:47.478955     242 log.go:172] (0xc0009005a0) (1) Data frame handling\nI0207 13:06:47.478981     242 log.go:172] (0xc0009005a0) (1) Data frame sent\nI0207 13:06:47.479033     242 log.go:172] (0xc000a28370) (0xc0007521e0) Stream removed, broadcasting: 5\nI0207 13:06:47.479072     242 log.go:172] (0xc000a28370) (0xc0009005a0) Stream removed, broadcasting: 1\nI0207 13:06:47.479098     242 log.go:172] (0xc000a28370) Go away received\nI0207 13:06:47.479852     242 log.go:172] (0xc000a28370) (0xc0009005a0) Stream removed, broadcasting: 1\nI0207 13:06:47.479944     242 log.go:172] (0xc000a28370) (0xc000702000) Stream removed, broadcasting: 3\nI0207 13:06:47.479973     242 log.go:172] (0xc000a28370) (0xc0007521e0) Stream removed, broadcasting: 5\n"
Feb  7 13:06:47.496: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  7 13:06:47.496: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  7 13:06:47.505: INFO: Found 1 stateful pods, waiting for 3
Feb  7 13:06:57.509: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 13:06:57.509: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 13:06:57.509: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  7 13:07:07.517: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 13:07:07.517: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 13:07:07.517: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb  7 13:07:07.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6885 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  7 13:07:08.168: INFO: stderr: "I0207 13:07:07.881354     264 log.go:172] (0xc000a14420) (0xc000842640) Create stream\nI0207 13:07:07.881423     264 log.go:172] (0xc000a14420) (0xc000842640) Stream added, broadcasting: 1\nI0207 13:07:07.891153     264 log.go:172] (0xc000a14420) Reply frame received for 1\nI0207 13:07:07.891218     264 log.go:172] (0xc000a14420) (0xc000818000) Create stream\nI0207 13:07:07.891246     264 log.go:172] (0xc000a14420) (0xc000818000) Stream added, broadcasting: 3\nI0207 13:07:07.893418     264 log.go:172] (0xc000a14420) Reply frame received for 3\nI0207 13:07:07.893448     264 log.go:172] (0xc000a14420) (0xc0005d01e0) Create stream\nI0207 13:07:07.893481     264 log.go:172] (0xc000a14420) (0xc0005d01e0) Stream added, broadcasting: 5\nI0207 13:07:07.895556     264 log.go:172] (0xc000a14420) Reply frame received for 5\nI0207 13:07:08.017684     264 log.go:172] (0xc000a14420) Data frame received for 5\nI0207 13:07:08.017719     264 log.go:172] (0xc0005d01e0) (5) Data frame handling\nI0207 13:07:08.017734     264 log.go:172] (0xc0005d01e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0207 13:07:08.022611     264 log.go:172] (0xc000a14420) Data frame received for 3\nI0207 13:07:08.022622     264 log.go:172] (0xc000818000) (3) Data frame handling\nI0207 13:07:08.022632     264 log.go:172] (0xc000818000) (3) Data frame sent\nI0207 13:07:08.161068     264 log.go:172] (0xc000a14420) (0xc000818000) Stream removed, broadcasting: 3\nI0207 13:07:08.161135     264 log.go:172] (0xc000a14420) Data frame received for 1\nI0207 13:07:08.161159     264 log.go:172] (0xc000842640) (1) Data frame handling\nI0207 13:07:08.161184     264 log.go:172] (0xc000842640) (1) Data frame sent\nI0207 13:07:08.161202     264 log.go:172] (0xc000a14420) (0xc0005d01e0) Stream removed, broadcasting: 5\nI0207 13:07:08.161247     264 log.go:172] (0xc000a14420) (0xc000842640) Stream removed, broadcasting: 1\nI0207 13:07:08.161268     264 log.go:172] (0xc000a14420) Go away received\nI0207 13:07:08.161847     264 log.go:172] (0xc000a14420) (0xc000842640) Stream removed, broadcasting: 1\nI0207 13:07:08.161858     264 log.go:172] (0xc000a14420) (0xc000818000) Stream removed, broadcasting: 3\nI0207 13:07:08.161865     264 log.go:172] (0xc000a14420) (0xc0005d01e0) Stream removed, broadcasting: 5\n"
Feb  7 13:07:08.169: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  7 13:07:08.169: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  7 13:07:08.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6885 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  7 13:07:08.649: INFO: stderr: "I0207 13:07:08.343360     284 log.go:172] (0xc000116dc0) (0xc00082a640) Create stream\nI0207 13:07:08.343465     284 log.go:172] (0xc000116dc0) (0xc00082a640) Stream added, broadcasting: 1\nI0207 13:07:08.347440     284 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0207 13:07:08.347468     284 log.go:172] (0xc000116dc0) (0xc0009ec000) Create stream\nI0207 13:07:08.347480     284 log.go:172] (0xc000116dc0) (0xc0009ec000) Stream added, broadcasting: 3\nI0207 13:07:08.348442     284 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0207 13:07:08.348469     284 log.go:172] (0xc000116dc0) (0xc000a3e000) Create stream\nI0207 13:07:08.348483     284 log.go:172] (0xc000116dc0) (0xc000a3e000) Stream added, broadcasting: 5\nI0207 13:07:08.349393     284 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0207 13:07:08.445976     284 log.go:172] (0xc000116dc0) Data frame received for 5\nI0207 13:07:08.446004     284 log.go:172] (0xc000a3e000) (5) Data frame handling\nI0207 13:07:08.446016     284 log.go:172] (0xc000a3e000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0207 13:07:08.505902     284 log.go:172] (0xc000116dc0) Data frame received for 3\nI0207 13:07:08.505984     284 log.go:172] (0xc0009ec000) (3) Data frame handling\nI0207 13:07:08.506010     284 log.go:172] (0xc0009ec000) (3) Data frame sent\nI0207 13:07:08.633705     284 log.go:172] (0xc000116dc0) (0xc0009ec000) Stream removed, broadcasting: 3\nI0207 13:07:08.634086     284 log.go:172] (0xc000116dc0) Data frame received for 1\nI0207 13:07:08.634291     284 log.go:172] (0xc000116dc0) (0xc000a3e000) Stream removed, broadcasting: 5\nI0207 13:07:08.634441     284 log.go:172] (0xc00082a640) (1) Data frame handling\nI0207 13:07:08.634520     284 log.go:172] (0xc00082a640) (1) Data frame sent\nI0207 13:07:08.634537     284 log.go:172] (0xc000116dc0) (0xc00082a640) Stream removed, broadcasting: 1\nI0207 13:07:08.634663     284 log.go:172] (0xc000116dc0) Go away received\nI0207 13:07:08.635724     284 log.go:172] (0xc000116dc0) (0xc00082a640) Stream removed, broadcasting: 1\nI0207 13:07:08.635849     284 log.go:172] (0xc000116dc0) (0xc0009ec000) Stream removed, broadcasting: 3\nI0207 13:07:08.635880     284 log.go:172] (0xc000116dc0) (0xc000a3e000) Stream removed, broadcasting: 5\n"
Feb  7 13:07:08.649: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  7 13:07:08.649: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  7 13:07:08.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6885 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  7 13:07:09.283: INFO: stderr: "I0207 13:07:08.938796     305 log.go:172] (0xc000ab2370) (0xc000a1e5a0) Create stream\nI0207 13:07:08.938900     305 log.go:172] (0xc000ab2370) (0xc000a1e5a0) Stream added, broadcasting: 1\nI0207 13:07:08.958828     305 log.go:172] (0xc000ab2370) Reply frame received for 1\nI0207 13:07:08.958912     305 log.go:172] (0xc000ab2370) (0xc0009f0000) Create stream\nI0207 13:07:08.958928     305 log.go:172] (0xc000ab2370) (0xc0009f0000) Stream added, broadcasting: 3\nI0207 13:07:08.960902     305 log.go:172] (0xc000ab2370) Reply frame received for 3\nI0207 13:07:08.960923     305 log.go:172] (0xc000ab2370) (0xc000a1e640) Create stream\nI0207 13:07:08.960933     305 log.go:172] (0xc000ab2370) (0xc000a1e640) Stream added, broadcasting: 5\nI0207 13:07:08.965171     305 log.go:172] (0xc000ab2370) Reply frame received for 5\nI0207 13:07:09.099583     305 log.go:172] (0xc000ab2370) Data frame received for 5\nI0207 13:07:09.099801     305 log.go:172] (0xc000a1e640) (5) Data frame handling\nI0207 13:07:09.099852     305 log.go:172] (0xc000a1e640) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0207 13:07:09.147691     305 log.go:172] (0xc000ab2370) Data frame received for 3\nI0207 13:07:09.147757     305 log.go:172] (0xc0009f0000) (3) Data frame handling\nI0207 13:07:09.147790     305 log.go:172] (0xc0009f0000) (3) Data frame sent\nI0207 13:07:09.274085     305 log.go:172] (0xc000ab2370) Data frame received for 1\nI0207 13:07:09.274271     305 log.go:172] (0xc000ab2370) (0xc0009f0000) Stream removed, broadcasting: 3\nI0207 13:07:09.274363     305 log.go:172] (0xc000a1e5a0) (1) Data frame handling\nI0207 13:07:09.274419     305 log.go:172] (0xc000a1e5a0) (1) Data frame sent\nI0207 13:07:09.274586     305 log.go:172] (0xc000ab2370) (0xc000a1e640) Stream removed, broadcasting: 5\nI0207 13:07:09.274705     305 log.go:172] (0xc000ab2370) (0xc000a1e5a0) Stream removed, broadcasting: 1\nI0207 13:07:09.274733     305 log.go:172] (0xc000ab2370) Go away received\nI0207 13:07:09.275586     305 log.go:172] (0xc000ab2370) (0xc000a1e5a0) Stream removed, broadcasting: 1\nI0207 13:07:09.275608     305 log.go:172] (0xc000ab2370) (0xc0009f0000) Stream removed, broadcasting: 3\nI0207 13:07:09.275621     305 log.go:172] (0xc000ab2370) (0xc000a1e640) Stream removed, broadcasting: 5\n"
Feb  7 13:07:09.283: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  7 13:07:09.283: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  7 13:07:09.283: INFO: Waiting for statefulset status.replicas updated to 0
Feb  7 13:07:09.288: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb  7 13:07:19.299: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  7 13:07:19.299: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb  7 13:07:19.299: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb  7 13:07:19.331: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999773s
Feb  7 13:07:20.346: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.983263897s
Feb  7 13:07:21.355: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.968842626s
Feb  7 13:07:22.365: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.959737444s
Feb  7 13:07:23.372: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.949584986s
Feb  7 13:07:24.382: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.942711899s
Feb  7 13:07:25.983: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.932932588s
Feb  7 13:07:26.993: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.331943345s
Feb  7 13:07:28.010: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.321144254s
Feb  7 13:07:29.021: INFO: Verifying statefulset ss doesn't scale past 3 for another 303.981407ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods run in namespace statefulset-6885
Feb  7 13:07:30.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6885 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:07:30.608: INFO: stderr: "I0207 13:07:30.232716     324 log.go:172] (0xc00012adc0) (0xc00048e780) Create stream\nI0207 13:07:30.232818     324 log.go:172] (0xc00012adc0) (0xc00048e780) Stream added, broadcasting: 1\nI0207 13:07:30.240591     324 log.go:172] (0xc00012adc0) Reply frame received for 1\nI0207 13:07:30.240695     324 log.go:172] (0xc00012adc0) (0xc00048e820) Create stream\nI0207 13:07:30.240710     324 log.go:172] (0xc00012adc0) (0xc00048e820) Stream added, broadcasting: 3\nI0207 13:07:30.242543     324 log.go:172] (0xc00012adc0) Reply frame received for 3\nI0207 13:07:30.242583     324 log.go:172] (0xc00012adc0) (0xc000710000) Create stream\nI0207 13:07:30.242605     324 log.go:172] (0xc00012adc0) (0xc000710000) Stream added, broadcasting: 5\nI0207 13:07:30.244666     324 log.go:172] (0xc00012adc0) Reply frame received for 5\nI0207 13:07:30.371354     324 log.go:172] (0xc00012adc0) Data frame received for 5\nI0207 13:07:30.371412     324 log.go:172] (0xc000710000) (5) Data frame handling\nI0207 13:07:30.371425     324 log.go:172] (0xc000710000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0207 13:07:30.371441     324 log.go:172] (0xc00012adc0) Data frame received for 3\nI0207 13:07:30.371448     324 log.go:172] (0xc00048e820) (3) Data frame handling\nI0207 13:07:30.371458     324 log.go:172] (0xc00048e820) (3) Data frame sent\nI0207 13:07:30.596322     324 log.go:172] (0xc00012adc0) (0xc00048e820) Stream removed, broadcasting: 3\nI0207 13:07:30.596518     324 log.go:172] (0xc00012adc0) Data frame received for 1\nI0207 13:07:30.596572     324 log.go:172] (0xc00048e780) (1) Data frame handling\nI0207 13:07:30.596603     324 log.go:172] (0xc00048e780) (1) Data frame sent\nI0207 13:07:30.596703     324 log.go:172] (0xc00012adc0) (0xc00048e780) Stream removed, broadcasting: 1\nI0207 13:07:30.596818     324 log.go:172] (0xc00012adc0) (0xc000710000) Stream removed, broadcasting: 5\nI0207 13:07:30.596973     324 log.go:172] (0xc00012adc0) Go away received\nI0207 13:07:30.597514     324 log.go:172] (0xc00012adc0) (0xc00048e780) Stream removed, broadcasting: 1\nI0207 13:07:30.597579     324 log.go:172] (0xc00012adc0) (0xc00048e820) Stream removed, broadcasting: 3\nI0207 13:07:30.597593     324 log.go:172] (0xc00012adc0) (0xc000710000) Stream removed, broadcasting: 5\n"
Feb  7 13:07:30.608: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  7 13:07:30.608: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  7 13:07:30.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6885 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:07:31.016: INFO: stderr: "I0207 13:07:30.785997     345 log.go:172] (0xc00080e210) (0xc000778640) Create stream\nI0207 13:07:30.786162     345 log.go:172] (0xc00080e210) (0xc000778640) Stream added, broadcasting: 1\nI0207 13:07:30.788982     345 log.go:172] (0xc00080e210) Reply frame received for 1\nI0207 13:07:30.789011     345 log.go:172] (0xc00080e210) (0xc0007786e0) Create stream\nI0207 13:07:30.789019     345 log.go:172] (0xc00080e210) (0xc0007786e0) Stream added, broadcasting: 3\nI0207 13:07:30.789861     345 log.go:172] (0xc00080e210) Reply frame received for 3\nI0207 13:07:30.789887     345 log.go:172] (0xc00080e210) (0xc000778780) Create stream\nI0207 13:07:30.789892     345 log.go:172] (0xc00080e210) (0xc000778780) Stream added, broadcasting: 5\nI0207 13:07:30.790787     345 log.go:172] (0xc00080e210) Reply frame received for 5\nI0207 13:07:30.884189     345 log.go:172] (0xc00080e210) Data frame received for 5\nI0207 13:07:30.884266     345 log.go:172] (0xc000778780) (5) Data frame handling\nI0207 13:07:30.884281     345 log.go:172] (0xc000778780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0207 13:07:30.884468     345 log.go:172] (0xc00080e210) Data frame received for 3\nI0207 13:07:30.884492     345 log.go:172] (0xc0007786e0) (3) Data frame handling\nI0207 13:07:30.884502     345 log.go:172] (0xc0007786e0) (3) Data frame sent\nI0207 13:07:31.011303     345 log.go:172] (0xc00080e210) (0xc0007786e0) Stream removed, broadcasting: 3\nI0207 13:07:31.011485     345 log.go:172] (0xc00080e210) Data frame received for 1\nI0207 13:07:31.011507     345 log.go:172] (0xc000778640) (1) Data frame handling\nI0207 13:07:31.011516     345 log.go:172] (0xc000778640) (1) Data frame sent\nI0207 13:07:31.011522     345 log.go:172] (0xc00080e210) (0xc000778780) Stream removed, broadcasting: 5\nI0207 13:07:31.011542     345 log.go:172] (0xc00080e210) (0xc000778640) Stream removed, broadcasting: 1\nI0207 13:07:31.011556     345 log.go:172] (0xc00080e210) Go away received\nI0207 13:07:31.011794     345 log.go:172] (0xc00080e210) (0xc000778640) Stream removed, broadcasting: 1\nI0207 13:07:31.011804     345 log.go:172] (0xc00080e210) (0xc0007786e0) Stream removed, broadcasting: 3\nI0207 13:07:31.011808     345 log.go:172] (0xc00080e210) (0xc000778780) Stream removed, broadcasting: 5\n"
Feb  7 13:07:31.016: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  7 13:07:31.016: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  7 13:07:31.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6885 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:07:31.452: INFO: stderr: "I0207 13:07:31.193292     361 log.go:172] (0xc0009ca420) (0xc00073a640) Create stream\nI0207 13:07:31.193444     361 log.go:172] (0xc0009ca420) (0xc00073a640) Stream added, broadcasting: 1\nI0207 13:07:31.199329     361 log.go:172] (0xc0009ca420) Reply frame received for 1\nI0207 13:07:31.199383     361 log.go:172] (0xc0009ca420) (0xc0005801e0) Create stream\nI0207 13:07:31.199395     361 log.go:172] (0xc0009ca420) (0xc0005801e0) Stream added, broadcasting: 3\nI0207 13:07:31.200910     361 log.go:172] (0xc0009ca420) Reply frame received for 3\nI0207 13:07:31.200957     361 log.go:172] (0xc0009ca420) (0xc0008b6000) Create stream\nI0207 13:07:31.200973     361 log.go:172] (0xc0009ca420) (0xc0008b6000) Stream added, broadcasting: 5\nI0207 13:07:31.202343     361 log.go:172] (0xc0009ca420) Reply frame received for 5\nI0207 13:07:31.298838     361 log.go:172] (0xc0009ca420) Data frame received for 5\nI0207 13:07:31.298946     361 log.go:172] (0xc0008b6000) (5) Data frame handling\nI0207 13:07:31.298965     361 log.go:172] (0xc0008b6000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0207 13:07:31.298988     361 log.go:172] (0xc0009ca420) Data frame received for 3\nI0207 13:07:31.299001     361 log.go:172] (0xc0005801e0) (3) Data frame handling\nI0207 13:07:31.299023     361 log.go:172] (0xc0005801e0) (3) Data frame sent\nI0207 13:07:31.431225     361 log.go:172] (0xc0009ca420) (0xc0005801e0) Stream removed, broadcasting: 3\nI0207 13:07:31.431681     361 log.go:172] (0xc0009ca420) Data frame received for 1\nI0207 13:07:31.431756     361 log.go:172] (0xc00073a640) (1) Data frame handling\nI0207 13:07:31.431797     361 log.go:172] (0xc0009ca420) (0xc0008b6000) Stream removed, broadcasting: 5\nI0207 13:07:31.431962     361 log.go:172] (0xc00073a640) (1) Data frame sent\nI0207 13:07:31.432010     361 log.go:172] (0xc0009ca420) (0xc00073a640) Stream removed, broadcasting: 1\nI0207 13:07:31.432621     361 log.go:172] (0xc0009ca420) Go away received\nI0207 13:07:31.433081     361 log.go:172] (0xc0009ca420) (0xc00073a640) Stream removed, broadcasting: 1\nI0207 13:07:31.433114     361 log.go:172] (0xc0009ca420) (0xc0005801e0) Stream removed, broadcasting: 3\nI0207 13:07:31.433132     361 log.go:172] (0xc0009ca420) (0xc0008b6000) Stream removed, broadcasting: 5\n"
Feb  7 13:07:31.452: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  7 13:07:31.452: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  7 13:07:31.452: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  7 13:08:11.504: INFO: Deleting all statefulset in ns statefulset-6885
Feb  7 13:08:11.512: INFO: Scaling statefulset ss to 0
Feb  7 13:08:11.528: INFO: Waiting for statefulset status.replicas updated to 0
Feb  7 13:08:11.531: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:08:11.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6885" for this suite.
Feb  7 13:08:17.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:08:17.830: INFO: namespace statefulset-6885 deletion completed in 6.207378566s

• [SLOW TEST:123.752 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
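Note: the ordered scale-down verified above can be reproduced by hand. A minimal sketch against the same namespace (the --kubeconfig path matches this run):

# Scale the StatefulSet to zero, then watch pods terminate in reverse
# ordinal order: ss-2 first, then ss-1, then ss-0.
kubectl --kubeconfig=/root/.kube/config -n statefulset-6885 scale statefulset ss --replicas=0
kubectl --kubeconfig=/root/.kube/config -n statefulset-6885 get pods -w
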
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:08:17.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  7 13:08:17.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:08:28.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5210" for this suite.
Feb  7 13:09:20.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:09:20.144: INFO: namespace pods-5210 deletion completed in 52.105488234s

• [SLOW TEST:62.315 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
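Note: this test reads container logs over a websocket connection to the API server's /api/v1/namespaces/<ns>/pods/<pod>/log endpoint rather than a plain HTTP stream. A sketch of the same read path via kubectl, which streams the same endpoint (the pod name is illustrative, not taken from this run):

kubectl --kubeconfig=/root/.kube/config -n pods-5210 logs pod-logs-websocket --follow
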
SSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:09:20.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-785a8f34-363e-4c92-aafc-3de9e4c02732
Feb  7 13:09:20.249: INFO: Pod name my-hostname-basic-785a8f34-363e-4c92-aafc-3de9e4c02732: Found 0 pods out of 1
Feb  7 13:09:25.259: INFO: Pod name my-hostname-basic-785a8f34-363e-4c92-aafc-3de9e4c02732: Found 1 pods out of 1
Feb  7 13:09:25.259: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-785a8f34-363e-4c92-aafc-3de9e4c02732" are running
Feb  7 13:09:29.282: INFO: Pod "my-hostname-basic-785a8f34-363e-4c92-aafc-3de9e4c02732-btkdb" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-07 13:09:20 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-07 13:09:20 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-785a8f34-363e-4c92-aafc-3de9e4c02732]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-07 13:09:20 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-785a8f34-363e-4c92-aafc-3de9e4c02732]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-07 13:09:20 +0000 UTC Reason: Message:}])
Feb  7 13:09:29.282: INFO: Trying to dial the pod
Feb  7 13:09:34.372: INFO: Controller my-hostname-basic-785a8f34-363e-4c92-aafc-3de9e4c02732: Got expected result from replica 1 [my-hostname-basic-785a8f34-363e-4c92-aafc-3de9e4c02732-btkdb]: "my-hostname-basic-785a8f34-363e-4c92-aafc-3de9e4c02732-btkdb", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:09:34.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4392" for this suite.
Feb  7 13:09:40.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:09:40.461: INFO: namespace replication-controller-4392 deletion completed in 6.083273433s

• [SLOW TEST:20.317 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
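Note: a ReplicationController equivalent to the one created above can be sketched as follows. The name, labels, and port are illustrative, and the serve-hostname image is an assumption (the log does not print the image used); each replica answers with its own pod name, which is what the "Got expected result from replica 1" line checks.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic            # illustrative name
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # assumed image
        ports:
        - containerPort: 9376
EOF
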
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:09:40.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb  7 13:09:40.607: INFO: Waiting up to 5m0s for pod "pod-108eb2bc-07d5-4c8a-a5d4-a24c55f0100e" in namespace "emptydir-2598" to be "success or failure"
Feb  7 13:09:40.617: INFO: Pod "pod-108eb2bc-07d5-4c8a-a5d4-a24c55f0100e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.524058ms
Feb  7 13:09:42.624: INFO: Pod "pod-108eb2bc-07d5-4c8a-a5d4-a24c55f0100e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016796778s
Feb  7 13:09:44.637: INFO: Pod "pod-108eb2bc-07d5-4c8a-a5d4-a24c55f0100e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030155644s
Feb  7 13:09:46.654: INFO: Pod "pod-108eb2bc-07d5-4c8a-a5d4-a24c55f0100e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046845001s
Feb  7 13:09:48.667: INFO: Pod "pod-108eb2bc-07d5-4c8a-a5d4-a24c55f0100e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059981775s
Feb  7 13:09:50.674: INFO: Pod "pod-108eb2bc-07d5-4c8a-a5d4-a24c55f0100e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.06698383s
Feb  7 13:09:52.681: INFO: Pod "pod-108eb2bc-07d5-4c8a-a5d4-a24c55f0100e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.074194001s
STEP: Saw pod success
Feb  7 13:09:52.681: INFO: Pod "pod-108eb2bc-07d5-4c8a-a5d4-a24c55f0100e" satisfied condition "success or failure"
Feb  7 13:09:52.684: INFO: Trying to get logs from node iruya-node pod pod-108eb2bc-07d5-4c8a-a5d4-a24c55f0100e container test-container: 
STEP: delete the pod
Feb  7 13:09:52.800: INFO: Waiting for pod pod-108eb2bc-07d5-4c8a-a5d4-a24c55f0100e to disappear
Feb  7 13:09:52.805: INFO: Pod pod-108eb2bc-07d5-4c8a-a5d4-a24c55f0100e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:09:52.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2598" for this suite.
Feb  7 13:09:58.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:09:59.015: INFO: namespace emptydir-2598 deletion completed in 6.203622731s

• [SLOW TEST:18.553 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
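Note: the pod above mounts a memory-backed emptyDir and checks ownership and the 0777 mode from inside the container. A minimal standalone sketch (pod name, mount path, and probe command are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -ld /mnt/volume && mount | grep /mnt/volume"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory        # tmpfs-backed, as exercised by this test
EOF

Once the pod reaches Succeeded, its logs show the tmpfs mount and its permissions, mirroring the "success or failure" check above.
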
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:09:59.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Feb  7 13:09:59.241: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Feb  7 13:09:59.593: INFO: new replicaset for deployment "sample-apiserver-deployment" is yet to be created
Feb  7 13:10:01.831: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677799, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677799, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677799, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677799, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 13:10:03.838: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677799, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677799, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677799, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677799, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 13:10:05.844: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677799, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677799, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677799, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677799, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 13:10:07.839: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677799, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677799, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677799, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677799, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 13:10:16.128: INFO: Waited 6.2703804s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:10:16.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-5723" for this suite.
Feb  7 13:10:22.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:10:22.907: INFO: namespace aggregator-5723 deletion completed in 6.161364095s

• [SLOW TEST:23.891 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
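Note: registering an aggregated API server, as this test does, comes down to creating an APIService object that points the aggregator at a Service fronting the backend. A hedged sketch; the group, version, and service name are illustrative (not printed in this log), and caBundle must be the base64-encoded CA that signed the backend's serving certificate:

kubectl apply -f - <<'EOF'
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.k8s.io       # illustrative group/version
spec:
  group: wardle.k8s.io
  version: v1alpha1
  groupPriorityMinimum: 1000
  versionPriority: 15
  service:
    name: sample-api                 # Service in front of the sample API server
    namespace: aggregator-5723
  caBundle: "<base64-encoded CA bundle>"   # placeholder, must be filled in
EOF
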
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:10:22.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  7 13:10:22.989: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e7505cfa-5627-46ca-ab59-cfba1c682311" in namespace "downward-api-2463" to be "success or failure"
Feb  7 13:10:23.030: INFO: Pod "downwardapi-volume-e7505cfa-5627-46ca-ab59-cfba1c682311": Phase="Pending", Reason="", readiness=false. Elapsed: 41.161177ms
Feb  7 13:10:25.036: INFO: Pod "downwardapi-volume-e7505cfa-5627-46ca-ab59-cfba1c682311": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046751878s
Feb  7 13:10:27.044: INFO: Pod "downwardapi-volume-e7505cfa-5627-46ca-ab59-cfba1c682311": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055213693s
Feb  7 13:10:29.051: INFO: Pod "downwardapi-volume-e7505cfa-5627-46ca-ab59-cfba1c682311": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062125585s
Feb  7 13:10:31.064: INFO: Pod "downwardapi-volume-e7505cfa-5627-46ca-ab59-cfba1c682311": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.075148689s
STEP: Saw pod success
Feb  7 13:10:31.064: INFO: Pod "downwardapi-volume-e7505cfa-5627-46ca-ab59-cfba1c682311" satisfied condition "success or failure"
Feb  7 13:10:31.069: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e7505cfa-5627-46ca-ab59-cfba1c682311 container client-container: 
STEP: delete the pod
Feb  7 13:10:31.126: INFO: Waiting for pod downwardapi-volume-e7505cfa-5627-46ca-ab59-cfba1c682311 to disappear
Feb  7 13:10:31.206: INFO: Pod downwardapi-volume-e7505cfa-5627-46ca-ab59-cfba1c682311 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:10:31.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2463" for this suite.
Feb  7 13:10:37.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:10:37.363: INFO: namespace downward-api-2463 deletion completed in 6.14947251s

• [SLOW TEST:14.456 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
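Note: "set mode on item file" corresponds to the per-item mode field of a downwardAPI volume. A minimal sketch (pod name, label, path, and the mode value are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
  labels:
    zone: us-east-1a
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/podinfo/labels"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
        mode: 0400                   # per-item file mode under test
EOF

The container's ls output should report -r-------- for the projected file, which is the assertion this kind of test makes.
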
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:10:37.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Feb  7 13:10:37.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb  7 13:10:37.510: INFO: stderr: ""
Feb  7 13:10:37.510: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:10:37.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6631" for this suite.
Feb  7 13:10:43.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:10:43.688: INFO: namespace kubectl-6631 deletion completed in 6.172304682s

• [SLOW TEST:6.325 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
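Note: the validation above simply runs cluster-info and checks for the master and KubeDNS endpoints in its output. The same commands by hand, including the dump variant the output itself suggests:

kubectl --kubeconfig=/root/.kube/config cluster-info
# Full diagnostic snapshot rather than just the two service URLs:
kubectl --kubeconfig=/root/.kube/config cluster-info dump --output-directory=/tmp/cluster-state
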
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:10:43.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-6636a2e3-d3e7-434f-8644-a30136a3de68
STEP: Creating a pod to test consume configMaps
Feb  7 13:10:43.938: INFO: Waiting up to 5m0s for pod "pod-configmaps-18f59a84-b41f-45f8-9da3-d0c42b431527" in namespace "configmap-2644" to be "success or failure"
Feb  7 13:10:43.961: INFO: Pod "pod-configmaps-18f59a84-b41f-45f8-9da3-d0c42b431527": Phase="Pending", Reason="", readiness=false. Elapsed: 22.912108ms
Feb  7 13:10:45.970: INFO: Pod "pod-configmaps-18f59a84-b41f-45f8-9da3-d0c42b431527": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031779582s
Feb  7 13:10:47.981: INFO: Pod "pod-configmaps-18f59a84-b41f-45f8-9da3-d0c42b431527": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04209746s
Feb  7 13:10:49.990: INFO: Pod "pod-configmaps-18f59a84-b41f-45f8-9da3-d0c42b431527": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051185109s
Feb  7 13:10:51.997: INFO: Pod "pod-configmaps-18f59a84-b41f-45f8-9da3-d0c42b431527": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058548022s
Feb  7 13:10:54.010: INFO: Pod "pod-configmaps-18f59a84-b41f-45f8-9da3-d0c42b431527": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.071491938s
STEP: Saw pod success
Feb  7 13:10:54.010: INFO: Pod "pod-configmaps-18f59a84-b41f-45f8-9da3-d0c42b431527" satisfied condition "success or failure"
Feb  7 13:10:54.016: INFO: Trying to get logs from node iruya-node pod pod-configmaps-18f59a84-b41f-45f8-9da3-d0c42b431527 container configmap-volume-test: 
STEP: delete the pod
Feb  7 13:10:54.077: INFO: Waiting for pod pod-configmaps-18f59a84-b41f-45f8-9da3-d0c42b431527 to disappear
Feb  7 13:10:54.131: INFO: Pod pod-configmaps-18f59a84-b41f-45f8-9da3-d0c42b431527 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:10:54.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2644" for this suite.
Feb  7 13:11:00.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:11:00.301: INFO: namespace configmap-2644 deletion completed in 6.163931897s

• [SLOW TEST:16.613 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
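Note: "consumable in multiple volumes in the same pod" means one ConfigMap mounted at two paths through two volume entries. A minimal sketch (all names are illustrative):

kubectl create configmap demo-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-two-volumes
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/cm-one/data-1 /etc/cm-two/data-1"]
    volumeMounts:
    - name: cm-one
      mountPath: /etc/cm-one
    - name: cm-two
      mountPath: /etc/cm-two
  volumes:
  - name: cm-one
    configMap:
      name: demo-cm
  - name: cm-two
    configMap:
      name: demo-cm
EOF
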
SSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:11:00.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb  7 13:11:01.205: INFO: Pod name wrapped-volume-race-181cf8cf-90cd-415a-bdb8-6bad7781780d: Found 0 pods out of 5
Feb  7 13:11:06.236: INFO: Pod name wrapped-volume-race-181cf8cf-90cd-415a-bdb8-6bad7781780d: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-181cf8cf-90cd-415a-bdb8-6bad7781780d in namespace emptydir-wrapper-9809, will wait for the garbage collector to delete the pods
Feb  7 13:11:36.428: INFO: Deleting ReplicationController wrapped-volume-race-181cf8cf-90cd-415a-bdb8-6bad7781780d took: 9.431492ms
Feb  7 13:11:36.729: INFO: Terminating ReplicationController wrapped-volume-race-181cf8cf-90cd-415a-bdb8-6bad7781780d pods took: 300.398756ms
STEP: Creating RC which spawns configmap-volume pods
Feb  7 13:12:27.743: INFO: Pod name wrapped-volume-race-ff0971b8-1798-4973-83e0-955aadd9fc7c: Found 0 pods out of 5
Feb  7 13:12:32.755: INFO: Pod name wrapped-volume-race-ff0971b8-1798-4973-83e0-955aadd9fc7c: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-ff0971b8-1798-4973-83e0-955aadd9fc7c in namespace emptydir-wrapper-9809, will wait for the garbage collector to delete the pods
Feb  7 13:13:14.923: INFO: Deleting ReplicationController wrapped-volume-race-ff0971b8-1798-4973-83e0-955aadd9fc7c took: 25.542567ms
Feb  7 13:13:15.224: INFO: Terminating ReplicationController wrapped-volume-race-ff0971b8-1798-4973-83e0-955aadd9fc7c pods took: 300.487639ms
STEP: Creating RC which spawns configmap-volume pods
Feb  7 13:13:59.583: INFO: Pod name wrapped-volume-race-bf1779e1-e8f5-4262-b546-b0a9829d1714: Found 0 pods out of 5
Feb  7 13:14:04.610: INFO: Pod name wrapped-volume-race-bf1779e1-e8f5-4262-b546-b0a9829d1714: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-bf1779e1-e8f5-4262-b546-b0a9829d1714 in namespace emptydir-wrapper-9809, will wait for the garbage collector to delete the pods
Feb  7 13:14:40.792: INFO: Deleting ReplicationController wrapped-volume-race-bf1779e1-e8f5-4262-b546-b0a9829d1714 took: 23.184359ms
Feb  7 13:14:41.192: INFO: Terminating ReplicationController wrapped-volume-race-bf1779e1-e8f5-4262-b546-b0a9829d1714 pods took: 400.529564ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:15:28.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9809" for this suite.
Feb  7 13:15:38.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:15:38.955: INFO: namespace emptydir-wrapper-9809 deletion completed in 10.162320421s

• [SLOW TEST:278.654 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
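Note: the race under test historically occurred when many ConfigMap volumes, each wrapped in an emptyDir by the kubelet, were mounted into pods created at the same time. The setup sketched: create the 50 ConfigMaps, then an RC whose pod template mounts all of them (only the loop is shown; the configmap name is illustrative):

for i in $(seq 0 49); do
  kubectl -n emptydir-wrapper-9809 create configmap wrapped-cm-$i --from-literal=data-1=value-1
done
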
SSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:15:38.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-b33b5363-c1ee-4556-8075-878a3b41d78a in namespace container-probe-667
Feb  7 13:15:51.137: INFO: Started pod liveness-b33b5363-c1ee-4556-8075-878a3b41d78a in namespace container-probe-667
STEP: checking the pod's current state and verifying that restartCount is present
Feb  7 13:15:51.141: INFO: Initial restart count of pod liveness-b33b5363-c1ee-4556-8075-878a3b41d78a is 0
Feb  7 13:16:09.248: INFO: Restart count of pod container-probe-667/liveness-b33b5363-c1ee-4556-8075-878a3b41d78a is now 1 (18.107380151s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:16:09.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-667" for this suite.
Feb  7 13:16:15.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:16:15.676: INFO: namespace container-probe-667 deletion completed in 6.387225628s

• [SLOW TEST:36.721 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
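Note: the pod above serves /healthz, begins failing the probe after a while, and the kubelet restarts the container; the restart-count climbing from 0 to 1 is what passes the test. A standard sketch of such a probe (image and timings mirror the common Kubernetes docs example and are assumptions here):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness       # serves 200 on /healthz, then 500s later on
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
      failureThreshold: 1
EOF
kubectl get pod liveness-http-demo -w   # RESTARTS should climb past 0
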
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:16:15.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  7 13:16:15.887: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb  7 13:16:16.018: INFO: Number of nodes with available pods: 0
Feb  7 13:16:16.018: INFO: Node iruya-node is running more than one daemon pod
Feb  7 13:16:17.225: INFO: Number of nodes with available pods: 0
Feb  7 13:16:17.225: INFO: Node iruya-node is running more than one daemon pod
Feb  7 13:16:18.350: INFO: Number of nodes with available pods: 0
Feb  7 13:16:18.351: INFO: Node iruya-node is running more than one daemon pod
Feb  7 13:16:19.037: INFO: Number of nodes with available pods: 0
Feb  7 13:16:19.037: INFO: Node iruya-node is running more than one daemon pod
Feb  7 13:16:20.028: INFO: Number of nodes with available pods: 0
Feb  7 13:16:20.028: INFO: Node iruya-node is running more than one daemon pod
Feb  7 13:16:21.816: INFO: Number of nodes with available pods: 0
Feb  7 13:16:21.816: INFO: Node iruya-node is running more than one daemon pod
Feb  7 13:16:22.180: INFO: Number of nodes with available pods: 0
Feb  7 13:16:22.181: INFO: Node iruya-node is running more than one daemon pod
Feb  7 13:16:23.043: INFO: Number of nodes with available pods: 0
Feb  7 13:16:23.043: INFO: Node iruya-node is running more than one daemon pod
Feb  7 13:16:24.046: INFO: Number of nodes with available pods: 0
Feb  7 13:16:24.046: INFO: Node iruya-node is running more than one daemon pod
Feb  7 13:16:25.039: INFO: Number of nodes with available pods: 0
Feb  7 13:16:25.039: INFO: Node iruya-node is running more than one daemon pod
Feb  7 13:16:26.027: INFO: Number of nodes with available pods: 2
Feb  7 13:16:26.027: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb  7 13:16:26.084: INFO: Wrong image for pod: daemon-set-76gtq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:26.084: INFO: Wrong image for pod: daemon-set-sbkdp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:27.110: INFO: Wrong image for pod: daemon-set-76gtq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:27.110: INFO: Wrong image for pod: daemon-set-sbkdp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:28.107: INFO: Wrong image for pod: daemon-set-76gtq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:28.107: INFO: Wrong image for pod: daemon-set-sbkdp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:29.109: INFO: Wrong image for pod: daemon-set-76gtq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:29.109: INFO: Wrong image for pod: daemon-set-sbkdp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:30.108: INFO: Wrong image for pod: daemon-set-76gtq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:30.109: INFO: Wrong image for pod: daemon-set-sbkdp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:31.112: INFO: Wrong image for pod: daemon-set-76gtq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:31.113: INFO: Wrong image for pod: daemon-set-sbkdp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:32.108: INFO: Wrong image for pod: daemon-set-76gtq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:32.108: INFO: Wrong image for pod: daemon-set-sbkdp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:33.172: INFO: Wrong image for pod: daemon-set-76gtq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:33.172: INFO: Pod daemon-set-76gtq is not available
Feb  7 13:16:33.172: INFO: Wrong image for pod: daemon-set-sbkdp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:34.155: INFO: Wrong image for pod: daemon-set-76gtq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:34.155: INFO: Pod daemon-set-76gtq is not available
Feb  7 13:16:34.155: INFO: Wrong image for pod: daemon-set-sbkdp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:35.113: INFO: Wrong image for pod: daemon-set-76gtq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:35.113: INFO: Pod daemon-set-76gtq is not available
Feb  7 13:16:35.113: INFO: Wrong image for pod: daemon-set-sbkdp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:36.112: INFO: Wrong image for pod: daemon-set-76gtq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:36.112: INFO: Pod daemon-set-76gtq is not available
Feb  7 13:16:36.112: INFO: Wrong image for pod: daemon-set-sbkdp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:37.112: INFO: Pod daemon-set-qnqr8 is not available
Feb  7 13:16:37.112: INFO: Wrong image for pod: daemon-set-sbkdp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:38.112: INFO: Pod daemon-set-qnqr8 is not available
Feb  7 13:16:38.112: INFO: Wrong image for pod: daemon-set-sbkdp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:39.110: INFO: Pod daemon-set-qnqr8 is not available
Feb  7 13:16:39.110: INFO: Wrong image for pod: daemon-set-sbkdp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:40.111: INFO: Pod daemon-set-qnqr8 is not available
Feb  7 13:16:40.111: INFO: Wrong image for pod: daemon-set-sbkdp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:41.118: INFO: Pod daemon-set-qnqr8 is not available
Feb  7 13:16:41.118: INFO: Wrong image for pod: daemon-set-sbkdp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:42.124: INFO: Pod daemon-set-qnqr8 is not available
Feb  7 13:16:42.124: INFO: Wrong image for pod: daemon-set-sbkdp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:43.107: INFO: Pod daemon-set-qnqr8 is not available
Feb  7 13:16:43.107: INFO: Wrong image for pod: daemon-set-sbkdp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:44.588: INFO: Wrong image for pod: daemon-set-sbkdp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:45.117: INFO: Wrong image for pod: daemon-set-sbkdp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:46.114: INFO: Wrong image for pod: daemon-set-sbkdp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:47.112: INFO: Wrong image for pod: daemon-set-sbkdp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:48.105: INFO: Wrong image for pod: daemon-set-sbkdp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:49.106: INFO: Wrong image for pod: daemon-set-sbkdp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:49.106: INFO: Pod daemon-set-sbkdp is not available
Feb  7 13:16:50.108: INFO: Wrong image for pod: daemon-set-sbkdp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:50.108: INFO: Pod daemon-set-sbkdp is not available
Feb  7 13:16:51.108: INFO: Wrong image for pod: daemon-set-sbkdp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:51.109: INFO: Pod daemon-set-sbkdp is not available
Feb  7 13:16:52.108: INFO: Wrong image for pod: daemon-set-sbkdp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:52.108: INFO: Pod daemon-set-sbkdp is not available
Feb  7 13:16:53.106: INFO: Wrong image for pod: daemon-set-sbkdp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:53.106: INFO: Pod daemon-set-sbkdp is not available
Feb  7 13:16:54.107: INFO: Wrong image for pod: daemon-set-sbkdp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:54.107: INFO: Pod daemon-set-sbkdp is not available
Feb  7 13:16:55.108: INFO: Wrong image for pod: daemon-set-sbkdp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:55.108: INFO: Pod daemon-set-sbkdp is not available
Feb  7 13:16:56.113: INFO: Wrong image for pod: daemon-set-sbkdp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:56.113: INFO: Pod daemon-set-sbkdp is not available
Feb  7 13:16:57.112: INFO: Wrong image for pod: daemon-set-sbkdp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 13:16:57.112: INFO: Pod daemon-set-sbkdp is not available
Feb  7 13:16:58.134: INFO: Pod daemon-set-fbkzw is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb  7 13:16:58.143: INFO: Number of nodes with available pods: 1
Feb  7 13:16:58.143: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  7 13:16:59.331: INFO: Number of nodes with available pods: 1
Feb  7 13:16:59.331: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  7 13:17:00.157: INFO: Number of nodes with available pods: 1
Feb  7 13:17:00.157: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  7 13:17:01.725: INFO: Number of nodes with available pods: 1
Feb  7 13:17:01.725: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  7 13:17:02.158: INFO: Number of nodes with available pods: 1
Feb  7 13:17:02.158: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  7 13:17:03.162: INFO: Number of nodes with available pods: 1
Feb  7 13:17:03.162: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  7 13:17:04.157: INFO: Number of nodes with available pods: 1
Feb  7 13:17:04.158: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  7 13:17:05.222: INFO: Number of nodes with available pods: 1
Feb  7 13:17:05.222: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  7 13:17:06.167: INFO: Number of nodes with available pods: 1
Feb  7 13:17:06.167: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  7 13:17:07.153: INFO: Number of nodes with available pods: 2
Feb  7 13:17:07.153: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9088, will wait for the garbage collector to delete the pods
Feb  7 13:17:07.234: INFO: Deleting DaemonSet.extensions daemon-set took: 11.498207ms
Feb  7 13:17:07.535: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.33252ms
Feb  7 13:17:17.840: INFO: Number of nodes with available pods: 0
Feb  7 13:17:17.840: INFO: Number of running nodes: 0, number of available pods: 0
Feb  7 13:17:17.844: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9088/daemonsets","resourceVersion":"23443752"},"items":null}

Feb  7 13:17:17.849: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9088/pods","resourceVersion":"23443752"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:17:17.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9088" for this suite.
Feb  7 13:17:23.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:17:24.017: INFO: namespace daemonsets-9088 deletion completed in 6.130675601s

• [SLOW TEST:68.339 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
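Note: "update pod when spec was updated" is driven by updateStrategy type RollingUpdate on the DaemonSet. The image flip seen in the "Wrong image for pod" lines can be issued by hand; the container name "app" is an assumption, since the log does not show the pod template:

kubectl -n daemonsets-9088 set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl -n daemonsets-9088 rollout status daemonset/daemon-set

The rollout replaces pods one node at a time (maxUnavailable defaults to 1 for DaemonSets), which matches the long stretch above where only one node had an available pod.
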
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:17:24.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-b8b0c028-4874-480c-a378-d07688efdc80
STEP: Creating configMap with name cm-test-opt-upd-75ff6f2b-3738-475f-8515-d305c62b8fb9
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-b8b0c028-4874-480c-a378-d07688efdc80
STEP: Updating configmap cm-test-opt-upd-75ff6f2b-3738-475f-8515-d305c62b8fb9
STEP: Creating configMap with name cm-test-opt-create-a02cd8bf-1ae8-4a36-b32f-1d4559ca2aaf
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:19:02.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3370" for this suite.
Feb  7 13:19:26.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:19:26.270: INFO: namespace projected-3370 deletion completed in 24.182448045s

• [SLOW TEST:122.253 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
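Note: the "optional updates" case uses a projected volume whose configMap sources are marked optional, so the pod stays Running while one ConfigMap is deleted, one updated, and one created, and the kubelet then syncs the volume contents. A sketch of the volume stanza as a pod-spec fragment (names shortened from the ones in this run):

volumes:
- name: projected-configmaps
  projected:
    sources:
    - configMap:
        name: cm-test-opt-del
        optional: true
    - configMap:
        name: cm-test-opt-upd
        optional: true
    - configMap:
        name: cm-test-opt-create
        optional: true
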
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:19:26.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  7 13:19:26.417: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb  7 13:19:26.430: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb  7 13:19:31.446: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  7 13:19:35.459: INFO: Creating deployment "test-rolling-update-deployment"
Feb  7 13:19:35.468: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb  7 13:19:35.482: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb  7 13:19:37.492: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Feb  7 13:19:37.496: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716678375, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716678375, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716678375, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716678375, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 13:19:39.557: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716678375, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716678375, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716678375, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716678375, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 13:19:41.524: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716678375, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716678375, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716678375, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716678375, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 13:19:43.560: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716678375, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716678375, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716678375, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716678375, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 13:19:45.504: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb  7 13:19:45.520: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-2083,SelfLink:/apis/apps/v1/namespaces/deployment-2083/deployments/test-rolling-update-deployment,UID:aa0fdf66-cbd6-4b06-aff9-599dc1d22b32,ResourceVersion:23444068,Generation:1,CreationTimestamp:2020-02-07 13:19:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-07 13:19:35 +0000 UTC 2020-02-07 13:19:35 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-07 13:19:43 +0000 UTC 2020-02-07 13:19:35 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb  7 13:19:45.525: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-2083,SelfLink:/apis/apps/v1/namespaces/deployment-2083/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:420e2b5a-2f61-4f1c-b1ef-064b8c316ab8,ResourceVersion:23444057,Generation:1,CreationTimestamp:2020-02-07 13:19:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment aa0fdf66-cbd6-4b06-aff9-599dc1d22b32 0xc0022ea997 0xc0022ea998}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb  7 13:19:45.525: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb  7 13:19:45.525: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-2083,SelfLink:/apis/apps/v1/namespaces/deployment-2083/replicasets/test-rolling-update-controller,UID:32a585ec-4510-4830-9d98-371ff1e5f7bc,ResourceVersion:23444066,Generation:2,CreationTimestamp:2020-02-07 13:19:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment aa0fdf66-cbd6-4b06-aff9-599dc1d22b32 0xc0022ea8af 0xc0022ea8c0}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  7 13:19:45.530: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-xt2qw" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-xt2qw,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-2083,SelfLink:/api/v1/namespaces/deployment-2083/pods/test-rolling-update-deployment-79f6b9d75c-xt2qw,UID:69be2c69-17af-433b-a5f0-364155e4a3fa,ResourceVersion:23444056,Generation:0,CreationTimestamp:2020-02-07 13:19:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 420e2b5a-2f61-4f1c-b1ef-064b8c316ab8 0xc0022eb297 0xc0022eb298}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dpc6v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dpc6v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-dpc6v true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022eb310} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022eb330}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:19:35 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:19:43 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:19:43 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:19:35 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-07 13:19:35 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-07 13:19:42 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://a76d2a15a24b2d92d674f8e965d9ac457ec20a4b816872a7d955ad2b6fabf201}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:19:45.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2083" for this suite.
Feb  7 13:19:51.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:19:51.791: INFO: namespace deployment-2083 deletion completed in 6.256567583s

• [SLOW TEST:25.521 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
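For reference, the rolling update exercised above boils down to an apps/v1 Deployment with the RollingUpdate strategy shown in the dump (25% maxUnavailable / 25% maxSurge). A minimal sketch, assuming a reachable cluster; the name rolling-demo is illustrative, not the test's:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rolling-demo              # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%         # as in the dump above
      maxSurge: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
# The new ReplicaSet scales up while any adopted old one scales to 0:
kubectl rollout status deployment/rolling-demo
kubectl get rs -l name=sample-pod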
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:19:51.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb  7 13:19:51.941: INFO: Waiting up to 5m0s for pod "pod-821dceed-bbc5-44e9-b78d-d313435bb77c" in namespace "emptydir-3989" to be "success or failure"
Feb  7 13:19:51.962: INFO: Pod "pod-821dceed-bbc5-44e9-b78d-d313435bb77c": Phase="Pending", Reason="", readiness=false. Elapsed: 20.612097ms
Feb  7 13:19:53.987: INFO: Pod "pod-821dceed-bbc5-44e9-b78d-d313435bb77c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045298732s
Feb  7 13:19:55.998: INFO: Pod "pod-821dceed-bbc5-44e9-b78d-d313435bb77c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056782861s
Feb  7 13:19:58.006: INFO: Pod "pod-821dceed-bbc5-44e9-b78d-d313435bb77c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064827929s
Feb  7 13:20:00.015: INFO: Pod "pod-821dceed-bbc5-44e9-b78d-d313435bb77c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.073885122s
STEP: Saw pod success
Feb  7 13:20:00.015: INFO: Pod "pod-821dceed-bbc5-44e9-b78d-d313435bb77c" satisfied condition "success or failure"
Feb  7 13:20:00.053: INFO: Trying to get logs from node iruya-node pod pod-821dceed-bbc5-44e9-b78d-d313435bb77c container test-container: 
STEP: delete the pod
Feb  7 13:20:00.138: INFO: Waiting for pod pod-821dceed-bbc5-44e9-b78d-d313435bb77c to disappear
Feb  7 13:20:00.141: INFO: Pod pod-821dceed-bbc5-44e9-b78d-d313435bb77c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:20:00.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3989" for this suite.
Feb  7 13:20:06.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:20:06.443: INFO: namespace emptydir-3989 deletion completed in 6.295511929s

• [SLOW TEST:14.651 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
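The log only shows the poll loop; the setup under test is an emptyDir volume on the default (node disk) medium mounted by a non-root container. A minimal sketch with illustrative names, with busybox standing in for the conformance mounttest image (which additionally writes a file and asserts its 0666 mode):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo        # illustrative
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001               # the non-root variant
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "ls -ld /mnt && touch /mnt/f && ls -l /mnt/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir: {}                  # default medium: node disk
EOF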
SSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:20:06.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-1720
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  7 13:20:06.598: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  7 13:20:38.834: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-1720 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 13:20:38.834: INFO: >>> kubeConfig: /root/.kube/config
I0207 13:20:38.932459       8 log.go:172] (0xc000413a20) (0xc000bbca00) Create stream
I0207 13:20:38.932493       8 log.go:172] (0xc000413a20) (0xc000bbca00) Stream added, broadcasting: 1
I0207 13:20:38.944233       8 log.go:172] (0xc000413a20) Reply frame received for 1
I0207 13:20:38.944336       8 log.go:172] (0xc000413a20) (0xc0022c0000) Create stream
I0207 13:20:38.944359       8 log.go:172] (0xc000413a20) (0xc0022c0000) Stream added, broadcasting: 3
I0207 13:20:38.946829       8 log.go:172] (0xc000413a20) Reply frame received for 3
I0207 13:20:38.946871       8 log.go:172] (0xc000413a20) (0xc000a79180) Create stream
I0207 13:20:38.946886       8 log.go:172] (0xc000413a20) (0xc000a79180) Stream added, broadcasting: 5
I0207 13:20:38.949462       8 log.go:172] (0xc000413a20) Reply frame received for 5
I0207 13:20:39.126390       8 log.go:172] (0xc000413a20) Data frame received for 3
I0207 13:20:39.126429       8 log.go:172] (0xc0022c0000) (3) Data frame handling
I0207 13:20:39.126441       8 log.go:172] (0xc0022c0000) (3) Data frame sent
I0207 13:20:39.237889       8 log.go:172] (0xc000413a20) Data frame received for 1
I0207 13:20:39.237997       8 log.go:172] (0xc000bbca00) (1) Data frame handling
I0207 13:20:39.238030       8 log.go:172] (0xc000bbca00) (1) Data frame sent
I0207 13:20:39.238063       8 log.go:172] (0xc000413a20) (0xc000bbca00) Stream removed, broadcasting: 1
I0207 13:20:39.238347       8 log.go:172] (0xc000413a20) (0xc0022c0000) Stream removed, broadcasting: 3
I0207 13:20:39.238485       8 log.go:172] (0xc000413a20) (0xc000a79180) Stream removed, broadcasting: 5
I0207 13:20:39.238533       8 log.go:172] (0xc000413a20) Go away received
I0207 13:20:39.238594       8 log.go:172] (0xc000413a20) (0xc000bbca00) Stream removed, broadcasting: 1
I0207 13:20:39.238613       8 log.go:172] (0xc000413a20) (0xc0022c0000) Stream removed, broadcasting: 3
I0207 13:20:39.238621       8 log.go:172] (0xc000413a20) (0xc000a79180) Stream removed, broadcasting: 5
Feb  7 13:20:39.238: INFO: Waiting for endpoints: map[]
Feb  7 13:20:39.256: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-1720 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 13:20:39.256: INFO: >>> kubeConfig: /root/.kube/config
I0207 13:20:39.330972       8 log.go:172] (0xc0005ce4d0) (0xc000112e60) Create stream
I0207 13:20:39.331017       8 log.go:172] (0xc0005ce4d0) (0xc000112e60) Stream added, broadcasting: 1
I0207 13:20:39.336984       8 log.go:172] (0xc0005ce4d0) Reply frame received for 1
I0207 13:20:39.337027       8 log.go:172] (0xc0005ce4d0) (0xc0022c01e0) Create stream
I0207 13:20:39.337043       8 log.go:172] (0xc0005ce4d0) (0xc0022c01e0) Stream added, broadcasting: 3
I0207 13:20:39.339363       8 log.go:172] (0xc0005ce4d0) Reply frame received for 3
I0207 13:20:39.339408       8 log.go:172] (0xc0005ce4d0) (0xc0022c0280) Create stream
I0207 13:20:39.339416       8 log.go:172] (0xc0005ce4d0) (0xc0022c0280) Stream added, broadcasting: 5
I0207 13:20:39.341546       8 log.go:172] (0xc0005ce4d0) Reply frame received for 5
I0207 13:20:39.465733       8 log.go:172] (0xc0005ce4d0) Data frame received for 3
I0207 13:20:39.465775       8 log.go:172] (0xc0022c01e0) (3) Data frame handling
I0207 13:20:39.465794       8 log.go:172] (0xc0022c01e0) (3) Data frame sent
I0207 13:20:39.599464       8 log.go:172] (0xc0005ce4d0) (0xc0022c01e0) Stream removed, broadcasting: 3
I0207 13:20:39.599552       8 log.go:172] (0xc0005ce4d0) Data frame received for 1
I0207 13:20:39.599578       8 log.go:172] (0xc000112e60) (1) Data frame handling
I0207 13:20:39.599603       8 log.go:172] (0xc000112e60) (1) Data frame sent
I0207 13:20:39.599699       8 log.go:172] (0xc0005ce4d0) (0xc000112e60) Stream removed, broadcasting: 1
I0207 13:20:39.599782       8 log.go:172] (0xc0005ce4d0) (0xc0022c0280) Stream removed, broadcasting: 5
I0207 13:20:39.599819       8 log.go:172] (0xc0005ce4d0) (0xc000112e60) Stream removed, broadcasting: 1
I0207 13:20:39.599829       8 log.go:172] (0xc0005ce4d0) (0xc0022c01e0) Stream removed, broadcasting: 3
I0207 13:20:39.599841       8 log.go:172] (0xc0005ce4d0) (0xc0022c0280) Stream removed, broadcasting: 5
I0207 13:20:39.600125       8 log.go:172] (0xc0005ce4d0) Go away received
Feb  7 13:20:39.600: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:20:39.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1720" for this suite.
Feb  7 13:20:51.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:20:51.789: INFO: namespace pod-network-test-1720 deletion completed in 12.174519007s

• [SLOW TEST:45.346 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
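The ExecWithOptions entries above are equivalent to running the logged curl by hand against the netserver pod's /dial endpoint, using the pod IPs from this run:

kubectl exec -n pod-network-test-1720 host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'"
# A JSON body listing the responding hostname indicates pod-to-pod HTTP works.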
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:20:51.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-dd68fb45-def2-4fe5-a3ce-f0e7236d076f
STEP: Creating a pod to test consume configMaps
Feb  7 13:20:51.978: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2a9cf977-51e1-4e72-b40e-87c790cdc691" in namespace "projected-5066" to be "success or failure"
Feb  7 13:20:51.988: INFO: Pod "pod-projected-configmaps-2a9cf977-51e1-4e72-b40e-87c790cdc691": Phase="Pending", Reason="", readiness=false. Elapsed: 10.03613ms
Feb  7 13:20:54.004: INFO: Pod "pod-projected-configmaps-2a9cf977-51e1-4e72-b40e-87c790cdc691": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02560246s
Feb  7 13:20:56.017: INFO: Pod "pod-projected-configmaps-2a9cf977-51e1-4e72-b40e-87c790cdc691": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038954179s
Feb  7 13:20:58.024: INFO: Pod "pod-projected-configmaps-2a9cf977-51e1-4e72-b40e-87c790cdc691": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04571476s
Feb  7 13:21:00.037: INFO: Pod "pod-projected-configmaps-2a9cf977-51e1-4e72-b40e-87c790cdc691": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058073094s
Feb  7 13:21:02.048: INFO: Pod "pod-projected-configmaps-2a9cf977-51e1-4e72-b40e-87c790cdc691": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069622609s
STEP: Saw pod success
Feb  7 13:21:02.048: INFO: Pod "pod-projected-configmaps-2a9cf977-51e1-4e72-b40e-87c790cdc691" satisfied condition "success or failure"
Feb  7 13:21:02.053: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-2a9cf977-51e1-4e72-b40e-87c790cdc691 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  7 13:21:02.134: INFO: Waiting for pod pod-projected-configmaps-2a9cf977-51e1-4e72-b40e-87c790cdc691 to disappear
Feb  7 13:21:02.194: INFO: Pod pod-projected-configmaps-2a9cf977-51e1-4e72-b40e-87c790cdc691 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:21:02.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5066" for this suite.
Feb  7 13:21:08.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:21:08.374: INFO: namespace projected-5066 deletion completed in 6.171708963s

• [SLOW TEST:16.584 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
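The spec under test mounts the same ConfigMap through two projected volumes in a single pod. A minimal sketch, assuming a reachable cluster; demo-cm and the mount paths are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-cm                   # illustrative
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/cm-one/data-1 /etc/cm-two/data-1"]
    volumeMounts:
    - name: cm-one
      mountPath: /etc/cm-one
    - name: cm-two
      mountPath: /etc/cm-two
  volumes:
  - name: cm-one
    projected:
      sources:
      - configMap:
          name: demo-cm
  - name: cm-two
    projected:
      sources:
      - configMap:
          name: demo-cm
EOF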
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:21:08.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb  7 13:21:08.614: INFO: Waiting up to 5m0s for pod "pod-19aace10-5206-48b3-8fa3-b25d2b381639" in namespace "emptydir-5111" to be "success or failure"
Feb  7 13:21:08.627: INFO: Pod "pod-19aace10-5206-48b3-8fa3-b25d2b381639": Phase="Pending", Reason="", readiness=false. Elapsed: 12.805771ms
Feb  7 13:21:10.634: INFO: Pod "pod-19aace10-5206-48b3-8fa3-b25d2b381639": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019908417s
Feb  7 13:21:12.639: INFO: Pod "pod-19aace10-5206-48b3-8fa3-b25d2b381639": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024893374s
Feb  7 13:21:14.655: INFO: Pod "pod-19aace10-5206-48b3-8fa3-b25d2b381639": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040076827s
Feb  7 13:21:16.665: INFO: Pod "pod-19aace10-5206-48b3-8fa3-b25d2b381639": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050487763s
Feb  7 13:21:18.673: INFO: Pod "pod-19aace10-5206-48b3-8fa3-b25d2b381639": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.058916576s
STEP: Saw pod success
Feb  7 13:21:18.674: INFO: Pod "pod-19aace10-5206-48b3-8fa3-b25d2b381639" satisfied condition "success or failure"
Feb  7 13:21:18.681: INFO: Trying to get logs from node iruya-node pod pod-19aace10-5206-48b3-8fa3-b25d2b381639 container test-container: 
STEP: delete the pod
Feb  7 13:21:18.742: INFO: Waiting for pod pod-19aace10-5206-48b3-8fa3-b25d2b381639 to disappear
Feb  7 13:21:18.827: INFO: Pod pod-19aace10-5206-48b3-8fa3-b25d2b381639 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:21:18.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5111" for this suite.
Feb  7 13:21:24.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:21:25.051: INFO: namespace emptydir-5111 deletion completed in 6.207898953s

• [SLOW TEST:16.677 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
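Same wiring as the default-medium variant above, except the emptyDir is backed by tmpfs via medium: Memory. Sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo       # illustrative
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "mount | grep ' /mnt '"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory              # tmpfs instead of node disk
EOF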
SSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:21:25.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Feb  7 13:21:33.237: INFO: Pod pod-hostip-9f7a0ad7-9367-4c9f-8e62-ab6b8f0f8ea8 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:21:33.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1045" for this suite.
Feb  7 13:21:55.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:21:55.408: INFO: namespace pods-1045 deletion completed in 22.164008849s

• [SLOW TEST:30.357 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:21:55.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  7 13:21:55.494: INFO: Waiting up to 5m0s for pod "downwardapi-volume-00d409bd-a7d8-434e-8a25-5f00974ec345" in namespace "downward-api-5137" to be "success or failure"
Feb  7 13:21:55.574: INFO: Pod "downwardapi-volume-00d409bd-a7d8-434e-8a25-5f00974ec345": Phase="Pending", Reason="", readiness=false. Elapsed: 79.934963ms
Feb  7 13:21:57.587: INFO: Pod "downwardapi-volume-00d409bd-a7d8-434e-8a25-5f00974ec345": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092724249s
Feb  7 13:21:59.594: INFO: Pod "downwardapi-volume-00d409bd-a7d8-434e-8a25-5f00974ec345": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100356454s
Feb  7 13:22:01.613: INFO: Pod "downwardapi-volume-00d409bd-a7d8-434e-8a25-5f00974ec345": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118850026s
Feb  7 13:22:03.632: INFO: Pod "downwardapi-volume-00d409bd-a7d8-434e-8a25-5f00974ec345": Phase="Pending", Reason="", readiness=false. Elapsed: 8.138236323s
Feb  7 13:22:05.641: INFO: Pod "downwardapi-volume-00d409bd-a7d8-434e-8a25-5f00974ec345": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.147384441s
STEP: Saw pod success
Feb  7 13:22:05.641: INFO: Pod "downwardapi-volume-00d409bd-a7d8-434e-8a25-5f00974ec345" satisfied condition "success or failure"
Feb  7 13:22:05.648: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-00d409bd-a7d8-434e-8a25-5f00974ec345 container client-container: 
STEP: delete the pod
Feb  7 13:22:05.708: INFO: Waiting for pod downwardapi-volume-00d409bd-a7d8-434e-8a25-5f00974ec345 to disappear
Feb  7 13:22:05.713: INFO: Pod downwardapi-volume-00d409bd-a7d8-434e-8a25-5f00974ec345 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:22:05.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5137" for this suite.
Feb  7 13:22:11.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:22:11.917: INFO: namespace downward-api-5137 deletion completed in 6.198386685s

• [SLOW TEST:16.509 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:22:11.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Feb  7 13:22:12.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb  7 13:22:12.280: INFO: stderr: ""
Feb  7 13:22:12.280: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:22:12.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8868" for this suite.
Feb  7 13:22:18.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:22:18.515: INFO: namespace kubectl-8868 deletion completed in 6.22980226s

• [SLOW TEST:6.597 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:22:18.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  7 13:22:18.684: INFO: Waiting up to 5m0s for pod "downwardapi-volume-717fb039-a324-4aa7-af21-dc22b2d618cb" in namespace "downward-api-4138" to be "success or failure"
Feb  7 13:22:18.704: INFO: Pod "downwardapi-volume-717fb039-a324-4aa7-af21-dc22b2d618cb": Phase="Pending", Reason="", readiness=false. Elapsed: 20.336599ms
Feb  7 13:22:20.711: INFO: Pod "downwardapi-volume-717fb039-a324-4aa7-af21-dc22b2d618cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027386523s
Feb  7 13:22:22.741: INFO: Pod "downwardapi-volume-717fb039-a324-4aa7-af21-dc22b2d618cb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057418549s
Feb  7 13:22:24.750: INFO: Pod "downwardapi-volume-717fb039-a324-4aa7-af21-dc22b2d618cb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065874744s
Feb  7 13:22:26.773: INFO: Pod "downwardapi-volume-717fb039-a324-4aa7-af21-dc22b2d618cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.088819519s
STEP: Saw pod success
Feb  7 13:22:26.773: INFO: Pod "downwardapi-volume-717fb039-a324-4aa7-af21-dc22b2d618cb" satisfied condition "success or failure"
Feb  7 13:22:26.779: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-717fb039-a324-4aa7-af21-dc22b2d618cb container client-container: 
STEP: delete the pod
Feb  7 13:22:26.924: INFO: Waiting for pod downwardapi-volume-717fb039-a324-4aa7-af21-dc22b2d618cb to disappear
Feb  7 13:22:26.934: INFO: Pod downwardapi-volume-717fb039-a324-4aa7-af21-dc22b2d618cb no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:22:26.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4138" for this suite.
Feb  7 13:22:32.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:22:33.126: INFO: namespace downward-api-4138 deletion completed in 6.187408025s

• [SLOW TEST:14.610 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:22:33.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:22:41.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3368" for this suite.
Feb  7 13:23:33.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:23:33.406: INFO: namespace kubelet-test-3368 deletion completed in 52.110974529s

• [SLOW TEST:60.281 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:23:33.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-e9a22e9f-9ca9-443c-a7c5-dae9e1b5b98a in namespace container-probe-1469
Feb  7 13:23:43.536: INFO: Started pod liveness-e9a22e9f-9ca9-443c-a7c5-dae9e1b5b98a in namespace container-probe-1469
STEP: checking the pod's current state and verifying that restartCount is present
Feb  7 13:23:43.540: INFO: Initial restart count of pod liveness-e9a22e9f-9ca9-443c-a7c5-dae9e1b5b98a is 0
Feb  7 13:24:01.618: INFO: Restart count of pod container-probe-1469/liveness-e9a22e9f-9ca9-443c-a7c5-dae9e1b5b98a is now 1 (18.077629066s elapsed)
Feb  7 13:24:21.783: INFO: Restart count of pod container-probe-1469/liveness-e9a22e9f-9ca9-443c-a7c5-dae9e1b5b98a is now 2 (38.242754581s elapsed)
Feb  7 13:24:43.959: INFO: Restart count of pod container-probe-1469/liveness-e9a22e9f-9ca9-443c-a7c5-dae9e1b5b98a is now 3 (1m0.419233604s elapsed)
Feb  7 13:25:02.470: INFO: Restart count of pod container-probe-1469/liveness-e9a22e9f-9ca9-443c-a7c5-dae9e1b5b98a is now 4 (1m18.929689097s elapsed)
Feb  7 13:26:14.838: INFO: Restart count of pod container-probe-1469/liveness-e9a22e9f-9ca9-443c-a7c5-dae9e1b5b98a is now 5 (2m31.298444511s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:26:14.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1469" for this suite.
Feb  7 13:26:20.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:26:21.069: INFO: namespace container-probe-1469 deletion completed in 6.149038032s

• [SLOW TEST:167.662 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:26:21.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb  7 13:26:31.712: INFO: Successfully updated pod "annotationupdate820eafa9-1239-4c0d-a392-dc904ab8b16b"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:26:33.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-744" for this suite.
Feb  7 13:26:55.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:26:55.976: INFO: namespace projected-744 deletion completed in 22.167518397s

• [SLOW TEST:34.907 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:26:55.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  7 13:26:56.077: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb  7 13:27:01.084: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  7 13:27:03.156: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb  7 13:27:03.191: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-601,SelfLink:/apis/apps/v1/namespaces/deployment-601/deployments/test-cleanup-deployment,UID:cc3e5aa1-f4c1-4fe6-8ccf-67cae5d086da,ResourceVersion:23445011,Generation:1,CreationTimestamp:2020-02-07 13:27:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Feb  7 13:27:03.196: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Feb  7 13:27:03.196: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Feb  7 13:27:03.196: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-601,SelfLink:/apis/apps/v1/namespaces/deployment-601/replicasets/test-cleanup-controller,UID:99c9165d-c340-4f71-a605-5de9a527c2b1,ResourceVersion:23445012,Generation:1,CreationTimestamp:2020-02-07 13:26:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment cc3e5aa1-f4c1-4fe6-8ccf-67cae5d086da 0xc001db3677 0xc001db3678}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb  7 13:27:03.229: INFO: Pod "test-cleanup-controller-rl628" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-rl628,GenerateName:test-cleanup-controller-,Namespace:deployment-601,SelfLink:/api/v1/namespaces/deployment-601/pods/test-cleanup-controller-rl628,UID:824b9199-e7e6-4ff9-84dc-b2e8e85ee839,ResourceVersion:23445008,Generation:0,CreationTimestamp:2020-02-07 13:26:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 99c9165d-c340-4f71-a605-5de9a527c2b1 0xc001db3dd7 0xc001db3dd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-75zg8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75zg8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-75zg8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001db3e50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001db3e70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:26:56 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:27:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:27:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:26:56 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-07 13:26:56 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-07 13:27:02 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://1f4a0f2c799b0a6d5f5ca6bd0bb8d170c0419165e8e0a4f2784e4a2afdcb01d5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:27:03.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-601" for this suite.
Feb  7 13:27:09.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:27:09.447: INFO: namespace deployment-601 deletion completed in 6.191815838s

• [SLOW TEST:13.471 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:27:09.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Feb  7 13:27:09.613: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2413" to be "success or failure"
Feb  7 13:27:09.625: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.777059ms
Feb  7 13:27:11.631: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018040851s
Feb  7 13:27:13.643: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029858727s
Feb  7 13:27:15.654: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041038312s
Feb  7 13:27:17.664: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050642154s
Feb  7 13:27:19.677: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.063501392s
Feb  7 13:27:22.477: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.863336282s
Feb  7 13:27:24.493: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.879359054s
Feb  7 13:27:26.507: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.893435281s
STEP: Saw pod success
Feb  7 13:27:26.507: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb  7 13:27:26.521: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb  7 13:27:26.586: INFO: Waiting for pod pod-host-path-test to disappear
Feb  7 13:27:26.591: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:27:26.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-2413" for this suite.
Feb  7 13:27:32.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:27:32.784: INFO: namespace hostpath-2413 deletion completed in 6.164602338s

• [SLOW TEST:23.336 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
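
The pod above mounts a hostPath volume and has its test containers report the mode of the mount point; "correct mode" refers to the permissions the kubelet exposes inside the container. A hand-rolled sketch of the same idea, with an illustrative host path and busybox's stat standing in for the test image:

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /test-volume"]   # print the mount point's mode
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp
EOF
$ kubectl logs pod-host-path-demo -c test-container-1
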
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:27:32.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-5827
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5827 to expose endpoints map[]
Feb  7 13:27:32.949: INFO: Get endpoints failed (14.914874ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb  7 13:27:33.958: INFO: successfully validated that service multi-endpoint-test in namespace services-5827 exposes endpoints map[] (1.024017406s elapsed)
STEP: Creating pod pod1 in namespace services-5827
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5827 to expose endpoints map[pod1:[100]]
Feb  7 13:27:38.062: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.074545077s elapsed, will retry)
Feb  7 13:27:42.107: INFO: successfully validated that service multi-endpoint-test in namespace services-5827 exposes endpoints map[pod1:[100]] (8.119540059s elapsed)
STEP: Creating pod pod2 in namespace services-5827
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5827 to expose endpoints map[pod1:[100] pod2:[101]]
Feb  7 13:27:46.575: INFO: Unexpected endpoints: found map[0f3396c6-1c9e-4a0c-8d30-6fba9aef8775:[100]], expected map[pod1:[100] pod2:[101]] (4.464437751s elapsed, will retry)
Feb  7 13:27:49.643: INFO: successfully validated that service multi-endpoint-test in namespace services-5827 exposes endpoints map[pod1:[100] pod2:[101]] (7.532286781s elapsed)
STEP: Deleting pod pod1 in namespace services-5827
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5827 to expose endpoints map[pod2:[101]]
Feb  7 13:27:50.741: INFO: successfully validated that service multi-endpoint-test in namespace services-5827 exposes endpoints map[pod2:[101]] (1.08187706s elapsed)
STEP: Deleting pod pod2 in namespace services-5827
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5827 to expose endpoints map[]
Feb  7 13:27:51.824: INFO: successfully validated that service multi-endpoint-test in namespace services-5827 exposes endpoints map[] (1.072660699s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:27:51.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5827" for this suite.
Feb  7 13:27:58.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:27:58.117: INFO: namespace services-5827 deletion completed in 6.166722626s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:25.333 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
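
The endpoint maps in the log (pod1:[100], pod2:[101]) come from a Service exposing two named ports; each backing pod serves one of them, and the test waits for the Endpoints object to track pod creation and deletion. A sketch of such a Service, with target ports chosen for illustration:

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint-test
  ports:
  - name: portname1
    port: 100
    targetPort: 8080
  - name: portname2
    port: 101
    targetPort: 8081
EOF
$ kubectl get endpoints multi-endpoint-test -o yaml   # subsets fill in as matching pods become ready
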
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:27:58.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-2236cab0-b27c-421c-b286-425562258d43
STEP: Creating secret with name s-test-opt-upd-4c7e6b86-2cc6-46d3-8942-135d6502855d
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-2236cab0-b27c-421c-b286-425562258d43
STEP: Updating secret s-test-opt-upd-4c7e6b86-2cc6-46d3-8942-135d6502855d
STEP: Creating secret with name s-test-opt-create-315ae973-ad31-4e8b-b7a7-ece719de8f4e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:29:30.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4548" for this suite.
Feb  7 13:29:52.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:29:52.353: INFO: namespace projected-4548 deletion completed in 22.186130264s

• [SLOW TEST:114.236 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
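
The secret names in the STEP lines (s-test-opt-del..., s-test-opt-upd..., s-test-opt-create...) map onto a projected volume whose sources are all optional: the pod keeps running when a source is deleted, and the kubelet eventually resyncs the mounted files after updates and late creations, which is what "waiting to observe update in volume" polls for. A sketch of the volume shape, with illustrative names:

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
      readOnly: true
  volumes:
  - name: creds
    projected:
      sources:
      - secret:
          name: s-test-opt-del      # may be deleted later; optional keeps the pod healthy
          optional: true
      - secret:
          name: s-test-opt-upd      # edits show up in the mount after a kubelet resync
          optional: true
      - secret:
          name: s-test-opt-create   # may not exist yet at pod start
          optional: true
EOF
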
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:29:52.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-cbb5d373-56ca-4bb2-9896-8f605a54a3ef
STEP: Creating a pod to test consume configMaps
Feb  7 13:29:52.487: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ac07156e-374f-448b-960d-13ee90c9a566" in namespace "projected-8931" to be "success or failure"
Feb  7 13:29:52.496: INFO: Pod "pod-projected-configmaps-ac07156e-374f-448b-960d-13ee90c9a566": Phase="Pending", Reason="", readiness=false. Elapsed: 8.488642ms
Feb  7 13:29:54.506: INFO: Pod "pod-projected-configmaps-ac07156e-374f-448b-960d-13ee90c9a566": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018456563s
Feb  7 13:29:56.516: INFO: Pod "pod-projected-configmaps-ac07156e-374f-448b-960d-13ee90c9a566": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028737723s
Feb  7 13:29:58.526: INFO: Pod "pod-projected-configmaps-ac07156e-374f-448b-960d-13ee90c9a566": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038926739s
Feb  7 13:30:00.539: INFO: Pod "pod-projected-configmaps-ac07156e-374f-448b-960d-13ee90c9a566": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052044288s
Feb  7 13:30:02.551: INFO: Pod "pod-projected-configmaps-ac07156e-374f-448b-960d-13ee90c9a566": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.063696625s
STEP: Saw pod success
Feb  7 13:30:02.551: INFO: Pod "pod-projected-configmaps-ac07156e-374f-448b-960d-13ee90c9a566" satisfied condition "success or failure"
Feb  7 13:30:02.555: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-ac07156e-374f-448b-960d-13ee90c9a566 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  7 13:30:03.096: INFO: Waiting for pod pod-projected-configmaps-ac07156e-374f-448b-960d-13ee90c9a566 to disappear
Feb  7 13:30:03.113: INFO: Pod pod-projected-configmaps-ac07156e-374f-448b-960d-13ee90c9a566 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:30:03.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8931" for this suite.
Feb  7 13:30:09.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:30:09.340: INFO: namespace projected-8931 deletion completed in 6.216202469s

• [SLOW TEST:16.986 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
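
"Mappings and Item mode" refers to the items list of a configMap projection: each entry remaps a key to a path and may carry a per-file mode. A minimal sketch, with an illustrative key, path, and mode:

$ kubectl create configmap projected-cm-demo --from-literal=data-1=value-1
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-pod
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/cm/path/to/data-2; cat /etc/cm/path/to/data-2"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
          items:
          - key: data-1
            path: path/to/data-2    # key remapped to a different filename
            mode: 0400              # per-item file mode
EOF
$ kubectl logs projected-cm-pod
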
------------------------------
SSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:30:09.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb  7 13:30:09.382: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:30:26.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4938" for this suite.
Feb  7 13:30:48.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:30:49.139: INFO: namespace init-container-4938 deletion completed in 22.263567359s

• [SLOW TEST:39.798 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
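
A RestartAlways pod with init containers only reaches Running after every init container has exited successfully, in order; that is the roughly 17-second gap between pod creation and the AfterEach above. A sketch, with illustrative names:

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: busybox
    command: ["sh", "-c", "echo init1 done"]
  - name: init2
    image: busybox
    command: ["sh", "-c", "echo init2 done"]
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine
EOF
$ kubectl get pod init-demo -o jsonpath='{.status.initContainerStatuses[*].ready}'
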
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:30:49.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0207 13:31:00.348523       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  7 13:31:00.348: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:31:00.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2744" for this suite.
Feb  7 13:31:16.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:31:16.822: INFO: namespace gc-2744 deletion completed in 16.465711446s

• [SLOW TEST:27.683 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
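
The STEP "set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well" means those pods carry two ownerReferences; when the first owner is deleted while waiting for dependents, the garbage collector must leave any pod that still has a live owner. The metadata shape involved looks roughly like this (UIDs are placeholders):

metadata:
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted
    uid: <uid-of-deleted-rc>     # owner being deleted while waiting for dependents
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay
    uid: <uid-of-surviving-rc>   # still-valid owner; blocks collection of the pod
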
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:31:16.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  7 13:31:17.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8001'
Feb  7 13:31:19.754: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  7 13:31:19.754: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Feb  7 13:31:21.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-8001'
Feb  7 13:31:22.000: INFO: stderr: ""
Feb  7 13:31:22.000: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:31:22.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8001" for this suite.
Feb  7 13:31:28.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:31:28.323: INFO: namespace kubectl-8001 deletion completed in 6.31367424s

• [SLOW TEST:11.501 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
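
The stderr line is the substance of the test: on this v1.15 cluster a bare kubectl run still falls back to the deployment/apps.v1 generator (hence the deployment.apps object), while warning about its removal. The replacements the warning points at would be, for the same image:

$ kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
$ kubectl run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine
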
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:31:28.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb  7 13:31:44.548: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  7 13:31:44.573: INFO: Pod pod-with-poststart-http-hook still exists
Feb  7 13:31:46.573: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  7 13:31:46.585: INFO: Pod pod-with-poststart-http-hook still exists
Feb  7 13:31:48.574: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  7 13:31:48.598: INFO: Pod pod-with-poststart-http-hook still exists
Feb  7 13:31:50.574: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  7 13:31:50.590: INFO: Pod pod-with-poststart-http-hook still exists
Feb  7 13:31:52.574: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  7 13:31:52.595: INFO: Pod pod-with-poststart-http-hook still exists
Feb  7 13:31:54.573: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  7 13:31:54.587: INFO: Pod pod-with-poststart-http-hook still exists
Feb  7 13:31:56.573: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  7 13:31:56.621: INFO: Pod pod-with-poststart-http-hook still exists
Feb  7 13:31:58.574: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  7 13:31:58.599: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:31:58.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2253" for this suite.
Feb  7 13:32:20.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:32:20.774: INFO: namespace container-lifecycle-hook-2253 deletion completed in 22.161190155s

• [SLOW TEST:52.451 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
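
The helper pod created in BeforeEach ("the container to handle the HTTPGet hook request") is the target of the hooked pod's postStart handler: the kubelet fires an HTTP GET at it right after the main container starts, and the test then confirms the handler received it. The handler stanza looks roughly like this (host, port, and path are illustrative placeholders, not values from this run):

lifecycle:
  postStart:
    httpGet:
      path: /echo?msg=poststart
      host: <handler-pod-ip>     # IP of the hook-handler pod
      port: 8080
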
------------------------------
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:32:20.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb  7 13:35:21.292: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:35:21.333: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:35:23.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:35:23.341: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:35:25.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:35:25.341: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:35:27.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:35:27.344: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:35:29.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:35:29.344: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:35:31.334: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:35:31.350: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:35:33.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:35:33.344: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:35:35.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:35:35.342: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:35:37.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:35:37.341: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:35:39.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:35:39.341: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:35:41.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:35:41.340: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:35:43.334: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:35:43.341: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:35:45.334: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:35:45.344: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:35:47.334: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:35:47.341: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:35:49.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:35:49.341: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:35:51.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:35:51.342: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:35:53.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:35:53.342: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:35:55.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:35:55.344: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:35:57.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:35:57.344: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:35:59.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:35:59.342: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:36:01.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:36:01.342: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:36:03.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:36:03.341: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:36:05.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:36:05.342: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:36:07.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:36:07.346: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:36:09.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:36:09.347: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:36:11.334: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:36:11.345: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:36:13.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:36:13.342: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:36:15.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:36:15.342: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:36:17.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:36:17.342: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:36:19.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:36:19.342: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:36:21.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:36:21.348: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:36:23.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:36:23.342: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:36:25.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:36:25.356: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:36:27.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:36:27.345: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:36:29.334: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:36:29.344: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:36:31.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:36:31.341: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:36:33.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:36:33.344: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:36:35.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:36:35.342: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:36:37.334: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:36:37.342: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:36:39.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:36:39.341: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:36:41.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:36:41.339: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:36:43.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:36:43.339: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:36:45.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:36:45.340: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 13:36:47.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 13:36:47.343: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:36:47.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6803" for this suite.
Feb  7 13:37:09.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:37:09.473: INFO: namespace container-lifecycle-hook-6803 deletion completed in 22.122388678s

• [SLOW TEST:288.698 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
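
This is the exec twin of the previous test: instead of an HTTP GET, the kubelet runs a command inside the container immediately after it starts, and a failing handler would get the container killed under its restart policy. The stanza, with an illustrative command:

lifecycle:
  postStart:
    exec:
      command: ["sh", "-c", "echo poststart > /tmp/hook.log"]
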
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:37:09.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-109
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  7 13:37:09.533: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  7 13:37:43.878: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-109 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 13:37:43.878: INFO: >>> kubeConfig: /root/.kube/config
I0207 13:37:44.005603       8 log.go:172] (0xc0019c8210) (0xc000bbde00) Create stream
I0207 13:37:44.005787       8 log.go:172] (0xc0019c8210) (0xc000bbde00) Stream added, broadcasting: 1
I0207 13:37:44.022412       8 log.go:172] (0xc0019c8210) Reply frame received for 1
I0207 13:37:44.022450       8 log.go:172] (0xc0019c8210) (0xc0018ac320) Create stream
I0207 13:37:44.022462       8 log.go:172] (0xc0019c8210) (0xc0018ac320) Stream added, broadcasting: 3
I0207 13:37:44.024992       8 log.go:172] (0xc0019c8210) Reply frame received for 3
I0207 13:37:44.025216       8 log.go:172] (0xc0019c8210) (0xc00054e000) Create stream
I0207 13:37:44.025274       8 log.go:172] (0xc0019c8210) (0xc00054e000) Stream added, broadcasting: 5
I0207 13:37:44.027980       8 log.go:172] (0xc0019c8210) Reply frame received for 5
I0207 13:37:45.198841       8 log.go:172] (0xc0019c8210) Data frame received for 3
I0207 13:37:45.198905       8 log.go:172] (0xc0018ac320) (3) Data frame handling
I0207 13:37:45.198940       8 log.go:172] (0xc0018ac320) (3) Data frame sent
I0207 13:37:45.477218       8 log.go:172] (0xc0019c8210) Data frame received for 1
I0207 13:37:45.477476       8 log.go:172] (0xc0019c8210) (0xc0018ac320) Stream removed, broadcasting: 3
I0207 13:37:45.477590       8 log.go:172] (0xc000bbde00) (1) Data frame handling
I0207 13:37:45.477683       8 log.go:172] (0xc000bbde00) (1) Data frame sent
I0207 13:37:45.477721       8 log.go:172] (0xc0019c8210) (0xc00054e000) Stream removed, broadcasting: 5
I0207 13:37:45.477808       8 log.go:172] (0xc0019c8210) (0xc000bbde00) Stream removed, broadcasting: 1
I0207 13:37:45.477824       8 log.go:172] (0xc0019c8210) Go away received
I0207 13:37:45.478068       8 log.go:172] (0xc0019c8210) (0xc000bbde00) Stream removed, broadcasting: 1
I0207 13:37:45.478097       8 log.go:172] (0xc0019c8210) (0xc0018ac320) Stream removed, broadcasting: 3
I0207 13:37:45.478136       8 log.go:172] (0xc0019c8210) (0xc00054e000) Stream removed, broadcasting: 5
Feb  7 13:37:45.478: INFO: Found all expected endpoints: [netserver-0]
Feb  7 13:37:45.486: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-109 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 13:37:45.486: INFO: >>> kubeConfig: /root/.kube/config
I0207 13:37:45.548472       8 log.go:172] (0xc000f2e420) (0xc000113720) Create stream
I0207 13:37:45.548515       8 log.go:172] (0xc000f2e420) (0xc000113720) Stream added, broadcasting: 1
I0207 13:37:45.557888       8 log.go:172] (0xc000f2e420) Reply frame received for 1
I0207 13:37:45.557917       8 log.go:172] (0xc000f2e420) (0xc000113e00) Create stream
I0207 13:37:45.557926       8 log.go:172] (0xc000f2e420) (0xc000113e00) Stream added, broadcasting: 3
I0207 13:37:45.559720       8 log.go:172] (0xc000f2e420) Reply frame received for 3
I0207 13:37:45.559750       8 log.go:172] (0xc000f2e420) (0xc0018ac6e0) Create stream
I0207 13:37:45.559799       8 log.go:172] (0xc000f2e420) (0xc0018ac6e0) Stream added, broadcasting: 5
I0207 13:37:45.564470       8 log.go:172] (0xc000f2e420) Reply frame received for 5
I0207 13:37:46.673347       8 log.go:172] (0xc000f2e420) Data frame received for 3
I0207 13:37:46.673420       8 log.go:172] (0xc000113e00) (3) Data frame handling
I0207 13:37:46.673464       8 log.go:172] (0xc000113e00) (3) Data frame sent
I0207 13:37:46.855450       8 log.go:172] (0xc000f2e420) (0xc000113e00) Stream removed, broadcasting: 3
I0207 13:37:46.855571       8 log.go:172] (0xc000f2e420) Data frame received for 1
I0207 13:37:46.855585       8 log.go:172] (0xc000113720) (1) Data frame handling
I0207 13:37:46.855605       8 log.go:172] (0xc000113720) (1) Data frame sent
I0207 13:37:46.855743       8 log.go:172] (0xc000f2e420) (0xc0018ac6e0) Stream removed, broadcasting: 5
I0207 13:37:46.855781       8 log.go:172] (0xc000f2e420) (0xc000113720) Stream removed, broadcasting: 1
I0207 13:37:46.855808       8 log.go:172] (0xc000f2e420) Go away received
I0207 13:37:46.856099       8 log.go:172] (0xc000f2e420) (0xc000113720) Stream removed, broadcasting: 1
I0207 13:37:46.856147       8 log.go:172] (0xc000f2e420) (0xc000113e00) Stream removed, broadcasting: 3
I0207 13:37:46.856168       8 log.go:172] (0xc000f2e420) (0xc0018ac6e0) Stream removed, broadcasting: 5
Feb  7 13:37:46.856: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:37:46.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-109" for this suite.
Feb  7 13:38:09.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:38:09.370: INFO: namespace pod-network-test-109 deletion completed in 22.140309367s

• [SLOW TEST:59.896 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
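
The probe is visible verbatim in the ExecWithOptions lines: from the hostNetwork helper pod, the test pushes the string hostName over UDP to each netserver pod and expects the pod to answer with its own name. Replayed by hand against the first endpoint of this run (pod, container, IP, and port all taken from the log above):

$ kubectl -n pod-network-test-109 exec host-test-container-pod -c hostexec -- \
    sh -c "echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*\$'"
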
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:38:09.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-320ce167-73b1-40b5-87a3-ad8d591f6713
STEP: Creating a pod to test consume configMaps
Feb  7 13:38:09.491: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7b020187-09ef-4f08-b6cc-fe62be064817" in namespace "projected-1025" to be "success or failure"
Feb  7 13:38:09.507: INFO: Pod "pod-projected-configmaps-7b020187-09ef-4f08-b6cc-fe62be064817": Phase="Pending", Reason="", readiness=false. Elapsed: 16.596682ms
Feb  7 13:38:11.514: INFO: Pod "pod-projected-configmaps-7b020187-09ef-4f08-b6cc-fe62be064817": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023281554s
Feb  7 13:38:13.522: INFO: Pod "pod-projected-configmaps-7b020187-09ef-4f08-b6cc-fe62be064817": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031452729s
Feb  7 13:38:15.529: INFO: Pod "pod-projected-configmaps-7b020187-09ef-4f08-b6cc-fe62be064817": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037973406s
Feb  7 13:38:17.541: INFO: Pod "pod-projected-configmaps-7b020187-09ef-4f08-b6cc-fe62be064817": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050785381s
Feb  7 13:38:19.550: INFO: Pod "pod-projected-configmaps-7b020187-09ef-4f08-b6cc-fe62be064817": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059101225s
STEP: Saw pod success
Feb  7 13:38:19.550: INFO: Pod "pod-projected-configmaps-7b020187-09ef-4f08-b6cc-fe62be064817" satisfied condition "success or failure"
Feb  7 13:38:19.554: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-7b020187-09ef-4f08-b6cc-fe62be064817 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  7 13:38:19.618: INFO: Waiting for pod pod-projected-configmaps-7b020187-09ef-4f08-b6cc-fe62be064817 to disappear
Feb  7 13:38:19.631: INFO: Pod pod-projected-configmaps-7b020187-09ef-4f08-b6cc-fe62be064817 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:38:19.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1025" for this suite.
Feb  7 13:38:25.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:38:25.904: INFO: namespace projected-1025 deletion completed in 6.25909441s

• [SLOW TEST:16.534 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
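
Unlike the per-item mode in the earlier mappings test, defaultMode applies one file mode to every projected file that does not set its own. The relevant volume fragment, with an illustrative mode:

volumes:
- name: cm
  projected:
    defaultMode: 0400            # applies to all projected files lacking a per-item mode
    sources:
    - configMap:
        name: projected-configmap-test-volume
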
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:38:25.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  7 13:38:26.042: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8ee2c945-d1f4-43d5-a8e8-9457d17173d8" in namespace "projected-3459" to be "success or failure"
Feb  7 13:38:26.103: INFO: Pod "downwardapi-volume-8ee2c945-d1f4-43d5-a8e8-9457d17173d8": Phase="Pending", Reason="", readiness=false. Elapsed: 60.497496ms
Feb  7 13:38:28.150: INFO: Pod "downwardapi-volume-8ee2c945-d1f4-43d5-a8e8-9457d17173d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107523296s
Feb  7 13:38:30.161: INFO: Pod "downwardapi-volume-8ee2c945-d1f4-43d5-a8e8-9457d17173d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118401459s
Feb  7 13:38:32.172: INFO: Pod "downwardapi-volume-8ee2c945-d1f4-43d5-a8e8-9457d17173d8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12923403s
Feb  7 13:38:34.177: INFO: Pod "downwardapi-volume-8ee2c945-d1f4-43d5-a8e8-9457d17173d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.134663268s
STEP: Saw pod success
Feb  7 13:38:34.177: INFO: Pod "downwardapi-volume-8ee2c945-d1f4-43d5-a8e8-9457d17173d8" satisfied condition "success or failure"
Feb  7 13:38:34.179: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8ee2c945-d1f4-43d5-a8e8-9457d17173d8 container client-container: 
STEP: delete the pod
Feb  7 13:38:34.260: INFO: Waiting for pod downwardapi-volume-8ee2c945-d1f4-43d5-a8e8-9457d17173d8 to disappear
Feb  7 13:38:34.297: INFO: Pod downwardapi-volume-8ee2c945-d1f4-43d5-a8e8-9457d17173d8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:38:34.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3459" for this suite.
Feb  7 13:38:40.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:38:40.581: INFO: namespace projected-3459 deletion completed in 6.242921829s

• [SLOW TEST:14.678 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
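
The downward API volume here surfaces the container's own memory request as a file via resourceFieldRef, which is what the client-container reads back. A self-contained sketch, with illustrative names and a 32Mi request:

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
EOF
$ kubectl logs downwardapi-demo    # prints the request in bytes, e.g. 33554432
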
------------------------------
SSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:38:40.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb  7 13:38:40.665: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  7 13:38:40.712: INFO: Waiting for terminating namespaces to be deleted...
Feb  7 13:38:40.756: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Feb  7 13:38:40.767: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb  7 13:38:40.767: INFO: 	Container weave ready: true, restart count 0
Feb  7 13:38:40.768: INFO: 	Container weave-npc ready: true, restart count 0
Feb  7 13:38:40.768: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Feb  7 13:38:40.768: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  7 13:38:40.768: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Feb  7 13:38:40.782: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Feb  7 13:38:40.782: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb  7 13:38:40.782: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Feb  7 13:38:40.782: INFO: 	Container coredns ready: true, restart count 0
Feb  7 13:38:40.782: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Feb  7 13:38:40.782: INFO: 	Container etcd ready: true, restart count 0
Feb  7 13:38:40.782: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb  7 13:38:40.782: INFO: 	Container weave ready: true, restart count 0
Feb  7 13:38:40.782: INFO: 	Container weave-npc ready: true, restart count 0
Feb  7 13:38:40.782: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Feb  7 13:38:40.782: INFO: 	Container coredns ready: true, restart count 0
Feb  7 13:38:40.782: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Feb  7 13:38:40.782: INFO: 	Container kube-controller-manager ready: true, restart count 20
Feb  7 13:38:40.782: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Feb  7 13:38:40.782: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  7 13:38:40.782: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Feb  7 13:38:40.782: INFO: 	Container kube-apiserver ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15f1225b8de5743d], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:38:41.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9645" for this suite.
Feb  7 13:38:47.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:38:47.955: INFO: namespace sched-pred-9645 deletion completed in 6.131642581s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.373 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
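
The Warning event is the expected result: a non-empty nodeSelector that no node satisfies leaves the pod unschedulable. Reproduced by hand (the label is chosen so that it matches nothing):

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    e2e-test: no-such-label      # no node carries this label
  containers:
  - name: pause
    image: busybox
    command: ["sleep", "3600"]
EOF
$ kubectl describe pod restricted-pod    # Events should show FailedScheduling: node(s) didn't match node selector
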
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:38:47.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb  7 13:38:54.393: INFO: 0 pods remaining
Feb  7 13:38:54.393: INFO: 0 pods have nil DeletionTimestamp
Feb  7 13:38:54.393: INFO: 
STEP: Gathering metrics
W0207 13:38:55.448916       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  7 13:38:55.448: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:38:55.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7690" for this suite.
Feb  7 13:39:05.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:39:05.910: INFO: namespace gc-7690 deletion completed in 10.457697343s

• [SLOW TEST:17.955 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
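
"If the deleteOptions says so" means propagationPolicy: Foreground: the owner gets a foregroundDeletion finalizer and only disappears after the garbage collector has removed its dependents, which matches the "0 pods remaining" lines before the rc finally goes away. Through the raw API (via kubectl proxy; the namespace and rc name are illustrative):

$ kubectl proxy &
$ curl -X DELETE http://127.0.0.1:8001/api/v1/namespaces/default/replicationcontrollers/simpletest-rc \
    -H 'Content-Type: application/json' \
    -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
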
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:39:05.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Feb  7 13:39:06.065: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:39:06.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9699" for this suite.
Feb  7 13:39:12.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:39:12.312: INFO: namespace kubectl-9699 deletion completed in 6.124723456s

• [SLOW TEST:6.402 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
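
For reference, `--port=0` (the `-p 0` above) asks kubectl proxy to bind any free local port and announce it on stdout. A rough Go sketch of driving that from a test, assuming the banner format "Starting to serve on 127.0.0.1:<port>" that kubectl proxy normally prints; the helper name is illustrative:

    package proxysketch

    import (
        "bufio"
        "fmt"
        "os/exec"
    )

    // startProxyOnRandomPort launches `kubectl proxy --port=0`, which binds
    // an unused local port, and returns the banner line the proxy prints.
    func startProxyOnRandomPort(kubeconfig string) (string, *exec.Cmd, error) {
        cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "proxy", "--port=0")
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            return "", nil, err
        }
        if err := cmd.Start(); err != nil {
            return "", nil, err
        }
        sc := bufio.NewScanner(stdout)
        if !sc.Scan() {
            return "", cmd, fmt.Errorf("proxy exited before printing a banner")
        }
        // Caller parses the port out of the banner, curls
        // http://127.0.0.1:<port>/api/, then kills cmd.Process.
        return sc.Text(), cmd, nil
    }
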
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:39:12.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb  7 13:39:32.495: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1682 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 13:39:32.495: INFO: >>> kubeConfig: /root/.kube/config
I0207 13:39:32.573751       8 log.go:172] (0xc002033340) (0xc0010c9cc0) Create stream
I0207 13:39:32.573830       8 log.go:172] (0xc002033340) (0xc0010c9cc0) Stream added, broadcasting: 1
I0207 13:39:32.579475       8 log.go:172] (0xc002033340) Reply frame received for 1
I0207 13:39:32.579500       8 log.go:172] (0xc002033340) (0xc0002388c0) Create stream
I0207 13:39:32.579506       8 log.go:172] (0xc002033340) (0xc0002388c0) Stream added, broadcasting: 3
I0207 13:39:32.581160       8 log.go:172] (0xc002033340) Reply frame received for 3
I0207 13:39:32.581180       8 log.go:172] (0xc002033340) (0xc000a78a00) Create stream
I0207 13:39:32.581189       8 log.go:172] (0xc002033340) (0xc000a78a00) Stream added, broadcasting: 5
I0207 13:39:32.583918       8 log.go:172] (0xc002033340) Reply frame received for 5
I0207 13:39:32.697818       8 log.go:172] (0xc002033340) Data frame received for 3
I0207 13:39:32.697849       8 log.go:172] (0xc0002388c0) (3) Data frame handling
I0207 13:39:32.697862       8 log.go:172] (0xc0002388c0) (3) Data frame sent
I0207 13:39:32.862273       8 log.go:172] (0xc002033340) (0xc0002388c0) Stream removed, broadcasting: 3
I0207 13:39:32.862449       8 log.go:172] (0xc002033340) Data frame received for 1
I0207 13:39:32.862459       8 log.go:172] (0xc0010c9cc0) (1) Data frame handling
I0207 13:39:32.862483       8 log.go:172] (0xc0010c9cc0) (1) Data frame sent
I0207 13:39:32.862536       8 log.go:172] (0xc002033340) (0xc0010c9cc0) Stream removed, broadcasting: 1
I0207 13:39:32.862706       8 log.go:172] (0xc002033340) (0xc000a78a00) Stream removed, broadcasting: 5
I0207 13:39:32.862775       8 log.go:172] (0xc002033340) Go away received
I0207 13:39:32.862942       8 log.go:172] (0xc002033340) (0xc0010c9cc0) Stream removed, broadcasting: 1
I0207 13:39:32.863004       8 log.go:172] (0xc002033340) (0xc0002388c0) Stream removed, broadcasting: 3
I0207 13:39:32.863036       8 log.go:172] (0xc002033340) (0xc000a78a00) Stream removed, broadcasting: 5
Feb  7 13:39:32.863: INFO: Exec stderr: ""
Feb  7 13:39:32.863: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1682 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 13:39:32.863: INFO: >>> kubeConfig: /root/.kube/config
I0207 13:39:32.944020       8 log.go:172] (0xc000aa4dc0) (0xc000a78fa0) Create stream
I0207 13:39:32.944286       8 log.go:172] (0xc000aa4dc0) (0xc000a78fa0) Stream added, broadcasting: 1
I0207 13:39:32.966665       8 log.go:172] (0xc000aa4dc0) Reply frame received for 1
I0207 13:39:32.966805       8 log.go:172] (0xc000aa4dc0) (0xc0010c9d60) Create stream
I0207 13:39:32.966840       8 log.go:172] (0xc000aa4dc0) (0xc0010c9d60) Stream added, broadcasting: 3
I0207 13:39:32.977423       8 log.go:172] (0xc000aa4dc0) Reply frame received for 3
I0207 13:39:32.977704       8 log.go:172] (0xc000aa4dc0) (0xc0029820a0) Create stream
I0207 13:39:32.977790       8 log.go:172] (0xc000aa4dc0) (0xc0029820a0) Stream added, broadcasting: 5
I0207 13:39:32.988884       8 log.go:172] (0xc000aa4dc0) Reply frame received for 5
I0207 13:39:33.129314       8 log.go:172] (0xc000aa4dc0) Data frame received for 3
I0207 13:39:33.129358       8 log.go:172] (0xc0010c9d60) (3) Data frame handling
I0207 13:39:33.129386       8 log.go:172] (0xc0010c9d60) (3) Data frame sent
I0207 13:39:33.294675       8 log.go:172] (0xc000aa4dc0) Data frame received for 1
I0207 13:39:33.294710       8 log.go:172] (0xc000a78fa0) (1) Data frame handling
I0207 13:39:33.294731       8 log.go:172] (0xc000a78fa0) (1) Data frame sent
I0207 13:39:33.295172       8 log.go:172] (0xc000aa4dc0) (0xc000a78fa0) Stream removed, broadcasting: 1
I0207 13:39:33.295573       8 log.go:172] (0xc000aa4dc0) (0xc0010c9d60) Stream removed, broadcasting: 3
I0207 13:39:33.296649       8 log.go:172] (0xc000aa4dc0) (0xc0029820a0) Stream removed, broadcasting: 5
I0207 13:39:33.296864       8 log.go:172] (0xc000aa4dc0) (0xc000a78fa0) Stream removed, broadcasting: 1
I0207 13:39:33.296888       8 log.go:172] (0xc000aa4dc0) (0xc0010c9d60) Stream removed, broadcasting: 3
I0207 13:39:33.296895       8 log.go:172] (0xc000aa4dc0) (0xc0029820a0) Stream removed, broadcasting: 5
Feb  7 13:39:33.296: INFO: Exec stderr: ""
Feb  7 13:39:33.296: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1682 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 13:39:33.296: INFO: >>> kubeConfig: /root/.kube/config
I0207 13:39:33.297039       8 log.go:172] (0xc000aa4dc0) Go away received
I0207 13:39:33.380346       8 log.go:172] (0xc002430bb0) (0xc002982280) Create stream
I0207 13:39:33.380475       8 log.go:172] (0xc002430bb0) (0xc002982280) Stream added, broadcasting: 1
I0207 13:39:33.390314       8 log.go:172] (0xc002430bb0) Reply frame received for 1
I0207 13:39:33.390346       8 log.go:172] (0xc002430bb0) (0xc00284a500) Create stream
I0207 13:39:33.390354       8 log.go:172] (0xc002430bb0) (0xc00284a500) Stream added, broadcasting: 3
I0207 13:39:33.391808       8 log.go:172] (0xc002430bb0) Reply frame received for 3
I0207 13:39:33.391896       8 log.go:172] (0xc002430bb0) (0xc000238be0) Create stream
I0207 13:39:33.391913       8 log.go:172] (0xc002430bb0) (0xc000238be0) Stream added, broadcasting: 5
I0207 13:39:33.393486       8 log.go:172] (0xc002430bb0) Reply frame received for 5
I0207 13:39:33.507293       8 log.go:172] (0xc002430bb0) Data frame received for 3
I0207 13:39:33.507326       8 log.go:172] (0xc00284a500) (3) Data frame handling
I0207 13:39:33.507350       8 log.go:172] (0xc00284a500) (3) Data frame sent
I0207 13:39:33.681057       8 log.go:172] (0xc002430bb0) Data frame received for 1
I0207 13:39:33.681178       8 log.go:172] (0xc002982280) (1) Data frame handling
I0207 13:39:33.681255       8 log.go:172] (0xc002982280) (1) Data frame sent
I0207 13:39:33.681268       8 log.go:172] (0xc002430bb0) (0xc002982280) Stream removed, broadcasting: 1
I0207 13:39:33.682760       8 log.go:172] (0xc002430bb0) (0xc00284a500) Stream removed, broadcasting: 3
I0207 13:39:33.682797       8 log.go:172] (0xc002430bb0) (0xc000238be0) Stream removed, broadcasting: 5
I0207 13:39:33.682829       8 log.go:172] (0xc002430bb0) (0xc002982280) Stream removed, broadcasting: 1
I0207 13:39:33.682840       8 log.go:172] (0xc002430bb0) (0xc00284a500) Stream removed, broadcasting: 3
I0207 13:39:33.682848       8 log.go:172] (0xc002430bb0) (0xc000238be0) Stream removed, broadcasting: 5
I0207 13:39:33.682888       8 log.go:172] (0xc002430bb0) Go away received
Feb  7 13:39:33.683: INFO: Exec stderr: ""
Feb  7 13:39:33.683: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1682 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 13:39:33.683: INFO: >>> kubeConfig: /root/.kube/config
I0207 13:39:33.750370       8 log.go:172] (0xc0026fde40) (0xc000238fa0) Create stream
I0207 13:39:33.750700       8 log.go:172] (0xc0026fde40) (0xc000238fa0) Stream added, broadcasting: 1
I0207 13:39:33.767906       8 log.go:172] (0xc0026fde40) Reply frame received for 1
I0207 13:39:33.768005       8 log.go:172] (0xc0026fde40) (0xc002d5c000) Create stream
I0207 13:39:33.768022       8 log.go:172] (0xc0026fde40) (0xc002d5c000) Stream added, broadcasting: 3
I0207 13:39:33.773115       8 log.go:172] (0xc0026fde40) Reply frame received for 3
I0207 13:39:33.773168       8 log.go:172] (0xc0026fde40) (0xc002982320) Create stream
I0207 13:39:33.773197       8 log.go:172] (0xc0026fde40) (0xc002982320) Stream added, broadcasting: 5
I0207 13:39:33.778260       8 log.go:172] (0xc0026fde40) Reply frame received for 5
I0207 13:39:33.989764       8 log.go:172] (0xc0026fde40) Data frame received for 3
I0207 13:39:33.989816       8 log.go:172] (0xc002d5c000) (3) Data frame handling
I0207 13:39:33.989834       8 log.go:172] (0xc002d5c000) (3) Data frame sent
I0207 13:39:34.206918       8 log.go:172] (0xc0026fde40) (0xc002d5c000) Stream removed, broadcasting: 3
I0207 13:39:34.207104       8 log.go:172] (0xc0026fde40) Data frame received for 1
I0207 13:39:34.207128       8 log.go:172] (0xc0026fde40) (0xc002982320) Stream removed, broadcasting: 5
I0207 13:39:34.207158       8 log.go:172] (0xc000238fa0) (1) Data frame handling
I0207 13:39:34.207170       8 log.go:172] (0xc000238fa0) (1) Data frame sent
I0207 13:39:34.207178       8 log.go:172] (0xc0026fde40) (0xc000238fa0) Stream removed, broadcasting: 1
I0207 13:39:34.207189       8 log.go:172] (0xc0026fde40) Go away received
I0207 13:39:34.207381       8 log.go:172] (0xc0026fde40) (0xc000238fa0) Stream removed, broadcasting: 1
I0207 13:39:34.207395       8 log.go:172] (0xc0026fde40) (0xc002d5c000) Stream removed, broadcasting: 3
I0207 13:39:34.207402       8 log.go:172] (0xc0026fde40) (0xc002982320) Stream removed, broadcasting: 5
Feb  7 13:39:34.207: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb  7 13:39:34.207: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1682 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 13:39:34.207: INFO: >>> kubeConfig: /root/.kube/config
I0207 13:39:34.276886       8 log.go:172] (0xc002d3c420) (0xc002d5c320) Create stream
I0207 13:39:34.276940       8 log.go:172] (0xc002d3c420) (0xc002d5c320) Stream added, broadcasting: 1
I0207 13:39:34.287333       8 log.go:172] (0xc002d3c420) Reply frame received for 1
I0207 13:39:34.287388       8 log.go:172] (0xc002d3c420) (0xc0002390e0) Create stream
I0207 13:39:34.287397       8 log.go:172] (0xc002d3c420) (0xc0002390e0) Stream added, broadcasting: 3
I0207 13:39:34.289410       8 log.go:172] (0xc002d3c420) Reply frame received for 3
I0207 13:39:34.289458       8 log.go:172] (0xc002d3c420) (0xc002d5c3c0) Create stream
I0207 13:39:34.289472       8 log.go:172] (0xc002d3c420) (0xc002d5c3c0) Stream added, broadcasting: 5
I0207 13:39:34.292682       8 log.go:172] (0xc002d3c420) Reply frame received for 5
I0207 13:39:34.398674       8 log.go:172] (0xc002d3c420) Data frame received for 3
I0207 13:39:34.398699       8 log.go:172] (0xc0002390e0) (3) Data frame handling
I0207 13:39:34.398707       8 log.go:172] (0xc0002390e0) (3) Data frame sent
I0207 13:39:34.554761       8 log.go:172] (0xc002d3c420) (0xc0002390e0) Stream removed, broadcasting: 3
I0207 13:39:34.554879       8 log.go:172] (0xc002d3c420) Data frame received for 1
I0207 13:39:34.554893       8 log.go:172] (0xc002d5c320) (1) Data frame handling
I0207 13:39:34.554912       8 log.go:172] (0xc002d5c320) (1) Data frame sent
I0207 13:39:34.554963       8 log.go:172] (0xc002d3c420) (0xc002d5c320) Stream removed, broadcasting: 1
I0207 13:39:34.555135       8 log.go:172] (0xc002d3c420) (0xc002d5c3c0) Stream removed, broadcasting: 5
I0207 13:39:34.555186       8 log.go:172] (0xc002d3c420) Go away received
I0207 13:39:34.555215       8 log.go:172] (0xc002d3c420) (0xc002d5c320) Stream removed, broadcasting: 1
I0207 13:39:34.555238       8 log.go:172] (0xc002d3c420) (0xc0002390e0) Stream removed, broadcasting: 3
I0207 13:39:34.555267       8 log.go:172] (0xc002d3c420) (0xc002d5c3c0) Stream removed, broadcasting: 5
Feb  7 13:39:34.555: INFO: Exec stderr: ""
Feb  7 13:39:34.555: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1682 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 13:39:34.555: INFO: >>> kubeConfig: /root/.kube/config
I0207 13:39:34.617766       8 log.go:172] (0xc002d38c60) (0xc000239400) Create stream
I0207 13:39:34.617821       8 log.go:172] (0xc002d38c60) (0xc000239400) Stream added, broadcasting: 1
I0207 13:39:34.623054       8 log.go:172] (0xc002d38c60) Reply frame received for 1
I0207 13:39:34.623088       8 log.go:172] (0xc002d38c60) (0xc0002394a0) Create stream
I0207 13:39:34.623098       8 log.go:172] (0xc002d38c60) (0xc0002394a0) Stream added, broadcasting: 3
I0207 13:39:34.624828       8 log.go:172] (0xc002d38c60) Reply frame received for 3
I0207 13:39:34.624858       8 log.go:172] (0xc002d38c60) (0xc002d5c5a0) Create stream
I0207 13:39:34.624873       8 log.go:172] (0xc002d38c60) (0xc002d5c5a0) Stream added, broadcasting: 5
I0207 13:39:34.626243       8 log.go:172] (0xc002d38c60) Reply frame received for 5
I0207 13:39:34.688401       8 log.go:172] (0xc002d38c60) Data frame received for 3
I0207 13:39:34.688444       8 log.go:172] (0xc0002394a0) (3) Data frame handling
I0207 13:39:34.688467       8 log.go:172] (0xc0002394a0) (3) Data frame sent
I0207 13:39:34.801567       8 log.go:172] (0xc002d38c60) (0xc0002394a0) Stream removed, broadcasting: 3
I0207 13:39:34.801634       8 log.go:172] (0xc002d38c60) Data frame received for 1
I0207 13:39:34.801652       8 log.go:172] (0xc000239400) (1) Data frame handling
I0207 13:39:34.801664       8 log.go:172] (0xc000239400) (1) Data frame sent
I0207 13:39:34.801681       8 log.go:172] (0xc002d38c60) (0xc002d5c5a0) Stream removed, broadcasting: 5
I0207 13:39:34.801702       8 log.go:172] (0xc002d38c60) (0xc000239400) Stream removed, broadcasting: 1
I0207 13:39:34.801714       8 log.go:172] (0xc002d38c60) Go away received
I0207 13:39:34.801810       8 log.go:172] (0xc002d38c60) (0xc000239400) Stream removed, broadcasting: 1
I0207 13:39:34.801825       8 log.go:172] (0xc002d38c60) (0xc0002394a0) Stream removed, broadcasting: 3
I0207 13:39:34.801844       8 log.go:172] (0xc002d38c60) (0xc002d5c5a0) Stream removed, broadcasting: 5
Feb  7 13:39:34.801: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb  7 13:39:34.801: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1682 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 13:39:34.801: INFO: >>> kubeConfig: /root/.kube/config
I0207 13:39:34.863811       8 log.go:172] (0xc002431ef0) (0xc002982640) Create stream
I0207 13:39:34.863883       8 log.go:172] (0xc002431ef0) (0xc002982640) Stream added, broadcasting: 1
I0207 13:39:34.872612       8 log.go:172] (0xc002431ef0) Reply frame received for 1
I0207 13:39:34.872677       8 log.go:172] (0xc002431ef0) (0xc002d5c640) Create stream
I0207 13:39:34.872685       8 log.go:172] (0xc002431ef0) (0xc002d5c640) Stream added, broadcasting: 3
I0207 13:39:34.874767       8 log.go:172] (0xc002431ef0) Reply frame received for 3
I0207 13:39:34.874839       8 log.go:172] (0xc002431ef0) (0xc0029826e0) Create stream
I0207 13:39:34.874885       8 log.go:172] (0xc002431ef0) (0xc0029826e0) Stream added, broadcasting: 5
I0207 13:39:34.878010       8 log.go:172] (0xc002431ef0) Reply frame received for 5
I0207 13:39:34.956811       8 log.go:172] (0xc002431ef0) Data frame received for 3
I0207 13:39:34.956861       8 log.go:172] (0xc002d5c640) (3) Data frame handling
I0207 13:39:34.956870       8 log.go:172] (0xc002d5c640) (3) Data frame sent
I0207 13:39:35.078915       8 log.go:172] (0xc002431ef0) Data frame received for 1
I0207 13:39:35.079076       8 log.go:172] (0xc002982640) (1) Data frame handling
I0207 13:39:35.079094       8 log.go:172] (0xc002982640) (1) Data frame sent
I0207 13:39:35.079693       8 log.go:172] (0xc002431ef0) (0xc002982640) Stream removed, broadcasting: 1
I0207 13:39:35.079984       8 log.go:172] (0xc002431ef0) (0xc0029826e0) Stream removed, broadcasting: 5
I0207 13:39:35.080029       8 log.go:172] (0xc002431ef0) (0xc002d5c640) Stream removed, broadcasting: 3
I0207 13:39:35.080057       8 log.go:172] (0xc002431ef0) (0xc002982640) Stream removed, broadcasting: 1
I0207 13:39:35.080065       8 log.go:172] (0xc002431ef0) (0xc002d5c640) Stream removed, broadcasting: 3
I0207 13:39:35.080072       8 log.go:172] (0xc002431ef0) (0xc0029826e0) Stream removed, broadcasting: 5
Feb  7 13:39:35.080: INFO: Exec stderr: ""
Feb  7 13:39:35.080: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1682 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 13:39:35.080: INFO: >>> kubeConfig: /root/.kube/config
I0207 13:39:35.122511       8 log.go:172] (0xc003148f20) (0xc0029828c0) Create stream
I0207 13:39:35.122567       8 log.go:172] (0xc003148f20) (0xc0029828c0) Stream added, broadcasting: 1
I0207 13:39:35.134603       8 log.go:172] (0xc003148f20) Reply frame received for 1
I0207 13:39:35.134864       8 log.go:172] (0xc003148f20) (0xc00352e000) Create stream
I0207 13:39:35.134909       8 log.go:172] (0xc003148f20) (0xc00352e000) Stream added, broadcasting: 3
I0207 13:39:35.138291       8 log.go:172] (0xc003148f20) Reply frame received for 3
I0207 13:39:35.138322       8 log.go:172] (0xc003148f20) (0xc001120000) Create stream
I0207 13:39:35.138330       8 log.go:172] (0xc003148f20) (0xc001120000) Stream added, broadcasting: 5
I0207 13:39:35.139468       8 log.go:172] (0xc003148f20) Reply frame received for 5
I0207 13:39:35.334161       8 log.go:172] (0xc003148f20) Data frame received for 3
I0207 13:39:35.334352       8 log.go:172] (0xc00352e000) (3) Data frame handling
I0207 13:39:35.334448       8 log.go:172] (0xc00352e000) (3) Data frame sent
I0207 13:39:35.468090       8 log.go:172] (0xc003148f20) (0xc00352e000) Stream removed, broadcasting: 3
I0207 13:39:35.468206       8 log.go:172] (0xc003148f20) Data frame received for 1
I0207 13:39:35.468248       8 log.go:172] (0xc003148f20) (0xc001120000) Stream removed, broadcasting: 5
I0207 13:39:35.468360       8 log.go:172] (0xc0029828c0) (1) Data frame handling
I0207 13:39:35.468392       8 log.go:172] (0xc0029828c0) (1) Data frame sent
I0207 13:39:35.468411       8 log.go:172] (0xc003148f20) (0xc0029828c0) Stream removed, broadcasting: 1
I0207 13:39:35.468426       8 log.go:172] (0xc003148f20) Go away received
I0207 13:39:35.468536       8 log.go:172] (0xc003148f20) (0xc0029828c0) Stream removed, broadcasting: 1
I0207 13:39:35.468548       8 log.go:172] (0xc003148f20) (0xc00352e000) Stream removed, broadcasting: 3
I0207 13:39:35.468560       8 log.go:172] (0xc003148f20) (0xc001120000) Stream removed, broadcasting: 5
Feb  7 13:39:35.468: INFO: Exec stderr: ""
Feb  7 13:39:35.468: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1682 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 13:39:35.468: INFO: >>> kubeConfig: /root/.kube/config
I0207 13:39:35.526093       8 log.go:172] (0xc00352a6e0) (0xc000fe0460) Create stream
I0207 13:39:35.526142       8 log.go:172] (0xc00352a6e0) (0xc000fe0460) Stream added, broadcasting: 1
I0207 13:39:35.532521       8 log.go:172] (0xc00352a6e0) Reply frame received for 1
I0207 13:39:35.532564       8 log.go:172] (0xc00352a6e0) (0xc0011201e0) Create stream
I0207 13:39:35.532578       8 log.go:172] (0xc00352a6e0) (0xc0011201e0) Stream added, broadcasting: 3
I0207 13:39:35.533800       8 log.go:172] (0xc00352a6e0) Reply frame received for 3
I0207 13:39:35.533827       8 log.go:172] (0xc00352a6e0) (0xc0013840a0) Create stream
I0207 13:39:35.533836       8 log.go:172] (0xc00352a6e0) (0xc0013840a0) Stream added, broadcasting: 5
I0207 13:39:35.534787       8 log.go:172] (0xc00352a6e0) Reply frame received for 5
I0207 13:39:35.613562       8 log.go:172] (0xc00352a6e0) Data frame received for 3
I0207 13:39:35.613597       8 log.go:172] (0xc0011201e0) (3) Data frame handling
I0207 13:39:35.613614       8 log.go:172] (0xc0011201e0) (3) Data frame sent
I0207 13:39:35.747217       8 log.go:172] (0xc00352a6e0) Data frame received for 1
I0207 13:39:35.747330       8 log.go:172] (0xc000fe0460) (1) Data frame handling
I0207 13:39:35.747356       8 log.go:172] (0xc000fe0460) (1) Data frame sent
I0207 13:39:35.747375       8 log.go:172] (0xc00352a6e0) (0xc000fe0460) Stream removed, broadcasting: 1
I0207 13:39:35.747422       8 log.go:172] (0xc00352a6e0) (0xc0011201e0) Stream removed, broadcasting: 3
I0207 13:39:35.747519       8 log.go:172] (0xc00352a6e0) (0xc0013840a0) Stream removed, broadcasting: 5
I0207 13:39:35.747570       8 log.go:172] (0xc00352a6e0) Go away received
I0207 13:39:35.747594       8 log.go:172] (0xc00352a6e0) (0xc000fe0460) Stream removed, broadcasting: 1
I0207 13:39:35.747616       8 log.go:172] (0xc00352a6e0) (0xc0011201e0) Stream removed, broadcasting: 3
I0207 13:39:35.747631       8 log.go:172] (0xc00352a6e0) (0xc0013840a0) Stream removed, broadcasting: 5
Feb  7 13:39:35.747: INFO: Exec stderr: ""
Feb  7 13:39:35.747: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1682 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 13:39:35.747: INFO: >>> kubeConfig: /root/.kube/config
I0207 13:39:35.843342       8 log.go:172] (0xc0005ce9a0) (0xc0013845a0) Create stream
I0207 13:39:35.843376       8 log.go:172] (0xc0005ce9a0) (0xc0013845a0) Stream added, broadcasting: 1
I0207 13:39:35.848224       8 log.go:172] (0xc0005ce9a0) Reply frame received for 1
I0207 13:39:35.848270       8 log.go:172] (0xc0005ce9a0) (0xc0011203c0) Create stream
I0207 13:39:35.848280       8 log.go:172] (0xc0005ce9a0) (0xc0011203c0) Stream added, broadcasting: 3
I0207 13:39:35.849938       8 log.go:172] (0xc0005ce9a0) Reply frame received for 3
I0207 13:39:35.849970       8 log.go:172] (0xc0005ce9a0) (0xc00031a0a0) Create stream
I0207 13:39:35.849979       8 log.go:172] (0xc0005ce9a0) (0xc00031a0a0) Stream added, broadcasting: 5
I0207 13:39:35.855603       8 log.go:172] (0xc0005ce9a0) Reply frame received for 5
I0207 13:39:35.991735       8 log.go:172] (0xc0005ce9a0) Data frame received for 3
I0207 13:39:35.991863       8 log.go:172] (0xc0011203c0) (3) Data frame handling
I0207 13:39:35.991889       8 log.go:172] (0xc0011203c0) (3) Data frame sent
I0207 13:39:36.158038       8 log.go:172] (0xc0005ce9a0) (0xc0011203c0) Stream removed, broadcasting: 3
I0207 13:39:36.158169       8 log.go:172] (0xc0005ce9a0) Data frame received for 1
I0207 13:39:36.158207       8 log.go:172] (0xc0013845a0) (1) Data frame handling
I0207 13:39:36.158229       8 log.go:172] (0xc0013845a0) (1) Data frame sent
I0207 13:39:36.158275       8 log.go:172] (0xc0005ce9a0) (0xc00031a0a0) Stream removed, broadcasting: 5
I0207 13:39:36.158326       8 log.go:172] (0xc0005ce9a0) (0xc0013845a0) Stream removed, broadcasting: 1
I0207 13:39:36.158377       8 log.go:172] (0xc0005ce9a0) Go away received
I0207 13:39:36.158579       8 log.go:172] (0xc0005ce9a0) (0xc0013845a0) Stream removed, broadcasting: 1
I0207 13:39:36.158608       8 log.go:172] (0xc0005ce9a0) (0xc0011203c0) Stream removed, broadcasting: 3
I0207 13:39:36.158620       8 log.go:172] (0xc0005ce9a0) (0xc00031a0a0) Stream removed, broadcasting: 5
Feb  7 13:39:36.158: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:39:36.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-1682" for this suite.
Feb  7 13:40:22.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:40:22.312: INFO: namespace e2e-kubelet-etc-hosts-1682 deletion completed in 46.144437624s

• [SLOW TEST:69.999 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
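
The rule this test exercises: the kubelet rewrites /etc/hosts for every container in a pod unless the pod runs with hostNetwork=true or the container explicitly mounts something over /etc/hosts. A sketch of the pod shapes involved, reusing busybox:1.29 from the log; the hostPath volume choice and all helper/container names are illustrative assumptions:

    package etchostssketch

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // busybox returns a long-running container; when mountEtcHosts is set it
    // mounts the "hosts" volume over /etc/hosts, which tells the kubelet to
    // leave that container's hosts file alone even in a non-hostNetwork pod.
    func busybox(name string, mountEtcHosts bool) v1.Container {
        c := v1.Container{
            Name:    name,
            Image:   "docker.io/library/busybox:1.29",
            Command: []string{"sleep", "3600"},
        }
        if mountEtcHosts {
            c.VolumeMounts = []v1.VolumeMount{{Name: "hosts", MountPath: "/etc/hosts"}}
        }
        return c
    }

    // testPod builds the two shapes checked above: with hostNetwork=false the
    // kubelet manages /etc/hosts, with hostNetwork=true the node's file shows
    // through untouched.
    func testPod(name string, hostNetwork bool) *v1.Pod {
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: v1.PodSpec{
                HostNetwork: hostNetwork,
                Containers:  []v1.Container{busybox("busybox-1", false), busybox("busybox-3", true)},
                Volumes: []v1.Volume{{
                    Name: "hosts",
                    VolumeSource: v1.VolumeSource{
                        HostPath: &v1.HostPathVolumeSource{Path: "/etc/hosts"},
                    },
                }},
            },
        }
    }
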
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:40:22.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-52xq
STEP: Creating a pod to test atomic-volume-subpath
Feb  7 13:40:22.482: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-52xq" in namespace "subpath-6621" to be "success or failure"
Feb  7 13:40:22.489: INFO: Pod "pod-subpath-test-secret-52xq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.632189ms
Feb  7 13:40:24.502: INFO: Pod "pod-subpath-test-secret-52xq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019858642s
Feb  7 13:40:26.515: INFO: Pod "pod-subpath-test-secret-52xq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03322647s
Feb  7 13:40:28.526: INFO: Pod "pod-subpath-test-secret-52xq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044470671s
Feb  7 13:40:30.539: INFO: Pod "pod-subpath-test-secret-52xq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057150265s
Feb  7 13:40:32.552: INFO: Pod "pod-subpath-test-secret-52xq": Phase="Running", Reason="", readiness=true. Elapsed: 10.070419004s
Feb  7 13:40:34.563: INFO: Pod "pod-subpath-test-secret-52xq": Phase="Running", Reason="", readiness=true. Elapsed: 12.080528595s
Feb  7 13:40:36.577: INFO: Pod "pod-subpath-test-secret-52xq": Phase="Running", Reason="", readiness=true. Elapsed: 14.094853645s
Feb  7 13:40:38.583: INFO: Pod "pod-subpath-test-secret-52xq": Phase="Running", Reason="", readiness=true. Elapsed: 16.101465724s
Feb  7 13:40:40.595: INFO: Pod "pod-subpath-test-secret-52xq": Phase="Running", Reason="", readiness=true. Elapsed: 18.113034829s
Feb  7 13:40:42.609: INFO: Pod "pod-subpath-test-secret-52xq": Phase="Running", Reason="", readiness=true. Elapsed: 20.126875337s
Feb  7 13:40:44.616: INFO: Pod "pod-subpath-test-secret-52xq": Phase="Running", Reason="", readiness=true. Elapsed: 22.134291011s
Feb  7 13:40:46.622: INFO: Pod "pod-subpath-test-secret-52xq": Phase="Running", Reason="", readiness=true. Elapsed: 24.140320105s
Feb  7 13:40:48.632: INFO: Pod "pod-subpath-test-secret-52xq": Phase="Running", Reason="", readiness=true. Elapsed: 26.150004829s
Feb  7 13:40:50.649: INFO: Pod "pod-subpath-test-secret-52xq": Phase="Running", Reason="", readiness=true. Elapsed: 28.167144373s
Feb  7 13:40:53.096: INFO: Pod "pod-subpath-test-secret-52xq": Phase="Running", Reason="", readiness=true. Elapsed: 30.613972955s
Feb  7 13:40:55.102: INFO: Pod "pod-subpath-test-secret-52xq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.620338834s
STEP: Saw pod success
Feb  7 13:40:55.102: INFO: Pod "pod-subpath-test-secret-52xq" satisfied condition "success or failure"
Feb  7 13:40:55.106: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-52xq container test-container-subpath-secret-52xq: 
STEP: delete the pod
Feb  7 13:40:55.267: INFO: Waiting for pod pod-subpath-test-secret-52xq to disappear
Feb  7 13:40:55.274: INFO: Pod pod-subpath-test-secret-52xq no longer exists
STEP: Deleting pod pod-subpath-test-secret-52xq
Feb  7 13:40:55.274: INFO: Deleting pod "pod-subpath-test-secret-52xq" in namespace "subpath-6621"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:40:55.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6621" for this suite.
Feb  7 13:41:01.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:41:01.450: INFO: namespace subpath-6621 deletion completed in 6.167784064s

• [SLOW TEST:39.137 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
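
The atomic-writer test above mounts a single key of a secret through subPath. A sketch of a pod in that shape, assuming busybox:1.29 from this run; the secret name, key, and paths are illustrative:

    package subpathsketch

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // subpathSecretPod mounts only the "my-key" entry of the secret volume at
    // a fixed file path via SubPath, then reads it back once and exits, so
    // the pod can be waited on for "success or failure" as above.
    func subpathSecretPod(podName, secretName string) *v1.Pod {
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: podName},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                Containers: []v1.Container{{
                    Name:    "test-container-subpath",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"cat", "/test-volume/my-key"},
                    VolumeMounts: []v1.VolumeMount{{
                        Name:      "test-volume",
                        MountPath: "/test-volume/my-key",
                        SubPath:   "my-key",
                    }},
                }},
                Volumes: []v1.Volume{{
                    Name: "test-volume",
                    VolumeSource: v1.VolumeSource{
                        Secret: &v1.SecretVolumeSource{SecretName: secretName},
                    },
                }},
            },
        }
    }
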
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:41:01.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  7 13:41:01.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6104'
Feb  7 13:41:01.926: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  7 13:41:01.926: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Feb  7 13:41:01.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-6104'
Feb  7 13:41:02.157: INFO: stderr: ""
Feb  7 13:41:02.157: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:41:02.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6104" for this suite.
Feb  7 13:41:08.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:41:08.371: INFO: namespace kubectl-6104 deletion completed in 6.195963383s

• [SLOW TEST:6.922 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
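
Since the stderr above flags `kubectl run --generator=job/v1` as deprecated, the equivalent object can be created directly. A client-go sketch of roughly what that generator expands to for the command in this test; the function name is illustrative, the job name and image come from the log:

    package jobsketch

    import (
        batchv1 "k8s.io/api/batch/v1"
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // createNginxJob builds a Job whose pod template restarts OnFailure,
    // mirroring `kubectl run e2e-test-nginx-job --restart=OnFailure
    // --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine`.
    func createNginxJob(c kubernetes.Interface, ns string) (*batchv1.Job, error) {
        job := &batchv1.Job{
            ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-job"},
            Spec: batchv1.JobSpec{
                Template: v1.PodTemplateSpec{
                    Spec: v1.PodSpec{
                        RestartPolicy: v1.RestartPolicyOnFailure,
                        Containers: []v1.Container{{
                            Name:  "e2e-test-nginx-job",
                            Image: "docker.io/library/nginx:1.14-alpine",
                        }},
                    },
                },
            },
        }
        return c.BatchV1().Jobs(ns).Create(job)
    }
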
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:41:08.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  7 13:41:08.531: INFO: Waiting up to 5m0s for pod "downwardapi-volume-947ccdce-894f-4296-9300-34e56692dbd7" in namespace "projected-2255" to be "success or failure"
Feb  7 13:41:08.546: INFO: Pod "downwardapi-volume-947ccdce-894f-4296-9300-34e56692dbd7": Phase="Pending", Reason="", readiness=false. Elapsed: 15.161291ms
Feb  7 13:41:10.563: INFO: Pod "downwardapi-volume-947ccdce-894f-4296-9300-34e56692dbd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031764565s
Feb  7 13:41:12.575: INFO: Pod "downwardapi-volume-947ccdce-894f-4296-9300-34e56692dbd7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044290007s
Feb  7 13:41:14.593: INFO: Pod "downwardapi-volume-947ccdce-894f-4296-9300-34e56692dbd7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061805888s
Feb  7 13:41:16.608: INFO: Pod "downwardapi-volume-947ccdce-894f-4296-9300-34e56692dbd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077137685s
STEP: Saw pod success
Feb  7 13:41:16.608: INFO: Pod "downwardapi-volume-947ccdce-894f-4296-9300-34e56692dbd7" satisfied condition "success or failure"
Feb  7 13:41:16.652: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-947ccdce-894f-4296-9300-34e56692dbd7 container client-container: 
STEP: delete the pod
Feb  7 13:41:16.744: INFO: Waiting for pod downwardapi-volume-947ccdce-894f-4296-9300-34e56692dbd7 to disappear
Feb  7 13:41:16.749: INFO: Pod downwardapi-volume-947ccdce-894f-4296-9300-34e56692dbd7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:41:16.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2255" for this suite.
Feb  7 13:41:22.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:41:22.979: INFO: namespace projected-2255 deletion completed in 6.195898702s

• [SLOW TEST:14.607 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
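
The downward API volume above projects limits.memory into a file; when the container declares no memory limit, the kubelet substitutes the node's allocatable memory, which is what the test asserts. A sketch of a pod in that shape; the file path and names are illustrative:

    package downwardsketch

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // memoryLimitPod projects the container's limits.memory into
    // /etc/podinfo/memory_limit via a projected downward API volume. With no
    // memory limit set on the container, the file holds node allocatable.
    func memoryLimitPod(name string) *v1.Pod {
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                Containers: []v1.Container{{
                    Name:         "client-container",
                    Image:        "docker.io/library/busybox:1.29",
                    Command:      []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
                    VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
                Volumes: []v1.Volume{{
                    Name: "podinfo",
                    VolumeSource: v1.VolumeSource{
                        Projected: &v1.ProjectedVolumeSource{
                            Sources: []v1.VolumeProjection{{
                                DownwardAPI: &v1.DownwardAPIProjection{
                                    Items: []v1.DownwardAPIVolumeFile{{
                                        Path: "memory_limit",
                                        ResourceFieldRef: &v1.ResourceFieldSelector{
                                            ContainerName: "client-container",
                                            Resource:      "limits.memory",
                                        },
                                    }},
                                },
                            }},
                        },
                    },
                }},
            },
        }
    }
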
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:41:22.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Feb  7 13:41:23.137: INFO: Waiting up to 5m0s for pod "client-containers-4f7d08b8-c804-4e56-a378-fc08134766b3" in namespace "containers-4369" to be "success or failure"
Feb  7 13:41:23.169: INFO: Pod "client-containers-4f7d08b8-c804-4e56-a378-fc08134766b3": Phase="Pending", Reason="", readiness=false. Elapsed: 32.31358ms
Feb  7 13:41:25.179: INFO: Pod "client-containers-4f7d08b8-c804-4e56-a378-fc08134766b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041708953s
Feb  7 13:41:27.221: INFO: Pod "client-containers-4f7d08b8-c804-4e56-a378-fc08134766b3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084148138s
Feb  7 13:41:29.227: INFO: Pod "client-containers-4f7d08b8-c804-4e56-a378-fc08134766b3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089946052s
Feb  7 13:41:31.235: INFO: Pod "client-containers-4f7d08b8-c804-4e56-a378-fc08134766b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.098284455s
STEP: Saw pod success
Feb  7 13:41:31.235: INFO: Pod "client-containers-4f7d08b8-c804-4e56-a378-fc08134766b3" satisfied condition "success or failure"
Feb  7 13:41:31.239: INFO: Trying to get logs from node iruya-node pod client-containers-4f7d08b8-c804-4e56-a378-fc08134766b3 container test-container: 
STEP: delete the pod
Feb  7 13:41:31.308: INFO: Waiting for pod client-containers-4f7d08b8-c804-4e56-a378-fc08134766b3 to disappear
Feb  7 13:41:31.325: INFO: Pod client-containers-4f7d08b8-c804-4e56-a378-fc08134766b3 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:41:31.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4369" for this suite.
Feb  7 13:41:37.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:41:37.530: INFO: namespace containers-4369 deletion completed in 6.199920053s

• [SLOW TEST:14.551 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
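
The "image defaults" case above is simply a container with Command and Args left unset, so the runtime falls back to the ENTRYPOINT and CMD baked into the image. A sketch, with all names illustrative:

    package defaultssketch

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // imageDefaultsPod deliberately leaves Command and Args empty; whatever
    // the image's ENTRYPOINT/CMD produce is what the test reads from logs.
    func imageDefaultsPod(name, image string) *v1.Pod {
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                Containers: []v1.Container{{
                    Name:  "test-container",
                    Image: image, // no Command, no Args: image defaults apply
                }},
            },
        }
    }
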
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:41:37.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-448e79c2-6649-4283-90a2-f7fa1254d253
STEP: Creating a pod to test consume configMaps
Feb  7 13:41:37.674: INFO: Waiting up to 5m0s for pod "pod-configmaps-014c28f5-ba24-489c-b1b5-8473fdf0e0ce" in namespace "configmap-864" to be "success or failure"
Feb  7 13:41:37.712: INFO: Pod "pod-configmaps-014c28f5-ba24-489c-b1b5-8473fdf0e0ce": Phase="Pending", Reason="", readiness=false. Elapsed: 37.785499ms
Feb  7 13:41:39.720: INFO: Pod "pod-configmaps-014c28f5-ba24-489c-b1b5-8473fdf0e0ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046295938s
Feb  7 13:41:41.731: INFO: Pod "pod-configmaps-014c28f5-ba24-489c-b1b5-8473fdf0e0ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056747157s
Feb  7 13:41:43.746: INFO: Pod "pod-configmaps-014c28f5-ba24-489c-b1b5-8473fdf0e0ce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072186075s
Feb  7 13:41:45.755: INFO: Pod "pod-configmaps-014c28f5-ba24-489c-b1b5-8473fdf0e0ce": Phase="Pending", Reason="", readiness=false. Elapsed: 8.080732314s
Feb  7 13:41:47.763: INFO: Pod "pod-configmaps-014c28f5-ba24-489c-b1b5-8473fdf0e0ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.089209638s
STEP: Saw pod success
Feb  7 13:41:47.763: INFO: Pod "pod-configmaps-014c28f5-ba24-489c-b1b5-8473fdf0e0ce" satisfied condition "success or failure"
Feb  7 13:41:47.767: INFO: Trying to get logs from node iruya-node pod pod-configmaps-014c28f5-ba24-489c-b1b5-8473fdf0e0ce container configmap-volume-test: 
STEP: delete the pod
Feb  7 13:41:47.902: INFO: Waiting for pod pod-configmaps-014c28f5-ba24-489c-b1b5-8473fdf0e0ce to disappear
Feb  7 13:41:47.907: INFO: Pod pod-configmaps-014c28f5-ba24-489c-b1b5-8473fdf0e0ce no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:41:47.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-864" for this suite.
Feb  7 13:41:53.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:41:54.110: INFO: namespace configmap-864 deletion completed in 6.196287593s

• [SLOW TEST:16.579 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
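
A sketch of the configMap-volume pod shape used above, assuming busybox:1.29; the key name and mount path are illustrative:

    package configmapsketch

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // configMapVolumePod mounts an existing ConfigMap as a volume and reads
    // one key back, mirroring the "consumable from pods in volume" flow.
    func configMapVolumePod(podName, configMapName string) *v1.Pod {
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: podName},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                Containers: []v1.Container{{
                    Name:         "configmap-volume-test",
                    Image:        "docker.io/library/busybox:1.29",
                    Command:      []string{"sh", "-c", "cat /etc/configmap-volume/data-1"},
                    VolumeMounts: []v1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
                }},
                Volumes: []v1.Volume{{
                    Name: "configmap-volume",
                    VolumeSource: v1.VolumeSource{
                        ConfigMap: &v1.ConfigMapVolumeSource{
                            LocalObjectReference: v1.LocalObjectReference{Name: configMapName},
                        },
                    },
                }},
            },
        }
    }
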
SSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:41:54.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb  7 13:41:54.156: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:42:07.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4942" for this suite.
Feb  7 13:42:13.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:42:13.629: INFO: namespace init-container-4942 deletion completed in 6.300151171s

• [SLOW TEST:19.519 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
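
What the init-container test above asserts: with RestartPolicy=Never, a failing init container is not retried, the app containers never start, and the pod ends up Failed. A pod sketch in that shape; the container names are illustrative:

    package initsketch

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // failingInitPod has an init container that exits non-zero. Under
    // RestartPolicy Never the kubelet must not retry it, must not start
    // "run1", and must mark the pod Failed.
    func failingInitPod(name string) *v1.Pod {
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                InitContainers: []v1.Container{{
                    Name:    "init1",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"/bin/false"},
                }},
                Containers: []v1.Container{{
                    Name:    "run1",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"/bin/true"},
                }},
            },
        }
    }
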
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:42:13.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-8876
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  7 13:42:13.769: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  7 13:42:52.231: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-8876 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 13:42:52.231: INFO: >>> kubeConfig: /root/.kube/config
I0207 13:42:52.303580       8 log.go:172] (0xc001c4bd90) (0xc001518460) Create stream
I0207 13:42:52.303634       8 log.go:172] (0xc001c4bd90) (0xc001518460) Stream added, broadcasting: 1
I0207 13:42:52.316346       8 log.go:172] (0xc001c4bd90) Reply frame received for 1
I0207 13:42:52.316379       8 log.go:172] (0xc001c4bd90) (0xc001518500) Create stream
I0207 13:42:52.316385       8 log.go:172] (0xc001c4bd90) (0xc001518500) Stream added, broadcasting: 3
I0207 13:42:52.317909       8 log.go:172] (0xc001c4bd90) Reply frame received for 3
I0207 13:42:52.317976       8 log.go:172] (0xc001c4bd90) (0xc00129e5a0) Create stream
I0207 13:42:52.318011       8 log.go:172] (0xc001c4bd90) (0xc00129e5a0) Stream added, broadcasting: 5
I0207 13:42:52.320293       8 log.go:172] (0xc001c4bd90) Reply frame received for 5
I0207 13:42:52.471496       8 log.go:172] (0xc001c4bd90) Data frame received for 3
I0207 13:42:52.471767       8 log.go:172] (0xc001518500) (3) Data frame handling
I0207 13:42:52.471854       8 log.go:172] (0xc001518500) (3) Data frame sent
I0207 13:42:52.719205       8 log.go:172] (0xc001c4bd90) Data frame received for 1
I0207 13:42:52.719464       8 log.go:172] (0xc001518460) (1) Data frame handling
I0207 13:42:52.719506       8 log.go:172] (0xc001518460) (1) Data frame sent
I0207 13:42:52.719540       8 log.go:172] (0xc001c4bd90) (0xc001518460) Stream removed, broadcasting: 1
I0207 13:42:52.719741       8 log.go:172] (0xc001c4bd90) (0xc001518500) Stream removed, broadcasting: 3
I0207 13:42:52.719800       8 log.go:172] (0xc001c4bd90) (0xc00129e5a0) Stream removed, broadcasting: 5
I0207 13:42:52.719850       8 log.go:172] (0xc001c4bd90) (0xc001518460) Stream removed, broadcasting: 1
I0207 13:42:52.719883       8 log.go:172] (0xc001c4bd90) (0xc001518500) Stream removed, broadcasting: 3
I0207 13:42:52.719909       8 log.go:172] (0xc001c4bd90) (0xc00129e5a0) Stream removed, broadcasting: 5
Feb  7 13:42:52.720: INFO: Waiting for endpoints: map[]
I0207 13:42:52.720645       8 log.go:172] (0xc001c4bd90) Go away received
Feb  7 13:42:52.728: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-8876 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 13:42:52.729: INFO: >>> kubeConfig: /root/.kube/config
I0207 13:42:52.849319       8 log.go:172] (0xc00263edc0) (0xc001519040) Create stream
I0207 13:42:52.849442       8 log.go:172] (0xc00263edc0) (0xc001519040) Stream added, broadcasting: 1
I0207 13:42:52.866872       8 log.go:172] (0xc00263edc0) Reply frame received for 1
I0207 13:42:52.866959       8 log.go:172] (0xc00263edc0) (0xc0022c05a0) Create stream
I0207 13:42:52.866966       8 log.go:172] (0xc00263edc0) (0xc0022c05a0) Stream added, broadcasting: 3
I0207 13:42:52.870170       8 log.go:172] (0xc00263edc0) Reply frame received for 3
I0207 13:42:52.870199       8 log.go:172] (0xc00263edc0) (0xc00129e820) Create stream
I0207 13:42:52.870216       8 log.go:172] (0xc00263edc0) (0xc00129e820) Stream added, broadcasting: 5
I0207 13:42:52.878339       8 log.go:172] (0xc00263edc0) Reply frame received for 5
I0207 13:42:53.006877       8 log.go:172] (0xc00263edc0) Data frame received for 3
I0207 13:42:53.006937       8 log.go:172] (0xc0022c05a0) (3) Data frame handling
I0207 13:42:53.006961       8 log.go:172] (0xc0022c05a0) (3) Data frame sent
I0207 13:42:53.168100       8 log.go:172] (0xc00263edc0) Data frame received for 1
I0207 13:42:53.168152       8 log.go:172] (0xc001519040) (1) Data frame handling
I0207 13:42:53.168174       8 log.go:172] (0xc001519040) (1) Data frame sent
I0207 13:42:53.168190       8 log.go:172] (0xc00263edc0) (0xc0022c05a0) Stream removed, broadcasting: 3
I0207 13:42:53.168242       8 log.go:172] (0xc00263edc0) (0xc00129e820) Stream removed, broadcasting: 5
I0207 13:42:53.168265       8 log.go:172] (0xc00263edc0) (0xc001519040) Stream removed, broadcasting: 1
I0207 13:42:53.168278       8 log.go:172] (0xc00263edc0) Go away received
I0207 13:42:53.168330       8 log.go:172] (0xc00263edc0) (0xc001519040) Stream removed, broadcasting: 1
I0207 13:42:53.168351       8 log.go:172] (0xc00263edc0) (0xc0022c05a0) Stream removed, broadcasting: 3
I0207 13:42:53.168361       8 log.go:172] (0xc00263edc0) (0xc00129e820) Stream removed, broadcasting: 5
Feb  7 13:42:53.168: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:42:53.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8876" for this suite.
Feb  7 13:43:17.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:43:17.318: INFO: namespace pod-network-test-8876 deletion completed in 24.142475452s

• [SLOW TEST:63.687 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
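
The curl commands above hit a /dial endpoint served by the host-test pod's webserver, which sends a UDP probe to the target pod and echoes back the hostname that answered. A Go sketch of the same probe, using the URL format quoted verbatim in this log; the function name is illustrative:

    package netsketch

    import (
        "fmt"
        "io/ioutil"
        "net/http"
    )

    // dialUDP asks the webserver at proxyIP:8080 (the host-test container
    // pod above) to send one UDP request to targetIP:8081 and report which
    // pod hostname replied.
    func dialUDP(proxyIP, targetIP string) (string, error) {
        url := fmt.Sprintf(
            "http://%s:8080/dial?request=hostName&protocol=udp&host=%s&port=8081&tries=1",
            proxyIP, targetIP)
        resp, err := http.Get(url)
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        body, err := ioutil.ReadAll(resp.Body)
        return string(body), err
    }
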
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:43:17.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0207 13:43:59.554366       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  7 13:43:59.554: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:43:59.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3475" for this suite.
Feb  7 13:44:09.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:44:09.755: INFO: namespace gc-3475 deletion completed in 10.195278488s

• [SLOW TEST:52.437 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
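
Orphaning is the mirror image of the foreground case earlier in this run: deleting the RC with PropagationPolicy=Orphan tells the garbage collector to strip ownerReferences instead of deleting the pods, which is why the test waits 30 seconds and checks the pods survive. Sketch, with illustrative names:

    package orphansketch

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // deleteRCOrphan removes the RC but leaves its pods running as orphans;
    // the GC only clears their ownerReferences.
    func deleteRCOrphan(c kubernetes.Interface, ns, name string) error {
        policy := metav1.DeletePropagationOrphan
        return c.CoreV1().ReplicationControllers(ns).Delete(name, &metav1.DeleteOptions{
            PropagationPolicy: &policy,
        })
    }
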
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:44:09.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb  7 13:44:11.583: INFO: Pod name pod-release: Found 0 pods out of 1
Feb  7 13:44:16.601: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:44:16.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8736" for this suite.
Feb  7 13:44:22.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:44:22.991: INFO: namespace replication-controller-8736 deletion completed in 6.192418056s

• [SLOW TEST:13.235 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
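"Released" above means the pod's labels stopped matching the controller's selector, so the RC disowns it and creates a replacement to restore the replica count. A sketch, assuming an RC selecting name=pod-release and a pod named pod-release-xxxxx (both hypothetical):

# Move the pod out of the RC's selector; the RC releases it and creates a replacement.
kubectl label pod pod-release-xxxxx name=released --overwrite
# Expect both the released pod and the RC's fresh replacement to be listed.
kubectl get pods -L name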
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:44:22.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-aa18e8d8-4793-4651-99a0-a49347a94148
STEP: Creating a pod to test consume secrets
Feb  7 13:44:23.312: INFO: Waiting up to 5m0s for pod "pod-secrets-2bd700b7-7117-468b-bb02-8153e90f2f20" in namespace "secrets-9757" to be "success or failure"
Feb  7 13:44:23.362: INFO: Pod "pod-secrets-2bd700b7-7117-468b-bb02-8153e90f2f20": Phase="Pending", Reason="", readiness=false. Elapsed: 49.293008ms
Feb  7 13:44:25.379: INFO: Pod "pod-secrets-2bd700b7-7117-468b-bb02-8153e90f2f20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067022774s
Feb  7 13:44:27.387: INFO: Pod "pod-secrets-2bd700b7-7117-468b-bb02-8153e90f2f20": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074461223s
Feb  7 13:44:29.403: INFO: Pod "pod-secrets-2bd700b7-7117-468b-bb02-8153e90f2f20": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090743432s
Feb  7 13:44:31.411: INFO: Pod "pod-secrets-2bd700b7-7117-468b-bb02-8153e90f2f20": Phase="Pending", Reason="", readiness=false. Elapsed: 8.098077752s
Feb  7 13:44:33.418: INFO: Pod "pod-secrets-2bd700b7-7117-468b-bb02-8153e90f2f20": Phase="Pending", Reason="", readiness=false. Elapsed: 10.105369491s
Feb  7 13:44:35.423: INFO: Pod "pod-secrets-2bd700b7-7117-468b-bb02-8153e90f2f20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.110466801s
STEP: Saw pod success
Feb  7 13:44:35.423: INFO: Pod "pod-secrets-2bd700b7-7117-468b-bb02-8153e90f2f20" satisfied condition "success or failure"
Feb  7 13:44:35.426: INFO: Trying to get logs from node iruya-node pod pod-secrets-2bd700b7-7117-468b-bb02-8153e90f2f20 container secret-volume-test: 
STEP: delete the pod
Feb  7 13:44:35.521: INFO: Waiting for pod pod-secrets-2bd700b7-7117-468b-bb02-8153e90f2f20 to disappear
Feb  7 13:44:35.535: INFO: Pod pod-secrets-2bd700b7-7117-468b-bb02-8153e90f2f20 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:44:35.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9757" for this suite.
Feb  7 13:44:41.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:44:41.843: INFO: namespace secrets-9757 deletion completed in 6.303313534s

• [SLOW TEST:18.851 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
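The pod shape this test drives, as a hand-runnable sketch (all names, the mount path, and the uid/gid values are assumptions; the defaultMode and fsGroup settings mirror the scenario's intent):

kubectl create secret generic secret-test --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  securityContext:
    runAsUser: 1000   # run the container as non-root
    fsGroup: 1000     # group ownership applied to the projected files
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -ln /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
      defaultMode: 0400   # requested mode for each projected key
EOF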
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:44:41.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-5073a0b4-f378-4dcd-ba33-bbf55977bc8f
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-5073a0b4-f378-4dcd-ba33-bbf55977bc8f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:44:54.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3443" for this suite.
Feb  7 13:45:16.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:45:16.422: INFO: namespace configmap-3443 deletion completed in 22.123148539s

• [SLOW TEST:34.579 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
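The update flow, sketched with assumed names and mount path. Kubelet propagation of ConfigMap volume updates is eventually consistent, which is why the test waits to observe the change rather than asserting it immediately:

kubectl create configmap configmap-test-upd --from-literal=data-1=value-1
# (a running pod mounting configmap-test-upd at /etc/configmap-volume is assumed)
# Mutate the ConfigMap in place; client-side --dry-run renders YAML for apply.
kubectl create configmap configmap-test-upd --from-literal=data-1=value-2 \
  --dry-run -o yaml | kubectl apply -f -
# Re-read until the kubelet refreshes the mounted file (may take a sync period).
kubectl exec configmap-client -- cat /etc/configmap-volume/data-1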
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:45:16.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  7 13:45:16.725: INFO: Waiting up to 5m0s for pod "downwardapi-volume-03df57e4-88a2-4d30-8725-a9f3aaac8370" in namespace "projected-7789" to be "success or failure"
Feb  7 13:45:16.732: INFO: Pod "downwardapi-volume-03df57e4-88a2-4d30-8725-a9f3aaac8370": Phase="Pending", Reason="", readiness=false. Elapsed: 6.694326ms
Feb  7 13:45:19.363: INFO: Pod "downwardapi-volume-03df57e4-88a2-4d30-8725-a9f3aaac8370": Phase="Pending", Reason="", readiness=false. Elapsed: 2.637945541s
Feb  7 13:45:21.373: INFO: Pod "downwardapi-volume-03df57e4-88a2-4d30-8725-a9f3aaac8370": Phase="Pending", Reason="", readiness=false. Elapsed: 4.647333461s
Feb  7 13:45:23.381: INFO: Pod "downwardapi-volume-03df57e4-88a2-4d30-8725-a9f3aaac8370": Phase="Pending", Reason="", readiness=false. Elapsed: 6.655702072s
Feb  7 13:45:25.389: INFO: Pod "downwardapi-volume-03df57e4-88a2-4d30-8725-a9f3aaac8370": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.663911282s
STEP: Saw pod success
Feb  7 13:45:25.389: INFO: Pod "downwardapi-volume-03df57e4-88a2-4d30-8725-a9f3aaac8370" satisfied condition "success or failure"
Feb  7 13:45:25.395: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-03df57e4-88a2-4d30-8725-a9f3aaac8370 container client-container: 
STEP: delete the pod
Feb  7 13:45:25.623: INFO: Waiting for pod downwardapi-volume-03df57e4-88a2-4d30-8725-a9f3aaac8370 to disappear
Feb  7 13:45:25.675: INFO: Pod downwardapi-volume-03df57e4-88a2-4d30-8725-a9f3aaac8370 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:45:25.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7789" for this suite.
Feb  7 13:45:31.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:45:31.967: INFO: namespace projected-7789 deletion completed in 6.2826111s

• [SLOW TEST:15.544 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
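A minimal sketch of the projected downwardAPI volume under test (names assumed). The container deliberately sets no CPU limit, so resourceFieldRef falls back to the node's allocatable CPU, which is the value the test expects to read from the file:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
EOF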
SSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:45:31.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-qml6w in namespace proxy-2838
I0207 13:45:32.154649       8 runners.go:180] Created replication controller with name: proxy-service-qml6w, namespace: proxy-2838, replica count: 1
I0207 13:45:33.205404       8 runners.go:180] proxy-service-qml6w Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 13:45:34.205653       8 runners.go:180] proxy-service-qml6w Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 13:45:35.206067       8 runners.go:180] proxy-service-qml6w Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 13:45:36.206459       8 runners.go:180] proxy-service-qml6w Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 13:45:37.206743       8 runners.go:180] proxy-service-qml6w Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 13:45:38.207763       8 runners.go:180] proxy-service-qml6w Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 13:45:39.208021       8 runners.go:180] proxy-service-qml6w Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0207 13:45:40.208370       8 runners.go:180] proxy-service-qml6w Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0207 13:45:41.208883       8 runners.go:180] proxy-service-qml6w Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0207 13:45:42.209099       8 runners.go:180] proxy-service-qml6w Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0207 13:45:43.209501       8 runners.go:180] proxy-service-qml6w Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  7 13:45:43.216: INFO: setup took 11.193679121s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
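Every attempt below is an apiserver proxy request; the same URLs can be fetched by hand with kubectl get --raw (pod and service names taken from this run):

kubectl get --raw "/api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:160/proxy/"
kubectl get --raw "/api/v1/namespaces/proxy-2838/services/proxy-service-qml6w:portname1/proxy/"
kubectl get --raw "/api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:443/proxy/"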
Feb  7 13:45:43.256: INFO: (0) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8/proxy/: test (200; 40.315282ms)
Feb  7 13:45:43.257: INFO: (0) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:1080/proxy/: test<... (200; 40.902759ms)
Feb  7 13:45:43.257: INFO: (0) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:1080/proxy/: ... (200; 41.221591ms)
Feb  7 13:45:43.260: INFO: (0) /api/v1/namespaces/proxy-2838/services/http:proxy-service-qml6w:portname2/proxy/: bar (200; 44.087054ms)
Feb  7 13:45:43.260: INFO: (0) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:162/proxy/: bar (200; 44.091305ms)
Feb  7 13:45:43.261: INFO: (0) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:160/proxy/: foo (200; 44.621948ms)
Feb  7 13:45:43.261: INFO: (0) /api/v1/namespaces/proxy-2838/services/proxy-service-qml6w:portname2/proxy/: bar (200; 44.699094ms)
Feb  7 13:45:43.261: INFO: (0) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:162/proxy/: bar (200; 44.681766ms)
Feb  7 13:45:43.261: INFO: (0) /api/v1/namespaces/proxy-2838/services/http:proxy-service-qml6w:portname1/proxy/: foo (200; 44.749688ms)
Feb  7 13:45:43.263: INFO: (0) /api/v1/namespaces/proxy-2838/services/proxy-service-qml6w:portname1/proxy/: foo (200; 46.67208ms)
Feb  7 13:45:43.263: INFO: (0) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:160/proxy/: foo (200; 46.872099ms)
Feb  7 13:45:43.268: INFO: (0) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:443/proxy/: test (200; 20.094596ms)
Feb  7 13:45:43.300: INFO: (1) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:460/proxy/: tls baz (200; 20.73861ms)
Feb  7 13:45:43.300: INFO: (1) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:443/proxy/: test<... (200; 23.687797ms)
Feb  7 13:45:43.303: INFO: (1) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:162/proxy/: bar (200; 23.753051ms)
Feb  7 13:45:43.303: INFO: (1) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:160/proxy/: foo (200; 23.80533ms)
Feb  7 13:45:43.303: INFO: (1) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:462/proxy/: tls qux (200; 24.388802ms)
Feb  7 13:45:43.304: INFO: (1) /api/v1/namespaces/proxy-2838/services/http:proxy-service-qml6w:portname1/proxy/: foo (200; 24.896851ms)
Feb  7 13:45:43.304: INFO: (1) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:160/proxy/: foo (200; 25.249224ms)
Feb  7 13:45:43.304: INFO: (1) /api/v1/namespaces/proxy-2838/services/http:proxy-service-qml6w:portname2/proxy/: bar (200; 25.390481ms)
Feb  7 13:45:43.304: INFO: (1) /api/v1/namespaces/proxy-2838/services/proxy-service-qml6w:portname2/proxy/: bar (200; 25.363882ms)
Feb  7 13:45:43.305: INFO: (1) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:1080/proxy/: ... (200; 26.119215ms)
Feb  7 13:45:43.313: INFO: (2) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:162/proxy/: bar (200; 8.029008ms)
Feb  7 13:45:43.317: INFO: (2) /api/v1/namespaces/proxy-2838/services/proxy-service-qml6w:portname2/proxy/: bar (200; 11.625843ms)
Feb  7 13:45:43.317: INFO: (2) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8/proxy/: test (200; 11.551903ms)
Feb  7 13:45:43.317: INFO: (2) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:443/proxy/: test<... (200; 12.536119ms)
Feb  7 13:45:43.318: INFO: (2) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:160/proxy/: foo (200; 12.575119ms)
Feb  7 13:45:43.318: INFO: (2) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:162/proxy/: bar (200; 12.675606ms)
Feb  7 13:45:43.318: INFO: (2) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:462/proxy/: tls qux (200; 12.596304ms)
Feb  7 13:45:43.318: INFO: (2) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:160/proxy/: foo (200; 13.135696ms)
Feb  7 13:45:43.318: INFO: (2) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:1080/proxy/: ... (200; 13.249486ms)
Feb  7 13:45:43.322: INFO: (2) /api/v1/namespaces/proxy-2838/services/http:proxy-service-qml6w:portname1/proxy/: foo (200; 16.498193ms)
Feb  7 13:45:43.323: INFO: (2) /api/v1/namespaces/proxy-2838/services/https:proxy-service-qml6w:tlsportname1/proxy/: tls baz (200; 17.616757ms)
Feb  7 13:45:43.323: INFO: (2) /api/v1/namespaces/proxy-2838/services/http:proxy-service-qml6w:portname2/proxy/: bar (200; 17.831079ms)
Feb  7 13:45:43.324: INFO: (2) /api/v1/namespaces/proxy-2838/services/proxy-service-qml6w:portname1/proxy/: foo (200; 19.01691ms)
Feb  7 13:45:43.325: INFO: (2) /api/v1/namespaces/proxy-2838/services/https:proxy-service-qml6w:tlsportname2/proxy/: tls qux (200; 19.920746ms)
Feb  7 13:45:43.330: INFO: (3) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8/proxy/: test (200; 4.78145ms)
Feb  7 13:45:43.337: INFO: (3) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:162/proxy/: bar (200; 11.872922ms)
Feb  7 13:45:43.337: INFO: (3) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:443/proxy/: ... (200; 12.323053ms)
Feb  7 13:45:43.338: INFO: (3) /api/v1/namespaces/proxy-2838/services/http:proxy-service-qml6w:portname1/proxy/: foo (200; 12.47591ms)
Feb  7 13:45:43.338: INFO: (3) /api/v1/namespaces/proxy-2838/services/https:proxy-service-qml6w:tlsportname1/proxy/: tls baz (200; 12.593448ms)
Feb  7 13:45:43.338: INFO: (3) /api/v1/namespaces/proxy-2838/services/http:proxy-service-qml6w:portname2/proxy/: bar (200; 13.365881ms)
Feb  7 13:45:43.339: INFO: (3) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:460/proxy/: tls baz (200; 13.780787ms)
Feb  7 13:45:43.339: INFO: (3) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:1080/proxy/: test<... (200; 13.995583ms)
Feb  7 13:45:43.340: INFO: (3) /api/v1/namespaces/proxy-2838/services/proxy-service-qml6w:portname1/proxy/: foo (200; 14.473381ms)
Feb  7 13:45:43.340: INFO: (3) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:162/proxy/: bar (200; 14.894621ms)
Feb  7 13:45:43.340: INFO: (3) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:160/proxy/: foo (200; 15.102265ms)
Feb  7 13:45:43.347: INFO: (4) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:160/proxy/: foo (200; 6.062852ms)
Feb  7 13:45:43.347: INFO: (4) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:162/proxy/: bar (200; 6.327797ms)
Feb  7 13:45:43.347: INFO: (4) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:162/proxy/: bar (200; 6.400259ms)
Feb  7 13:45:43.347: INFO: (4) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:462/proxy/: tls qux (200; 6.799447ms)
Feb  7 13:45:43.349: INFO: (4) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:1080/proxy/: test<... (200; 8.619401ms)
Feb  7 13:45:43.350: INFO: (4) /api/v1/namespaces/proxy-2838/services/https:proxy-service-qml6w:tlsportname1/proxy/: tls baz (200; 9.772485ms)
Feb  7 13:45:43.350: INFO: (4) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:460/proxy/: tls baz (200; 9.67955ms)
Feb  7 13:45:43.351: INFO: (4) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:443/proxy/: test (200; 17.586039ms)
Feb  7 13:45:43.358: INFO: (4) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:1080/proxy/: ... (200; 17.440171ms)
Feb  7 13:45:43.358: INFO: (4) /api/v1/namespaces/proxy-2838/services/https:proxy-service-qml6w:tlsportname2/proxy/: tls qux (200; 17.923843ms)
Feb  7 13:45:43.358: INFO: (4) /api/v1/namespaces/proxy-2838/services/http:proxy-service-qml6w:portname1/proxy/: foo (200; 18.098892ms)
Feb  7 13:45:43.367: INFO: (5) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:160/proxy/: foo (200; 8.202278ms)
Feb  7 13:45:43.367: INFO: (5) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:162/proxy/: bar (200; 8.459205ms)
Feb  7 13:45:43.367: INFO: (5) /api/v1/namespaces/proxy-2838/services/proxy-service-qml6w:portname1/proxy/: foo (200; 8.804336ms)
Feb  7 13:45:43.368: INFO: (5) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:462/proxy/: tls qux (200; 9.557808ms)
Feb  7 13:45:43.370: INFO: (5) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:160/proxy/: foo (200; 10.878843ms)
Feb  7 13:45:43.371: INFO: (5) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:1080/proxy/: ... (200; 11.989138ms)
Feb  7 13:45:43.371: INFO: (5) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:162/proxy/: bar (200; 11.999706ms)
Feb  7 13:45:43.371: INFO: (5) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8/proxy/: test (200; 12.01391ms)
Feb  7 13:45:43.371: INFO: (5) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:443/proxy/: test<... (200; 12.75726ms)
Feb  7 13:45:43.375: INFO: (5) /api/v1/namespaces/proxy-2838/services/http:proxy-service-qml6w:portname1/proxy/: foo (200; 15.964037ms)
Feb  7 13:45:43.375: INFO: (5) /api/v1/namespaces/proxy-2838/services/https:proxy-service-qml6w:tlsportname1/proxy/: tls baz (200; 16.183303ms)
Feb  7 13:45:43.375: INFO: (5) /api/v1/namespaces/proxy-2838/services/http:proxy-service-qml6w:portname2/proxy/: bar (200; 16.484741ms)
Feb  7 13:45:43.377: INFO: (5) /api/v1/namespaces/proxy-2838/services/proxy-service-qml6w:portname2/proxy/: bar (200; 18.876373ms)
Feb  7 13:45:43.380: INFO: (5) /api/v1/namespaces/proxy-2838/services/https:proxy-service-qml6w:tlsportname2/proxy/: tls qux (200; 21.284462ms)
Feb  7 13:45:43.387: INFO: (6) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:462/proxy/: tls qux (200; 7.514292ms)
Feb  7 13:45:43.391: INFO: (6) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8/proxy/: test (200; 10.897924ms)
Feb  7 13:45:43.391: INFO: (6) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:1080/proxy/: test<... (200; 11.000449ms)
Feb  7 13:45:43.391: INFO: (6) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:460/proxy/: tls baz (200; 11.200031ms)
Feb  7 13:45:43.391: INFO: (6) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:162/proxy/: bar (200; 11.21991ms)
Feb  7 13:45:43.392: INFO: (6) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:1080/proxy/: ... (200; 12.055508ms)
Feb  7 13:45:43.393: INFO: (6) /api/v1/namespaces/proxy-2838/services/https:proxy-service-qml6w:tlsportname1/proxy/: tls baz (200; 12.570352ms)
Feb  7 13:45:43.393: INFO: (6) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:443/proxy/: ... (200; 12.723259ms)
Feb  7 13:45:43.411: INFO: (7) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:1080/proxy/: test<... (200; 12.805713ms)
Feb  7 13:45:43.413: INFO: (7) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8/proxy/: test (200; 14.303432ms)
Feb  7 13:45:43.413: INFO: (7) /api/v1/namespaces/proxy-2838/services/http:proxy-service-qml6w:portname1/proxy/: foo (200; 14.309307ms)
Feb  7 13:45:43.413: INFO: (7) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:160/proxy/: foo (200; 14.340869ms)
Feb  7 13:45:43.413: INFO: (7) /api/v1/namespaces/proxy-2838/services/https:proxy-service-qml6w:tlsportname2/proxy/: tls qux (200; 14.372741ms)
Feb  7 13:45:43.414: INFO: (7) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:443/proxy/: test<... (200; 20.401162ms)
Feb  7 13:45:43.437: INFO: (8) /api/v1/namespaces/proxy-2838/services/proxy-service-qml6w:portname2/proxy/: bar (200; 21.645731ms)
Feb  7 13:45:43.437: INFO: (8) /api/v1/namespaces/proxy-2838/services/https:proxy-service-qml6w:tlsportname1/proxy/: tls baz (200; 21.639266ms)
Feb  7 13:45:43.437: INFO: (8) /api/v1/namespaces/proxy-2838/services/https:proxy-service-qml6w:tlsportname2/proxy/: tls qux (200; 21.858732ms)
Feb  7 13:45:43.438: INFO: (8) /api/v1/namespaces/proxy-2838/services/http:proxy-service-qml6w:portname1/proxy/: foo (200; 22.668443ms)
Feb  7 13:45:43.439: INFO: (8) /api/v1/namespaces/proxy-2838/services/proxy-service-qml6w:portname1/proxy/: foo (200; 23.282418ms)
Feb  7 13:45:43.439: INFO: (8) /api/v1/namespaces/proxy-2838/services/http:proxy-service-qml6w:portname2/proxy/: bar (200; 23.334839ms)
Feb  7 13:45:43.439: INFO: (8) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:1080/proxy/: ... (200; 23.629589ms)
Feb  7 13:45:43.439: INFO: (8) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:443/proxy/: test (200; 25.210561ms)
Feb  7 13:45:43.450: INFO: (9) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:1080/proxy/: ... (200; 9.838374ms)
Feb  7 13:45:43.450: INFO: (9) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:162/proxy/: bar (200; 9.92161ms)
Feb  7 13:45:43.451: INFO: (9) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:443/proxy/: test<... (200; 17.75387ms)
Feb  7 13:45:43.459: INFO: (9) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:462/proxy/: tls qux (200; 18.035595ms)
Feb  7 13:45:43.459: INFO: (9) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:160/proxy/: foo (200; 18.255037ms)
Feb  7 13:45:43.459: INFO: (9) /api/v1/namespaces/proxy-2838/services/http:proxy-service-qml6w:portname2/proxy/: bar (200; 18.310425ms)
Feb  7 13:45:43.459: INFO: (9) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:160/proxy/: foo (200; 18.34913ms)
Feb  7 13:45:43.460: INFO: (9) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:460/proxy/: tls baz (200; 18.860312ms)
Feb  7 13:45:43.460: INFO: (9) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:162/proxy/: bar (200; 18.930547ms)
Feb  7 13:45:43.460: INFO: (9) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8/proxy/: test (200; 19.304706ms)
Feb  7 13:45:43.463: INFO: (9) /api/v1/namespaces/proxy-2838/services/http:proxy-service-qml6w:portname1/proxy/: foo (200; 22.641599ms)
Feb  7 13:45:43.464: INFO: (9) /api/v1/namespaces/proxy-2838/services/proxy-service-qml6w:portname1/proxy/: foo (200; 22.955573ms)
Feb  7 13:45:43.464: INFO: (9) /api/v1/namespaces/proxy-2838/services/proxy-service-qml6w:portname2/proxy/: bar (200; 23.166924ms)
Feb  7 13:45:43.464: INFO: (9) /api/v1/namespaces/proxy-2838/services/https:proxy-service-qml6w:tlsportname1/proxy/: tls baz (200; 23.34339ms)
Feb  7 13:45:43.472: INFO: (10) /api/v1/namespaces/proxy-2838/services/http:proxy-service-qml6w:portname1/proxy/: foo (200; 7.927618ms)
Feb  7 13:45:43.472: INFO: (10) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:162/proxy/: bar (200; 7.764365ms)
Feb  7 13:45:43.472: INFO: (10) /api/v1/namespaces/proxy-2838/services/proxy-service-qml6w:portname2/proxy/: bar (200; 7.99293ms)
Feb  7 13:45:43.474: INFO: (10) /api/v1/namespaces/proxy-2838/services/https:proxy-service-qml6w:tlsportname2/proxy/: tls qux (200; 9.246773ms)
Feb  7 13:45:43.474: INFO: (10) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:443/proxy/: test<... (200; 10.397141ms)
Feb  7 13:45:43.476: INFO: (10) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:160/proxy/: foo (200; 10.537453ms)
Feb  7 13:45:43.476: INFO: (10) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:462/proxy/: tls qux (200; 11.010899ms)
Feb  7 13:45:43.476: INFO: (10) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:162/proxy/: bar (200; 10.445124ms)
Feb  7 13:45:43.476: INFO: (10) /api/v1/namespaces/proxy-2838/services/proxy-service-qml6w:portname1/proxy/: foo (200; 11.055256ms)
Feb  7 13:45:43.476: INFO: (10) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8/proxy/: test (200; 10.60079ms)
Feb  7 13:45:43.476: INFO: (10) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:1080/proxy/: ... (200; 11.056537ms)
Feb  7 13:45:43.476: INFO: (10) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:460/proxy/: tls baz (200; 12.245656ms)
Feb  7 13:45:43.476: INFO: (10) /api/v1/namespaces/proxy-2838/services/https:proxy-service-qml6w:tlsportname1/proxy/: tls baz (200; 11.62351ms)
Feb  7 13:45:43.476: INFO: (10) /api/v1/namespaces/proxy-2838/services/http:proxy-service-qml6w:portname2/proxy/: bar (200; 10.760671ms)
Feb  7 13:45:43.477: INFO: (10) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:160/proxy/: foo (200; 11.936245ms)
Feb  7 13:45:43.482: INFO: (11) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:160/proxy/: foo (200; 4.740965ms)
Feb  7 13:45:43.482: INFO: (11) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:162/proxy/: bar (200; 4.806269ms)
Feb  7 13:45:43.486: INFO: (11) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:443/proxy/: test<... (200; 9.412968ms)
Feb  7 13:45:43.487: INFO: (11) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8/proxy/: test (200; 9.297895ms)
Feb  7 13:45:43.487: INFO: (11) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:462/proxy/: tls qux (200; 9.602892ms)
Feb  7 13:45:43.487: INFO: (11) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:162/proxy/: bar (200; 9.658356ms)
Feb  7 13:45:43.487: INFO: (11) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:460/proxy/: tls baz (200; 9.697232ms)
Feb  7 13:45:43.487: INFO: (11) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:160/proxy/: foo (200; 9.721233ms)
Feb  7 13:45:43.490: INFO: (11) /api/v1/namespaces/proxy-2838/services/https:proxy-service-qml6w:tlsportname2/proxy/: tls qux (200; 12.947217ms)
Feb  7 13:45:43.490: INFO: (11) /api/v1/namespaces/proxy-2838/services/https:proxy-service-qml6w:tlsportname1/proxy/: tls baz (200; 12.873738ms)
Feb  7 13:45:43.491: INFO: (11) /api/v1/namespaces/proxy-2838/services/proxy-service-qml6w:portname1/proxy/: foo (200; 13.431804ms)
Feb  7 13:45:43.491: INFO: (11) /api/v1/namespaces/proxy-2838/services/http:proxy-service-qml6w:portname1/proxy/: foo (200; 13.632592ms)
Feb  7 13:45:43.491: INFO: (11) /api/v1/namespaces/proxy-2838/services/http:proxy-service-qml6w:portname2/proxy/: bar (200; 13.593572ms)
Feb  7 13:45:43.491: INFO: (11) /api/v1/namespaces/proxy-2838/services/proxy-service-qml6w:portname2/proxy/: bar (200; 13.680996ms)
Feb  7 13:45:43.491: INFO: (11) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:1080/proxy/: ... (200; 13.942814ms)
Feb  7 13:45:43.499: INFO: (12) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:162/proxy/: bar (200; 7.760794ms)
Feb  7 13:45:43.499: INFO: (12) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:1080/proxy/: ... (200; 7.773804ms)
Feb  7 13:45:43.500: INFO: (12) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:162/proxy/: bar (200; 7.936724ms)
Feb  7 13:45:43.500: INFO: (12) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:160/proxy/: foo (200; 7.882278ms)
Feb  7 13:45:43.500: INFO: (12) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:462/proxy/: tls qux (200; 8.064572ms)
Feb  7 13:45:43.500: INFO: (12) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:1080/proxy/: test<... (200; 8.949665ms)
Feb  7 13:45:43.501: INFO: (12) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:443/proxy/: test (200; 9.337043ms)
Feb  7 13:45:43.501: INFO: (12) /api/v1/namespaces/proxy-2838/services/proxy-service-qml6w:portname2/proxy/: bar (200; 9.827657ms)
Feb  7 13:45:43.501: INFO: (12) /api/v1/namespaces/proxy-2838/services/https:proxy-service-qml6w:tlsportname2/proxy/: tls qux (200; 9.94191ms)
Feb  7 13:45:43.502: INFO: (12) /api/v1/namespaces/proxy-2838/services/http:proxy-service-qml6w:portname1/proxy/: foo (200; 10.0952ms)
Feb  7 13:45:43.502: INFO: (12) /api/v1/namespaces/proxy-2838/services/proxy-service-qml6w:portname1/proxy/: foo (200; 10.215822ms)
Feb  7 13:45:43.502: INFO: (12) /api/v1/namespaces/proxy-2838/services/https:proxy-service-qml6w:tlsportname1/proxy/: tls baz (200; 10.251577ms)
Feb  7 13:45:43.502: INFO: (12) /api/v1/namespaces/proxy-2838/services/http:proxy-service-qml6w:portname2/proxy/: bar (200; 10.441477ms)
Feb  7 13:45:43.510: INFO: (13) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:1080/proxy/: ... (200; 8.175076ms)
Feb  7 13:45:43.511: INFO: (13) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:160/proxy/: foo (200; 8.213058ms)
Feb  7 13:45:43.511: INFO: (13) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8/proxy/: test (200; 9.101484ms)
Feb  7 13:45:43.512: INFO: (13) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:162/proxy/: bar (200; 9.435149ms)
Feb  7 13:45:43.512: INFO: (13) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:1080/proxy/: test<... (200; 10.206911ms)
Feb  7 13:45:43.513: INFO: (13) /api/v1/namespaces/proxy-2838/services/proxy-service-qml6w:portname2/proxy/: bar (200; 10.264168ms)
Feb  7 13:45:43.513: INFO: (13) /api/v1/namespaces/proxy-2838/services/https:proxy-service-qml6w:tlsportname2/proxy/: tls qux (200; 10.350893ms)
Feb  7 13:45:43.515: INFO: (13) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:460/proxy/: tls baz (200; 12.526319ms)
Feb  7 13:45:43.515: INFO: (13) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:160/proxy/: foo (200; 12.79169ms)
Feb  7 13:45:43.515: INFO: (13) /api/v1/namespaces/proxy-2838/services/http:proxy-service-qml6w:portname2/proxy/: bar (200; 13.151899ms)
Feb  7 13:45:43.516: INFO: (13) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:462/proxy/: tls qux (200; 13.474652ms)
Feb  7 13:45:43.516: INFO: (13) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:443/proxy/: ... (200; 10.423113ms)
Feb  7 13:45:43.528: INFO: (14) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:160/proxy/: foo (200; 10.371635ms)
Feb  7 13:45:43.528: INFO: (14) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:162/proxy/: bar (200; 10.519464ms)
Feb  7 13:45:43.528: INFO: (14) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8/proxy/: test (200; 10.564716ms)
Feb  7 13:45:43.528: INFO: (14) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:460/proxy/: tls baz (200; 10.66447ms)
Feb  7 13:45:43.528: INFO: (14) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:443/proxy/: test<... (200; 11.926277ms)
Feb  7 13:45:43.530: INFO: (14) /api/v1/namespaces/proxy-2838/services/http:proxy-service-qml6w:portname1/proxy/: foo (200; 11.859838ms)
Feb  7 13:45:43.530: INFO: (14) /api/v1/namespaces/proxy-2838/services/proxy-service-qml6w:portname2/proxy/: bar (200; 12.014401ms)
Feb  7 13:45:43.530: INFO: (14) /api/v1/namespaces/proxy-2838/services/http:proxy-service-qml6w:portname2/proxy/: bar (200; 12.690259ms)
Feb  7 13:45:43.530: INFO: (14) /api/v1/namespaces/proxy-2838/services/proxy-service-qml6w:portname1/proxy/: foo (200; 12.642581ms)
Feb  7 13:45:43.531: INFO: (14) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:160/proxy/: foo (200; 12.921674ms)
Feb  7 13:45:43.531: INFO: (14) /api/v1/namespaces/proxy-2838/services/https:proxy-service-qml6w:tlsportname2/proxy/: tls qux (200; 12.938068ms)
Feb  7 13:45:43.531: INFO: (14) /api/v1/namespaces/proxy-2838/services/https:proxy-service-qml6w:tlsportname1/proxy/: tls baz (200; 12.91661ms)
Feb  7 13:45:43.539: INFO: (15) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:462/proxy/: tls qux (200; 7.915379ms)
Feb  7 13:45:43.539: INFO: (15) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:460/proxy/: tls baz (200; 8.370646ms)
Feb  7 13:45:43.539: INFO: (15) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:443/proxy/: ... (200; 10.759933ms)
Feb  7 13:45:43.541: INFO: (15) /api/v1/namespaces/proxy-2838/services/http:proxy-service-qml6w:portname2/proxy/: bar (200; 10.73247ms)
Feb  7 13:45:43.542: INFO: (15) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:160/proxy/: foo (200; 10.790746ms)
Feb  7 13:45:43.542: INFO: (15) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8/proxy/: test (200; 10.822887ms)
Feb  7 13:45:43.542: INFO: (15) /api/v1/namespaces/proxy-2838/services/https:proxy-service-qml6w:tlsportname1/proxy/: tls baz (200; 10.898531ms)
Feb  7 13:45:43.542: INFO: (15) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:1080/proxy/: test<... (200; 11.193258ms)
Feb  7 13:45:43.542: INFO: (15) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:162/proxy/: bar (200; 11.310421ms)
Feb  7 13:45:43.542: INFO: (15) /api/v1/namespaces/proxy-2838/services/proxy-service-qml6w:portname2/proxy/: bar (200; 11.197524ms)
Feb  7 13:45:43.542: INFO: (15) /api/v1/namespaces/proxy-2838/services/proxy-service-qml6w:portname1/proxy/: foo (200; 11.602758ms)
Feb  7 13:45:43.542: INFO: (15) /api/v1/namespaces/proxy-2838/services/http:proxy-service-qml6w:portname1/proxy/: foo (200; 11.658142ms)
Feb  7 13:45:43.549: INFO: (16) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:1080/proxy/: ... (200; 5.906788ms)
Feb  7 13:45:43.549: INFO: (16) /api/v1/namespaces/proxy-2838/services/https:proxy-service-qml6w:tlsportname2/proxy/: tls qux (200; 5.441336ms)
Feb  7 13:45:43.549: INFO: (16) /api/v1/namespaces/proxy-2838/services/https:proxy-service-qml6w:tlsportname1/proxy/: tls baz (200; 6.347148ms)
Feb  7 13:45:43.550: INFO: (16) /api/v1/namespaces/proxy-2838/services/http:proxy-service-qml6w:portname2/proxy/: bar (200; 7.27887ms)
Feb  7 13:45:43.550: INFO: (16) /api/v1/namespaces/proxy-2838/services/http:proxy-service-qml6w:portname1/proxy/: foo (200; 7.004106ms)
Feb  7 13:45:43.550: INFO: (16) /api/v1/namespaces/proxy-2838/services/proxy-service-qml6w:portname2/proxy/: bar (200; 6.945731ms)
Feb  7 13:45:43.551: INFO: (16) /api/v1/namespaces/proxy-2838/services/proxy-service-qml6w:portname1/proxy/: foo (200; 8.041071ms)
Feb  7 13:45:43.554: INFO: (16) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:162/proxy/: bar (200; 10.056697ms)
Feb  7 13:45:43.554: INFO: (16) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:160/proxy/: foo (200; 10.580769ms)
Feb  7 13:45:43.554: INFO: (16) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8/proxy/: test (200; 10.759253ms)
Feb  7 13:45:43.554: INFO: (16) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:162/proxy/: bar (200; 11.157381ms)
Feb  7 13:45:43.554: INFO: (16) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:462/proxy/: tls qux (200; 11.711445ms)
Feb  7 13:45:43.554: INFO: (16) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:160/proxy/: foo (200; 11.406597ms)
Feb  7 13:45:43.554: INFO: (16) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:443/proxy/: test<... (200; 11.67155ms)
Feb  7 13:45:43.555: INFO: (16) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:460/proxy/: tls baz (200; 11.261844ms)
Feb  7 13:45:43.562: INFO: (17) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:160/proxy/: foo (200; 7.143463ms)
Feb  7 13:45:43.562: INFO: (17) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:1080/proxy/: test<... (200; 7.216702ms)
Feb  7 13:45:43.562: INFO: (17) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8/proxy/: test (200; 7.449838ms)
Feb  7 13:45:43.562: INFO: (17) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:160/proxy/: foo (200; 7.458047ms)
Feb  7 13:45:43.562: INFO: (17) /api/v1/namespaces/proxy-2838/services/http:proxy-service-qml6w:portname2/proxy/: bar (200; 7.873229ms)
Feb  7 13:45:43.563: INFO: (17) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:162/proxy/: bar (200; 7.91311ms)
Feb  7 13:45:43.563: INFO: (17) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:1080/proxy/: ... (200; 7.912386ms)
Feb  7 13:45:43.563: INFO: (17) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:460/proxy/: tls baz (200; 7.961789ms)
Feb  7 13:45:43.563: INFO: (17) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:443/proxy/: test (200; 7.695846ms)
Feb  7 13:45:43.574: INFO: (18) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:1080/proxy/: test<... (200; 7.676135ms)
Feb  7 13:45:43.574: INFO: (18) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:160/proxy/: foo (200; 8.397481ms)
Feb  7 13:45:43.575: INFO: (18) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:162/proxy/: bar (200; 8.748693ms)
Feb  7 13:45:43.579: INFO: (18) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:443/proxy/: ... (200; 13.21943ms)
Feb  7 13:45:43.580: INFO: (18) /api/v1/namespaces/proxy-2838/services/https:proxy-service-qml6w:tlsportname2/proxy/: tls qux (200; 13.595773ms)
Feb  7 13:45:43.580: INFO: (18) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:160/proxy/: foo (200; 13.548691ms)
Feb  7 13:45:43.580: INFO: (18) /api/v1/namespaces/proxy-2838/services/http:proxy-service-qml6w:portname2/proxy/: bar (200; 13.541038ms)
Feb  7 13:45:43.580: INFO: (18) /api/v1/namespaces/proxy-2838/services/https:proxy-service-qml6w:tlsportname1/proxy/: tls baz (200; 13.536565ms)
Feb  7 13:45:43.580: INFO: (18) /api/v1/namespaces/proxy-2838/services/http:proxy-service-qml6w:portname1/proxy/: foo (200; 13.758699ms)
Feb  7 13:45:43.580: INFO: (18) /api/v1/namespaces/proxy-2838/services/proxy-service-qml6w:portname2/proxy/: bar (200; 13.673951ms)
Feb  7 13:45:43.596: INFO: (19) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:162/proxy/: bar (200; 15.792874ms)
Feb  7 13:45:43.596: INFO: (19) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:162/proxy/: bar (200; 15.887456ms)
Feb  7 13:45:43.596: INFO: (19) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:160/proxy/: foo (200; 15.899253ms)
Feb  7 13:45:43.596: INFO: (19) /api/v1/namespaces/proxy-2838/pods/http:proxy-service-qml6w-59hk8:1080/proxy/: ... (200; 16.035442ms)
Feb  7 13:45:43.596: INFO: (19) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8/proxy/: test (200; 15.981776ms)
Feb  7 13:45:43.596: INFO: (19) /api/v1/namespaces/proxy-2838/pods/proxy-service-qml6w-59hk8:1080/proxy/: test<... (200; 16.017345ms)
Feb  7 13:45:43.597: INFO: (19) /api/v1/namespaces/proxy-2838/pods/https:proxy-service-qml6w-59hk8:443/proxy/: 
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:46:02.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb  7 13:46:11.522: INFO: Successfully updated pod "annotationupdate6a78aa2a-31a4-408c-8ce2-194744e2d292"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:46:13.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7839" for this suite.
Feb  7 13:46:35.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:46:35.865: INFO: namespace downward-api-7839 deletion completed in 22.235024378s

• [SLOW TEST:33.072 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
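A sketch of the modification step (pod name and namespace from the log; the annotation key and value are assumptions): re-annotating the pod makes the kubelet rewrite the downwardAPI file that projects metadata.annotations:

kubectl annotate pod annotationupdate6a78aa2a-31a4-408c-8ce2-194744e2d292 \
  --namespace=downward-api-7839 builder=updated --overwrite
# The projected annotations file is eventually rewritten in place.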
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:46:35.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  7 13:46:35.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2820'
Feb  7 13:46:38.093: INFO: stderr: ""
Feb  7 13:46:38.093: INFO: stdout: "replicationcontroller/redis-master created\n"
Feb  7 13:46:38.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2820'
Feb  7 13:46:38.762: INFO: stderr: ""
Feb  7 13:46:38.762: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb  7 13:46:39.769: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 13:46:39.769: INFO: Found 0 / 1
Feb  7 13:46:40.770: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 13:46:40.770: INFO: Found 0 / 1
Feb  7 13:46:41.774: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 13:46:41.774: INFO: Found 0 / 1
Feb  7 13:46:42.766: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 13:46:42.766: INFO: Found 0 / 1
Feb  7 13:46:43.770: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 13:46:43.770: INFO: Found 0 / 1
Feb  7 13:46:44.768: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 13:46:44.768: INFO: Found 0 / 1
Feb  7 13:46:45.777: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 13:46:45.777: INFO: Found 0 / 1
Feb  7 13:46:46.770: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 13:46:46.770: INFO: Found 1 / 1
Feb  7 13:46:46.770: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  7 13:46:46.777: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 13:46:46.777: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
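The "Waiting for Redis master to start" poll above, reproduced as a small shell loop (selector and namespace from the log; the 60-second budget is an assumption):

# Poll until the first matching pod's first container reports ready.
for i in $(seq 1 60); do
  ready=$(kubectl get pods -l app=redis --namespace=kubectl-2820 \
    -o jsonpath='{.items[0].status.containerStatuses[0].ready}' 2>/dev/null)
  [ "$ready" = "true" ] && break
  sleep 1
done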
Feb  7 13:46:46.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-f4dj8 --namespace=kubectl-2820'
Feb  7 13:46:46.974: INFO: stderr: ""
Feb  7 13:46:46.974: INFO: stdout: "Name:           redis-master-f4dj8\nNamespace:      kubectl-2820\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Fri, 07 Feb 2020 13:46:38 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    <none>\nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://4b6c91f01f5dfba93363fb793e08c10959215c93f146da2cecdcda7596c0fd43\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Fri, 07 Feb 2020 13:46:45 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-ttnf2 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-ttnf2:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-ttnf2\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  8s    default-scheduler    Successfully assigned kubectl-2820/redis-master-f4dj8 to iruya-node\n  Normal  Pulled     4s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    2s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    1s    kubelet, iruya-node  Started container redis-master\n"
Feb  7 13:46:46.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-2820'
Feb  7 13:46:47.094: INFO: stderr: ""
Feb  7 13:46:47.094: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-2820\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  9s    replication-controller  Created pod: redis-master-f4dj8\n"
Feb  7 13:46:47.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-2820'
Feb  7 13:46:47.177: INFO: stderr: ""
Feb  7 13:46:47.177: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-2820\nLabels:            app=redis\n                   role=master\nAnnotations:       <none>\nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.102.182.192\nPort:              <unset>  6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            <none>\n"
Feb  7 13:46:47.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Feb  7 13:46:47.296: INFO: stderr: ""
Feb  7 13:46:47.296: INFO: stdout: "Name:               iruya-node\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             <none>\nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Fri, 07 Feb 2020 13:46:25 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Fri, 07 Feb 2020 13:46:25 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Fri, 07 Feb 2020 13:46:25 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Fri, 07 Feb 2020 13:46:25 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         187d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         118d\n  kubectl-2820               redis-master-f4dj8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              <none>\n"
Feb  7 13:46:47.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-2820'
Feb  7 13:46:47.385: INFO: stderr: ""
Feb  7 13:46:47.386: INFO: stdout: "Name:         kubectl-2820\nLabels:       e2e-framework=kubectl\n              e2e-run=ab51569a-a689-419f-97fc-36fb3979a759\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:46:47.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2820" for this suite.
Feb  7 13:47:09.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:47:09.576: INFO: namespace kubectl-2820 deletion completed in 22.18590081s

• [SLOW TEST:33.710 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:47:09.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Feb  7 13:47:09.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4559'
Feb  7 13:47:10.065: INFO: stderr: ""
Feb  7 13:47:10.065: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  7 13:47:10.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4559'
Feb  7 13:47:10.257: INFO: stderr: ""
Feb  7 13:47:10.257: INFO: stdout: "update-demo-nautilus-46rdh update-demo-nautilus-t728q "
Feb  7 13:47:10.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-46rdh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4559'
Feb  7 13:47:10.402: INFO: stderr: ""
Feb  7 13:47:10.402: INFO: stdout: ""
Feb  7 13:47:10.402: INFO: update-demo-nautilus-46rdh is created but not running
Feb  7 13:47:15.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4559'
Feb  7 13:47:16.959: INFO: stderr: ""
Feb  7 13:47:16.959: INFO: stdout: "update-demo-nautilus-46rdh update-demo-nautilus-t728q "
Feb  7 13:47:16.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-46rdh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4559'
Feb  7 13:47:17.354: INFO: stderr: ""
Feb  7 13:47:17.354: INFO: stdout: ""
Feb  7 13:47:17.355: INFO: update-demo-nautilus-46rdh is created but not running
Feb  7 13:47:22.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4559'
Feb  7 13:47:22.489: INFO: stderr: ""
Feb  7 13:47:22.489: INFO: stdout: "update-demo-nautilus-46rdh update-demo-nautilus-t728q "
Feb  7 13:47:22.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-46rdh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4559'
Feb  7 13:47:22.636: INFO: stderr: ""
Feb  7 13:47:22.637: INFO: stdout: "true"
Feb  7 13:47:22.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-46rdh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4559'
Feb  7 13:47:22.733: INFO: stderr: ""
Feb  7 13:47:22.733: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  7 13:47:22.733: INFO: validating pod update-demo-nautilus-46rdh
Feb  7 13:47:22.759: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  7 13:47:22.759: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  7 13:47:22.759: INFO: update-demo-nautilus-46rdh is verified up and running
Feb  7 13:47:22.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t728q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4559'
Feb  7 13:47:22.844: INFO: stderr: ""
Feb  7 13:47:22.845: INFO: stdout: "true"
Feb  7 13:47:22.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t728q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4559'
Feb  7 13:47:22.929: INFO: stderr: ""
Feb  7 13:47:22.930: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  7 13:47:22.930: INFO: validating pod update-demo-nautilus-t728q
Feb  7 13:47:22.938: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  7 13:47:22.938: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  7 13:47:22.938: INFO: update-demo-nautilus-t728q is verified up and running
STEP: using delete to clean up resources
Feb  7 13:47:22.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4559'
Feb  7 13:47:23.138: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  7 13:47:23.139: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb  7 13:47:23.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4559'
Feb  7 13:47:23.276: INFO: stderr: "No resources found.\n"
Feb  7 13:47:23.276: INFO: stdout: ""
Feb  7 13:47:23.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4559 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  7 13:47:23.460: INFO: stderr: ""
Feb  7 13:47:23.460: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:47:23.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4559" for this suite.
Feb  7 13:47:45.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:47:45.565: INFO: namespace kubectl-4559 deletion completed in 22.092375177s

• [SLOW TEST:35.988 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
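
The poll loop in this test re-runs two Go-template queries until every pod labeled name=update-demo reports a running update-demo container, then force-deletes the replication controller. A shell sketch of the same sequence, with the templates copied verbatim from the log (deleting the rc by name here stands in for the test's delete -f -):

ns=kubectl-4559
# List the pods behind the replication controller.
kubectl -n "$ns" get pods -l name=update-demo \
  -o template --template='{{range .items}}{{.metadata.name}} {{end}}'

# Prints "true" once the named pod's update-demo container is running.
kubectl -n "$ns" get pods update-demo-nautilus-46rdh -o template \
  --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'

# Cleanup mirrors the test: immediate, unconfirmed deletion.
kubectl -n "$ns" delete rc update-demo-nautilus --grace-period=0 --force
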
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:47:45.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-c176f1d6-aa15-4130-945a-317af37ce89e
STEP: Creating a pod to test consume secrets
Feb  7 13:47:45.681: INFO: Waiting up to 5m0s for pod "pod-secrets-dfc07eb9-7df3-4f6f-bcb4-0a6e1001addd" in namespace "secrets-7888" to be "success or failure"
Feb  7 13:47:45.697: INFO: Pod "pod-secrets-dfc07eb9-7df3-4f6f-bcb4-0a6e1001addd": Phase="Pending", Reason="", readiness=false. Elapsed: 15.769733ms
Feb  7 13:47:47.722: INFO: Pod "pod-secrets-dfc07eb9-7df3-4f6f-bcb4-0a6e1001addd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041564293s
Feb  7 13:47:49.739: INFO: Pod "pod-secrets-dfc07eb9-7df3-4f6f-bcb4-0a6e1001addd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058175434s
Feb  7 13:47:51.745: INFO: Pod "pod-secrets-dfc07eb9-7df3-4f6f-bcb4-0a6e1001addd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064611772s
Feb  7 13:47:53.764: INFO: Pod "pod-secrets-dfc07eb9-7df3-4f6f-bcb4-0a6e1001addd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.083150577s
Feb  7 13:47:55.784: INFO: Pod "pod-secrets-dfc07eb9-7df3-4f6f-bcb4-0a6e1001addd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.103290974s
STEP: Saw pod success
Feb  7 13:47:55.784: INFO: Pod "pod-secrets-dfc07eb9-7df3-4f6f-bcb4-0a6e1001addd" satisfied condition "success or failure"
Feb  7 13:47:55.793: INFO: Trying to get logs from node iruya-node pod pod-secrets-dfc07eb9-7df3-4f6f-bcb4-0a6e1001addd container secret-volume-test: 
STEP: delete the pod
Feb  7 13:47:55.888: INFO: Waiting for pod pod-secrets-dfc07eb9-7df3-4f6f-bcb4-0a6e1001addd to disappear
Feb  7 13:47:55.938: INFO: Pod pod-secrets-dfc07eb9-7df3-4f6f-bcb4-0a6e1001addd no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:47:55.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7888" for this suite.
Feb  7 13:48:01.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:48:02.115: INFO: namespace secrets-7888 deletion completed in 6.16758736s

• [SLOW TEST:16.549 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
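
This case mounts a single secret into one pod through two separate volumes and reads it back from both paths. A minimal manifest exercising the same behavior; the secret name, key, and mount paths are illustrative, not the generated names from the log:

kubectl create secret generic multi-vol-secret --from-literal=data-1=value-1

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-multi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    # Read the same key through both mount points.
    command: ["sh", "-c", "cat /etc/secret-a/data-1 /etc/secret-b/data-1"]
    volumeMounts:
    - name: vol-a
      mountPath: /etc/secret-a
    - name: vol-b
      mountPath: /etc/secret-b
  volumes:
  - name: vol-a
    secret:
      secretName: multi-vol-secret
  - name: vol-b
    secret:
      secretName: multi-vol-secret
EOF
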
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:48:02.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  7 13:48:02.217: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f2f880e0-ab25-496c-b8a0-5be0fb071abb" in namespace "projected-823" to be "success or failure"
Feb  7 13:48:02.270: INFO: Pod "downwardapi-volume-f2f880e0-ab25-496c-b8a0-5be0fb071abb": Phase="Pending", Reason="", readiness=false. Elapsed: 53.299922ms
Feb  7 13:48:04.282: INFO: Pod "downwardapi-volume-f2f880e0-ab25-496c-b8a0-5be0fb071abb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065498064s
Feb  7 13:48:06.293: INFO: Pod "downwardapi-volume-f2f880e0-ab25-496c-b8a0-5be0fb071abb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07568625s
Feb  7 13:48:08.301: INFO: Pod "downwardapi-volume-f2f880e0-ab25-496c-b8a0-5be0fb071abb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084533912s
Feb  7 13:48:10.318: INFO: Pod "downwardapi-volume-f2f880e0-ab25-496c-b8a0-5be0fb071abb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.101220376s
STEP: Saw pod success
Feb  7 13:48:10.318: INFO: Pod "downwardapi-volume-f2f880e0-ab25-496c-b8a0-5be0fb071abb" satisfied condition "success or failure"
Feb  7 13:48:10.332: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f2f880e0-ab25-496c-b8a0-5be0fb071abb container client-container: 
STEP: delete the pod
Feb  7 13:48:10.404: INFO: Waiting for pod downwardapi-volume-f2f880e0-ab25-496c-b8a0-5be0fb071abb to disappear
Feb  7 13:48:10.409: INFO: Pod downwardapi-volume-f2f880e0-ab25-496c-b8a0-5be0fb071abb no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:48:10.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-823" for this suite.
Feb  7 13:48:16.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:48:16.665: INFO: namespace projected-823 deletion completed in 6.243470792s

• [SLOW TEST:14.550 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
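
The projected downward API case writes the container's own CPU request into a file inside the pod. A sketch of an equivalent manifest; the pod name, request value, and divisor are assumptions, and with a 1m divisor a 250m request reads back as "250":

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-cpu-request-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m   # expose the request in millicores
EOF
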
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:48:16.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-36683e33-f5d0-4102-a054-1baedb59278f
STEP: Creating a pod to test consume secrets
Feb  7 13:48:16.923: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-97eb7dbd-4428-4e04-8db0-5ba7886ece0b" in namespace "projected-9659" to be "success or failure"
Feb  7 13:48:16.931: INFO: Pod "pod-projected-secrets-97eb7dbd-4428-4e04-8db0-5ba7886ece0b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.775717ms
Feb  7 13:48:18.948: INFO: Pod "pod-projected-secrets-97eb7dbd-4428-4e04-8db0-5ba7886ece0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025363621s
Feb  7 13:48:20.955: INFO: Pod "pod-projected-secrets-97eb7dbd-4428-4e04-8db0-5ba7886ece0b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031895548s
Feb  7 13:48:22.982: INFO: Pod "pod-projected-secrets-97eb7dbd-4428-4e04-8db0-5ba7886ece0b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058927085s
Feb  7 13:48:25.012: INFO: Pod "pod-projected-secrets-97eb7dbd-4428-4e04-8db0-5ba7886ece0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.089119195s
STEP: Saw pod success
Feb  7 13:48:25.012: INFO: Pod "pod-projected-secrets-97eb7dbd-4428-4e04-8db0-5ba7886ece0b" satisfied condition "success or failure"
Feb  7 13:48:25.018: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-97eb7dbd-4428-4e04-8db0-5ba7886ece0b container projected-secret-volume-test: 
STEP: delete the pod
Feb  7 13:48:25.176: INFO: Waiting for pod pod-projected-secrets-97eb7dbd-4428-4e04-8db0-5ba7886ece0b to disappear
Feb  7 13:48:25.212: INFO: Pod pod-projected-secrets-97eb7dbd-4428-4e04-8db0-5ba7886ece0b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:48:25.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9659" for this suite.
Feb  7 13:48:31.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:48:31.364: INFO: namespace projected-9659 deletion completed in 6.143022391s

• [SLOW TEST:14.700 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
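
Here the secret is consumed through a projected volume rather than a plain secret volume; projected volumes can merge secrets, configmaps, and downward API data under one mount point. A minimal sketch with illustrative names:

kubectl create secret generic projected-demo-secret --from-literal=username=admin

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/projected/username"]
    volumeMounts:
    - name: projected-vol
      mountPath: /etc/projected
  volumes:
  - name: projected-vol
    projected:
      sources:
      - secret:
          name: projected-demo-secret
EOF
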
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:48:31.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  7 13:48:31.516: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2f013e29-1c91-4660-94f9-4e13552b31f8" in namespace "downward-api-7119" to be "success or failure"
Feb  7 13:48:31.535: INFO: Pod "downwardapi-volume-2f013e29-1c91-4660-94f9-4e13552b31f8": Phase="Pending", Reason="", readiness=false. Elapsed: 19.065298ms
Feb  7 13:48:33.542: INFO: Pod "downwardapi-volume-2f013e29-1c91-4660-94f9-4e13552b31f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026688257s
Feb  7 13:48:35.589: INFO: Pod "downwardapi-volume-2f013e29-1c91-4660-94f9-4e13552b31f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073530328s
Feb  7 13:48:37.612: INFO: Pod "downwardapi-volume-2f013e29-1c91-4660-94f9-4e13552b31f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096240412s
Feb  7 13:48:39.623: INFO: Pod "downwardapi-volume-2f013e29-1c91-4660-94f9-4e13552b31f8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.107308538s
Feb  7 13:48:41.633: INFO: Pod "downwardapi-volume-2f013e29-1c91-4660-94f9-4e13552b31f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.11714005s
STEP: Saw pod success
Feb  7 13:48:41.633: INFO: Pod "downwardapi-volume-2f013e29-1c91-4660-94f9-4e13552b31f8" satisfied condition "success or failure"
Feb  7 13:48:41.639: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-2f013e29-1c91-4660-94f9-4e13552b31f8 container client-container: 
STEP: delete the pod
Feb  7 13:48:41.809: INFO: Waiting for pod downwardapi-volume-2f013e29-1c91-4660-94f9-4e13552b31f8 to disappear
Feb  7 13:48:41.828: INFO: Pod downwardapi-volume-2f013e29-1c91-4660-94f9-4e13552b31f8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:48:41.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7119" for this suite.
Feb  7 13:48:47.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:48:48.025: INFO: namespace downward-api-7119 deletion completed in 6.188864506s

• [SLOW TEST:16.660 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
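
The memory-request variant differs from the CPU case only in the resource field and divisor, and this test family uses a plain (non-projected) downward API volume. In the sketch below the 32Mi request and 1Mi divisor are assumptions; they would read back as "32":

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-memory-request-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
          divisor: 1Mi   # expose the request in mebibytes
EOF
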
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:48:48.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  7 13:48:48.178: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4124006d-a061-41d3-9689-578ffd217af7" in namespace "projected-3993" to be "success or failure"
Feb  7 13:48:48.195: INFO: Pod "downwardapi-volume-4124006d-a061-41d3-9689-578ffd217af7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.854448ms
Feb  7 13:48:50.317: INFO: Pod "downwardapi-volume-4124006d-a061-41d3-9689-578ffd217af7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139299365s
Feb  7 13:48:52.323: INFO: Pod "downwardapi-volume-4124006d-a061-41d3-9689-578ffd217af7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.145173285s
Feb  7 13:48:54.341: INFO: Pod "downwardapi-volume-4124006d-a061-41d3-9689-578ffd217af7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.162832966s
Feb  7 13:48:56.346: INFO: Pod "downwardapi-volume-4124006d-a061-41d3-9689-578ffd217af7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.167330209s
Feb  7 13:48:58.360: INFO: Pod "downwardapi-volume-4124006d-a061-41d3-9689-578ffd217af7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.181546336s
STEP: Saw pod success
Feb  7 13:48:58.360: INFO: Pod "downwardapi-volume-4124006d-a061-41d3-9689-578ffd217af7" satisfied condition "success or failure"
Feb  7 13:48:58.369: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-4124006d-a061-41d3-9689-578ffd217af7 container client-container: 
STEP: delete the pod
Feb  7 13:48:58.583: INFO: Waiting for pod downwardapi-volume-4124006d-a061-41d3-9689-578ffd217af7 to disappear
Feb  7 13:48:58.587: INFO: Pod downwardapi-volume-4124006d-a061-41d3-9689-578ffd217af7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:48:58.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3993" for this suite.
Feb  7 13:49:04.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:49:04.728: INFO: namespace projected-3993 deletion completed in 6.1365232s

• [SLOW TEST:16.703 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
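
For the CPU-limit case the suite again reads from a projected downward API volume, with resource: limits.cpu in place of requests.cpu. The same value can also be surfaced as an environment variable, which is a shorter way to check it by hand; the pod name and 500m limit below are assumptions, not taken from the test:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cpu-limit-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo cpu_limit_millicores=$CPU_LIMIT"]
    resources:
      limits:
        cpu: 500m
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m
EOF

# The limit shows up in the container log once the pod has run.
kubectl logs cpu-limit-env-demo
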
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:49:04.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-6360/secret-test-58721ef5-533f-4042-adde-632fc71da006
STEP: Creating a pod to test consume secrets
Feb  7 13:49:04.974: INFO: Waiting up to 5m0s for pod "pod-configmaps-98bea40c-a727-4657-8ed3-6374da08b1df" in namespace "secrets-6360" to be "success or failure"
Feb  7 13:49:04.985: INFO: Pod "pod-configmaps-98bea40c-a727-4657-8ed3-6374da08b1df": Phase="Pending", Reason="", readiness=false. Elapsed: 11.533974ms
Feb  7 13:49:06.993: INFO: Pod "pod-configmaps-98bea40c-a727-4657-8ed3-6374da08b1df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018902461s
Feb  7 13:49:08.998: INFO: Pod "pod-configmaps-98bea40c-a727-4657-8ed3-6374da08b1df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024390377s
Feb  7 13:49:11.287: INFO: Pod "pod-configmaps-98bea40c-a727-4657-8ed3-6374da08b1df": Phase="Pending", Reason="", readiness=false. Elapsed: 6.313484322s
Feb  7 13:49:13.295: INFO: Pod "pod-configmaps-98bea40c-a727-4657-8ed3-6374da08b1df": Phase="Pending", Reason="", readiness=false. Elapsed: 8.321133277s
Feb  7 13:49:15.303: INFO: Pod "pod-configmaps-98bea40c-a727-4657-8ed3-6374da08b1df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.32918295s
STEP: Saw pod success
Feb  7 13:49:15.303: INFO: Pod "pod-configmaps-98bea40c-a727-4657-8ed3-6374da08b1df" satisfied condition "success or failure"
Feb  7 13:49:15.307: INFO: Trying to get logs from node iruya-node pod pod-configmaps-98bea40c-a727-4657-8ed3-6374da08b1df container env-test: 
STEP: delete the pod
Feb  7 13:49:15.379: INFO: Waiting for pod pod-configmaps-98bea40c-a727-4657-8ed3-6374da08b1df to disappear
Feb  7 13:49:15.385: INFO: Pod pod-configmaps-98bea40c-a727-4657-8ed3-6374da08b1df no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:49:15.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6360" for this suite.
Feb  7 13:49:21.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:49:21.559: INFO: namespace secrets-6360 deletion completed in 6.166747155s

• [SLOW TEST:16.830 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
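
This test injects a secret as an environment variable instead of mounting it as a volume. A minimal equivalent, with illustrative secret, key, and variable names:

kubectl create secret generic env-demo-secret --from-literal=data-1=value-1

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: env-demo-secret
          key: data-1
EOF
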
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:49:21.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  7 13:49:21.740: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb  7 13:49:26.749: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  7 13:49:28.765: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb  7 13:49:30.774: INFO: Creating deployment "test-rollover-deployment"
Feb  7 13:49:30.797: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb  7 13:49:32.812: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb  7 13:49:32.825: INFO: Ensure that both replica sets have 1 created replica
Feb  7 13:49:32.835: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb  7 13:49:32.859: INFO: Updating deployment test-rollover-deployment
Feb  7 13:49:32.859: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Feb  7 13:49:34.905: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb  7 13:49:34.911: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb  7 13:49:34.916: INFO: all replica sets need to contain the pod-template-hash label
Feb  7 13:49:34.916: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680172, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680172, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680173, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680171, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 13:49:36.937: INFO: all replica sets need to contain the pod-template-hash label
Feb  7 13:49:36.937: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680172, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680172, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680173, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680171, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 13:49:38.929: INFO: all replica sets need to contain the pod-template-hash label
Feb  7 13:49:38.929: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680172, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680172, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680173, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680171, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 13:49:40.950: INFO: all replica sets need to contain the pod-template-hash label
Feb  7 13:49:40.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680172, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680172, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680173, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680171, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 13:49:42.949: INFO: all replica sets need to contain the pod-template-hash label
Feb  7 13:49:42.949: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680172, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680172, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680173, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680171, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 13:49:44.932: INFO: all replica sets need to contain the pod-template-hash label
Feb  7 13:49:44.932: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680172, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680172, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680182, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680171, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 13:49:46.935: INFO: all replica sets need to contain the pod-template-hash label
Feb  7 13:49:46.935: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680172, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680172, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680182, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680171, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 13:49:48.926: INFO: all replica sets need to contain the pod-template-hash label
Feb  7 13:49:48.926: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680172, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680172, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680182, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680171, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 13:49:50.925: INFO: all replica sets need to contain the pod-template-hash label
Feb  7 13:49:50.925: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680172, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680172, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680182, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680171, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 13:49:52.929: INFO: all replica sets need to contain the pod-template-hash label
Feb  7 13:49:52.929: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680172, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680172, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680182, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716680171, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 13:49:54.949: INFO: 
Feb  7 13:49:54.949: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb  7 13:49:54.967: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-8609,SelfLink:/apis/apps/v1/namespaces/deployment-8609/deployments/test-rollover-deployment,UID:f0709d24-b9ab-42d6-8d3f-baa0276fc6c6,ResourceVersion:23448479,Generation:2,CreationTimestamp:2020-02-07 13:49:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-07 13:49:32 +0000 UTC 2020-02-07 13:49:32 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-07 13:49:53 +0000 UTC 2020-02-07 13:49:31 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb  7 13:49:54.973: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-8609,SelfLink:/apis/apps/v1/namespaces/deployment-8609/replicasets/test-rollover-deployment-854595fc44,UID:b924cda3-4ba1-4643-b98a-4195f5509d3c,ResourceVersion:23448469,Generation:2,CreationTimestamp:2020-02-07 13:49:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f0709d24-b9ab-42d6-8d3f-baa0276fc6c6 0xc0028d3177 0xc0028d3178}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb  7 13:49:54.973: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Feb  7 13:49:54.973: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-8609,SelfLink:/apis/apps/v1/namespaces/deployment-8609/replicasets/test-rollover-controller,UID:881674fb-5093-477b-907f-030814a5ba1c,ResourceVersion:23448478,Generation:2,CreationTimestamp:2020-02-07 13:49:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f0709d24-b9ab-42d6-8d3f-baa0276fc6c6 0xc0028d30a7 0xc0028d30a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  7 13:49:54.973: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-8609,SelfLink:/apis/apps/v1/namespaces/deployment-8609/replicasets/test-rollover-deployment-9b8b997cf,UID:3647f37c-2df6-4ec8-8bd1-0c0fa684ee12,ResourceVersion:23448431,Generation:2,CreationTimestamp:2020-02-07 13:49:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f0709d24-b9ab-42d6-8d3f-baa0276fc6c6 0xc0028d3240 0xc0028d3241}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  7 13:49:54.980: INFO: Pod "test-rollover-deployment-854595fc44-6rrrq" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-6rrrq,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-8609,SelfLink:/api/v1/namespaces/deployment-8609/pods/test-rollover-deployment-854595fc44-6rrrq,UID:0f7e57ed-02d5-47ba-80e9-a02f69ed5066,ResourceVersion:23448453,Generation:0,CreationTimestamp:2020-02-07 13:49:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 b924cda3-4ba1-4643-b98a-4195f5509d3c 0xc0031ce2d7 0xc0031ce2d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j8b4b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j8b4b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-j8b4b true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0031ce350} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0031ce370}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:49:33 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:49:42 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:49:42 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:49:33 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-07 13:49:33 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-07 13:49:42 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://388c83cb8a6a806246b38468ed4f012af561b81c2cca873c62387efd2f6c97a6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:49:54.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8609" for this suite.
Feb  7 13:50:01.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:50:01.140: INFO: namespace deployment-8609 deletion completed in 6.154596971s

• [SLOW TEST:39.581 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
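
The rollover test starts from a bare replica set, adopts it under a deployment whose strategy (visible in the struct dump above) is maxUnavailable: 0, maxSurge: 1 with minReadySeconds: 10, then changes the pod template mid-rollout and waits until only the newest replica set owns replicas. The same mechanics can be tried with an ordinary deployment; the name and redis images below are illustrative, not the test's fixtures:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rollover-demo
spec:
  replicas: 1
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never drop below desired replicas
      maxSurge: 1         # bring the replacement up first
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: redis
        image: docker.io/library/redis:5.0-alpine
EOF

# Update the template mid-rollout, then watch old replica sets drain to zero.
kubectl set image deployment/rollover-demo redis=docker.io/library/redis:6.0-alpine
kubectl rollout status deployment/rollover-demo
kubectl get rs -l name=rollover-pod
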
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:50:01.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb  7 13:50:01.466: INFO: Number of nodes with available pods: 0
Feb  7 13:50:01.466: INFO: Node iruya-node is running more than one daemon pod
Feb  7 13:50:02.488: INFO: Number of nodes with available pods: 0
Feb  7 13:50:02.488: INFO: Node iruya-node is running more than one daemon pod
Feb  7 13:50:03.572: INFO: Number of nodes with available pods: 0
Feb  7 13:50:03.572: INFO: Node iruya-node is running more than one daemon pod
Feb  7 13:50:04.489: INFO: Number of nodes with available pods: 0
Feb  7 13:50:04.489: INFO: Node iruya-node is running more than one daemon pod
Feb  7 13:50:05.489: INFO: Number of nodes with available pods: 0
Feb  7 13:50:05.489: INFO: Node iruya-node is running more than one daemon pod
Feb  7 13:50:06.484: INFO: Number of nodes with available pods: 0
Feb  7 13:50:06.484: INFO: Node iruya-node is running more than one daemon pod
Feb  7 13:50:07.684: INFO: Number of nodes with available pods: 0
Feb  7 13:50:07.684: INFO: Node iruya-node is running more than one daemon pod
Feb  7 13:50:08.489: INFO: Number of nodes with available pods: 0
Feb  7 13:50:08.489: INFO: Node iruya-node is running more than one daemon pod
Feb  7 13:50:09.901: INFO: Number of nodes with available pods: 0
Feb  7 13:50:09.901: INFO: Node iruya-node is running more than one daemon pod
Feb  7 13:50:10.482: INFO: Number of nodes with available pods: 0
Feb  7 13:50:10.482: INFO: Node iruya-node is running more than one daemon pod
Feb  7 13:50:11.484: INFO: Number of nodes with available pods: 1
Feb  7 13:50:11.484: INFO: Node iruya-node is running more than one daemon pod
Feb  7 13:50:12.489: INFO: Number of nodes with available pods: 1
Feb  7 13:50:12.489: INFO: Node iruya-node is running more than one daemon pod
Feb  7 13:50:13.485: INFO: Number of nodes with available pods: 2
Feb  7 13:50:13.485: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb  7 13:50:13.551: INFO: Number of nodes with available pods: 2
Feb  7 13:50:13.551: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5619, will wait for the garbage collector to delete the pods
Feb  7 13:50:14.708: INFO: Deleting DaemonSet.extensions daemon-set took: 11.017247ms
Feb  7 13:50:15.109: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.331833ms
Feb  7 13:50:27.916: INFO: Number of nodes with available pods: 0
Feb  7 13:50:27.916: INFO: Number of running nodes: 0, number of available pods: 0
Feb  7 13:50:27.921: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5619/daemonsets","resourceVersion":"23448608"},"items":null}

Feb  7 13:50:27.924: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5619/pods","resourceVersion":"23448608"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:50:27.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5619" for this suite.
Feb  7 13:50:33.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:50:34.097: INFO: namespace daemonsets-5619 deletion completed in 6.156510716s

• [SLOW TEST:32.956 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
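
The retry behavior checked here follows from the DaemonSet controller's reconcile loop: a daemon pod that fails or disappears is simply recreated on its node. A self-contained sketch, assuming a reachable cluster (object names are illustrative):

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels: {app: daemon-set}
  template:
    metadata:
      labels: {app: daemon-set}
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
# Delete the daemon pods and watch the controller revive one per node.
kubectl delete pod -l app=daemon-set
kubectl get pods -l app=daemon-set -o wide -w
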
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:50:34.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0207 13:50:44.438774       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  7 13:50:44.438: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:50:44.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2376" for this suite.
Feb  7 13:50:51.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:50:51.898: INFO: namespace gc-2376 deletion completed in 7.454694967s

• [SLOW TEST:17.799 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
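
What the garbage collector test verifies is ownership-based cascading deletion: pods created by a ReplicationController carry an ownerReference to it, so deleting the RC without orphaning lets the GC remove the pods too. A sketch with illustrative names:

kubectl apply -f - <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: test-rc
spec:
  replicas: 2
  selector: {name: test-rc}
  template:
    metadata:
      labels: {name: test-rc}
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
# Default delete cascades: the GC deletes the owned pods as well.
# (kubectl of this era would orphan them with --cascade=false instead.)
kubectl delete rc test-rc
kubectl get pods -l name=test-rc   # should drain to nothing
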
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:50:51.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-348ec18d-5c83-4b4c-b423-b64e291ccb73 in namespace container-probe-7116
Feb  7 13:51:02.001: INFO: Started pod test-webserver-348ec18d-5c83-4b4c-b423-b64e291ccb73 in namespace container-probe-7116
STEP: checking the pod's current state and verifying that restartCount is present
Feb  7 13:51:02.008: INFO: Initial restart count of pod test-webserver-348ec18d-5c83-4b4c-b423-b64e291ccb73 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:55:02.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7116" for this suite.
Feb  7 13:55:09.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:55:09.180: INFO: namespace container-probe-7116 deletion completed in 6.169522982s

• [SLOW TEST:257.281 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
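
The invariant this probe test asserts is that a container whose HTTP liveness endpoint keeps returning 2xx is never restarted, i.e. restartCount stays at 0 for the whole observation window. The e2e pod probes a /healthz-style path on its own webserver image; the sketch below is a self-contained equivalent using nginx and / as the healthy path (not the test's manifest):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver
spec:
  containers:
  - name: test-webserver
    image: docker.io/library/nginx:1.14-alpine
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15
      failureThreshold: 3
EOF
# After a few minutes the restart count should still read 0:
kubectl get pod test-webserver -o jsonpath='{.status.containerStatuses[0].restartCount}'
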
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:55:09.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  7 13:55:09.234: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:55:09.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2910" for this suite.
Feb  7 13:55:16.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:55:16.261: INFO: namespace custom-resource-definition-2910 deletion completed in 6.263940804s

• [SLOW TEST:7.080 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
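
Creating and deleting a CRD is a pure API round-trip: registering the definition makes a new resource type discoverable, and removing it deregisters the type. A minimal sketch against the apiextensions v1beta1 API served by this cluster's v1.15 apiserver (group and names are illustrative):

kubectl apply -f - <<EOF
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
EOF
kubectl get crd foos.example.com
kubectl delete crd foos.example.com
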
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:55:16.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-hwcb
STEP: Creating a pod to test atomic-volume-subpath
Feb  7 13:55:16.349: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-hwcb" in namespace "subpath-5887" to be "success or failure"
Feb  7 13:55:16.354: INFO: Pod "pod-subpath-test-configmap-hwcb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.659934ms
Feb  7 13:55:18.363: INFO: Pod "pod-subpath-test-configmap-hwcb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01401579s
Feb  7 13:55:20.375: INFO: Pod "pod-subpath-test-configmap-hwcb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026151049s
Feb  7 13:55:22.386: INFO: Pod "pod-subpath-test-configmap-hwcb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03670765s
Feb  7 13:55:24.394: INFO: Pod "pod-subpath-test-configmap-hwcb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045143703s
Feb  7 13:55:26.405: INFO: Pod "pod-subpath-test-configmap-hwcb": Phase="Running", Reason="", readiness=true. Elapsed: 10.055483942s
Feb  7 13:55:28.412: INFO: Pod "pod-subpath-test-configmap-hwcb": Phase="Running", Reason="", readiness=true. Elapsed: 12.062537643s
Feb  7 13:55:30.420: INFO: Pod "pod-subpath-test-configmap-hwcb": Phase="Running", Reason="", readiness=true. Elapsed: 14.070944528s
Feb  7 13:55:32.429: INFO: Pod "pod-subpath-test-configmap-hwcb": Phase="Running", Reason="", readiness=true. Elapsed: 16.079589358s
Feb  7 13:55:34.436: INFO: Pod "pod-subpath-test-configmap-hwcb": Phase="Running", Reason="", readiness=true. Elapsed: 18.086992617s
Feb  7 13:55:36.446: INFO: Pod "pod-subpath-test-configmap-hwcb": Phase="Running", Reason="", readiness=true. Elapsed: 20.096972554s
Feb  7 13:55:38.454: INFO: Pod "pod-subpath-test-configmap-hwcb": Phase="Running", Reason="", readiness=true. Elapsed: 22.105113323s
Feb  7 13:55:40.465: INFO: Pod "pod-subpath-test-configmap-hwcb": Phase="Running", Reason="", readiness=true. Elapsed: 24.115645566s
Feb  7 13:55:42.474: INFO: Pod "pod-subpath-test-configmap-hwcb": Phase="Running", Reason="", readiness=true. Elapsed: 26.124464596s
Feb  7 13:55:44.494: INFO: Pod "pod-subpath-test-configmap-hwcb": Phase="Running", Reason="", readiness=true. Elapsed: 28.144498948s
Feb  7 13:55:46.505: INFO: Pod "pod-subpath-test-configmap-hwcb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.155801392s
STEP: Saw pod success
Feb  7 13:55:46.505: INFO: Pod "pod-subpath-test-configmap-hwcb" satisfied condition "success or failure"
Feb  7 13:55:46.511: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-hwcb container test-container-subpath-configmap-hwcb: 
STEP: delete the pod
Feb  7 13:55:46.611: INFO: Waiting for pod pod-subpath-test-configmap-hwcb to disappear
Feb  7 13:55:46.617: INFO: Pod pod-subpath-test-configmap-hwcb no longer exists
STEP: Deleting pod pod-subpath-test-configmap-hwcb
Feb  7 13:55:46.618: INFO: Deleting pod "pod-subpath-test-configmap-hwcb" in namespace "subpath-5887"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:55:46.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5887" for this suite.
Feb  7 13:55:52.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:55:52.730: INFO: namespace subpath-5887 deletion completed in 6.104156273s

• [SLOW TEST:36.469 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
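
The atomic-writer subpath test mounts a single key of a ConfigMap as a file via subPath and verifies the container can read its content. A sketch of that mount shape (names illustrative):

kubectl create configmap my-config --from-literal=config-key=hello
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test
spec:
  restartPolicy: Never
  volumes:
  - name: config
    configMap:
      name: my-config
  containers:
  - name: test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/config-file"]
    volumeMounts:
    - name: config
      mountPath: /etc/config-file
      subPath: config-key    # mount just this key as a single file
EOF
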
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:55:52.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-9f125cda-285c-4780-a371-94f6729a0aa2
STEP: Creating a pod to test consume configMaps
Feb  7 13:55:52.863: INFO: Waiting up to 5m0s for pod "pod-configmaps-6ea93454-866d-4dc6-8deb-76db25258bb5" in namespace "configmap-323" to be "success or failure"
Feb  7 13:55:52.898: INFO: Pod "pod-configmaps-6ea93454-866d-4dc6-8deb-76db25258bb5": Phase="Pending", Reason="", readiness=false. Elapsed: 34.877893ms
Feb  7 13:55:54.911: INFO: Pod "pod-configmaps-6ea93454-866d-4dc6-8deb-76db25258bb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047806827s
Feb  7 13:55:56.926: INFO: Pod "pod-configmaps-6ea93454-866d-4dc6-8deb-76db25258bb5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062287865s
Feb  7 13:55:58.933: INFO: Pod "pod-configmaps-6ea93454-866d-4dc6-8deb-76db25258bb5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069637911s
Feb  7 13:56:00.959: INFO: Pod "pod-configmaps-6ea93454-866d-4dc6-8deb-76db25258bb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.095736408s
STEP: Saw pod success
Feb  7 13:56:00.959: INFO: Pod "pod-configmaps-6ea93454-866d-4dc6-8deb-76db25258bb5" satisfied condition "success or failure"
Feb  7 13:56:00.966: INFO: Trying to get logs from node iruya-node pod pod-configmaps-6ea93454-866d-4dc6-8deb-76db25258bb5 container configmap-volume-test: 
STEP: delete the pod
Feb  7 13:56:01.031: INFO: Waiting for pod pod-configmaps-6ea93454-866d-4dc6-8deb-76db25258bb5 to disappear
Feb  7 13:56:01.078: INFO: Pod pod-configmaps-6ea93454-866d-4dc6-8deb-76db25258bb5 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:56:01.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-323" for this suite.
Feb  7 13:56:07.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:56:07.277: INFO: namespace configmap-323 deletion completed in 6.147954477s

• [SLOW TEST:14.547 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
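
Here the configMap volume is consumed by a container running as a non-root UID, which works because the atomic writer projects files with defaultMode 420 (octal 0644, the same value visible in the pod dump near the top of this section), leaving them world-readable. Sketch, with an illustrative UID and names:

kubectl create configmap test-volume --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-nonroot
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000          # non-root
  volumes:
  - name: config
    configMap:
      name: test-volume
  containers:
  - name: test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "id && cat /etc/config/data-1"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
EOF
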
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:56:07.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-b6e6f3bf-e1d2-4fea-94e0-8d2c9f62b163
STEP: Creating a pod to test consume secrets
Feb  7 13:56:07.384: INFO: Waiting up to 5m0s for pod "pod-secrets-c0e8028d-7fd6-4fc1-a617-f5c4fb4ff062" in namespace "secrets-9911" to be "success or failure"
Feb  7 13:56:07.427: INFO: Pod "pod-secrets-c0e8028d-7fd6-4fc1-a617-f5c4fb4ff062": Phase="Pending", Reason="", readiness=false. Elapsed: 42.805053ms
Feb  7 13:56:09.437: INFO: Pod "pod-secrets-c0e8028d-7fd6-4fc1-a617-f5c4fb4ff062": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052466509s
Feb  7 13:56:11.445: INFO: Pod "pod-secrets-c0e8028d-7fd6-4fc1-a617-f5c4fb4ff062": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061083669s
Feb  7 13:56:13.501: INFO: Pod "pod-secrets-c0e8028d-7fd6-4fc1-a617-f5c4fb4ff062": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116618079s
Feb  7 13:56:15.509: INFO: Pod "pod-secrets-c0e8028d-7fd6-4fc1-a617-f5c4fb4ff062": Phase="Running", Reason="", readiness=true. Elapsed: 8.124259219s
Feb  7 13:56:17.523: INFO: Pod "pod-secrets-c0e8028d-7fd6-4fc1-a617-f5c4fb4ff062": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.138254068s
STEP: Saw pod success
Feb  7 13:56:17.523: INFO: Pod "pod-secrets-c0e8028d-7fd6-4fc1-a617-f5c4fb4ff062" satisfied condition "success or failure"
Feb  7 13:56:17.528: INFO: Trying to get logs from node iruya-node pod pod-secrets-c0e8028d-7fd6-4fc1-a617-f5c4fb4ff062 container secret-volume-test: 
STEP: delete the pod
Feb  7 13:56:17.601: INFO: Waiting for pod pod-secrets-c0e8028d-7fd6-4fc1-a617-f5c4fb4ff062 to disappear
Feb  7 13:56:17.613: INFO: Pod pod-secrets-c0e8028d-7fd6-4fc1-a617-f5c4fb4ff062 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:56:17.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9911" for this suite.
Feb  7 13:56:23.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:56:23.811: INFO: namespace secrets-9911 deletion completed in 6.190004669s

• [SLOW TEST:16.533 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
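
Secret volumes behave like ConfigMap volumes: each key becomes a tmpfs-backed file under the mount path. Sketch (names illustrative):

kubectl create secret generic secret-test --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
EOF
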
SS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:56:23.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  7 13:56:23.963: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 24.341781ms)
Feb  7 13:56:23.969: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.063587ms)
Feb  7 13:56:23.975: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.983764ms)
Feb  7 13:56:23.982: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.135291ms)
Feb  7 13:56:23.990: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.812787ms)
Feb  7 13:56:23.996: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.366387ms)
Feb  7 13:56:24.001: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.326715ms)
Feb  7 13:56:24.015: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 14.201305ms)
Feb  7 13:56:24.046: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 30.823218ms)
Feb  7 13:56:24.053: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.784538ms)
Feb  7 13:56:24.059: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.459587ms)
Feb  7 13:56:24.065: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.62973ms)
Feb  7 13:56:24.071: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.628808ms)
Feb  7 13:56:24.075: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.109716ms)
Feb  7 13:56:24.083: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 8.310256ms)
Feb  7 13:56:24.117: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 33.66656ms)
Feb  7 13:56:24.130: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 13.314824ms)
Feb  7 13:56:24.139: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 8.581446ms)
Feb  7 13:56:24.147: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 8.299243ms)
Feb  7 13:56:24.152: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.784245ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:56:24.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4408" for this suite.
Feb  7 13:56:30.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:56:30.327: INFO: namespace proxy-4408 deletion completed in 6.170547085s

• [SLOW TEST:6.516 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
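
Each of the twenty requests above exercises the node proxy subresource: the apiserver forwards the request to the kubelet's read-only /logs endpoint and streams the directory listing back. The same call by hand (node name taken from this cluster's log output):

kubectl get --raw /api/v1/nodes/iruya-node/proxy/logs/
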
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:56:30.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-469f1200-3295-4bfb-b6be-b73a41555e01
STEP: Creating a pod to test consume configMaps
Feb  7 13:56:30.445: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b9b3fd35-db73-4aa1-b01f-c1e4f017b2b5" in namespace "projected-3512" to be "success or failure"
Feb  7 13:56:30.544: INFO: Pod "pod-projected-configmaps-b9b3fd35-db73-4aa1-b01f-c1e4f017b2b5": Phase="Pending", Reason="", readiness=false. Elapsed: 99.071523ms
Feb  7 13:56:32.554: INFO: Pod "pod-projected-configmaps-b9b3fd35-db73-4aa1-b01f-c1e4f017b2b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109542994s
Feb  7 13:56:34.570: INFO: Pod "pod-projected-configmaps-b9b3fd35-db73-4aa1-b01f-c1e4f017b2b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124968053s
Feb  7 13:56:36.622: INFO: Pod "pod-projected-configmaps-b9b3fd35-db73-4aa1-b01f-c1e4f017b2b5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.176986884s
Feb  7 13:56:38.636: INFO: Pod "pod-projected-configmaps-b9b3fd35-db73-4aa1-b01f-c1e4f017b2b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.191712746s
STEP: Saw pod success
Feb  7 13:56:38.637: INFO: Pod "pod-projected-configmaps-b9b3fd35-db73-4aa1-b01f-c1e4f017b2b5" satisfied condition "success or failure"
Feb  7 13:56:38.641: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-b9b3fd35-db73-4aa1-b01f-c1e4f017b2b5 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  7 13:56:38.872: INFO: Waiting for pod pod-projected-configmaps-b9b3fd35-db73-4aa1-b01f-c1e4f017b2b5 to disappear
Feb  7 13:56:38.881: INFO: Pod pod-projected-configmaps-b9b3fd35-db73-4aa1-b01f-c1e4f017b2b5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:56:38.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3512" for this suite.
Feb  7 13:56:44.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:56:45.082: INFO: namespace projected-3512 deletion completed in 6.1973083s

• [SLOW TEST:14.755 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
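
A projected volume composes several sources (configMap, secret, downwardAPI, serviceAccountToken) into one directory; with a single configMap source, as here, it behaves like a plain configMap volume. Sketch of that shape (names illustrative):

kubectl create configmap projected-test --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmap
spec:
  restartPolicy: Never
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: projected-test
  containers:
  - name: test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/projected/data-1"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
EOF
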
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:56:45.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb  7 13:56:45.237: INFO: Waiting up to 5m0s for pod "pod-47accd56-9203-4a86-892a-f6e7c8524af5" in namespace "emptydir-7821" to be "success or failure"
Feb  7 13:56:45.241: INFO: Pod "pod-47accd56-9203-4a86-892a-f6e7c8524af5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.875503ms
Feb  7 13:56:47.249: INFO: Pod "pod-47accd56-9203-4a86-892a-f6e7c8524af5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011925384s
Feb  7 13:56:49.258: INFO: Pod "pod-47accd56-9203-4a86-892a-f6e7c8524af5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02149682s
Feb  7 13:56:51.270: INFO: Pod "pod-47accd56-9203-4a86-892a-f6e7c8524af5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03348822s
Feb  7 13:56:53.279: INFO: Pod "pod-47accd56-9203-4a86-892a-f6e7c8524af5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.042343962s
STEP: Saw pod success
Feb  7 13:56:53.279: INFO: Pod "pod-47accd56-9203-4a86-892a-f6e7c8524af5" satisfied condition "success or failure"
Feb  7 13:56:53.283: INFO: Trying to get logs from node iruya-node pod pod-47accd56-9203-4a86-892a-f6e7c8524af5 container test-container: 
STEP: delete the pod
Feb  7 13:56:53.351: INFO: Waiting for pod pod-47accd56-9203-4a86-892a-f6e7c8524af5 to disappear
Feb  7 13:56:53.363: INFO: Pod pod-47accd56-9203-4a86-892a-f6e7c8524af5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:56:53.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7821" for this suite.
Feb  7 13:56:59.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:56:59.567: INFO: namespace emptydir-7821 deletion completed in 6.194307588s

• [SLOW TEST:14.485 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
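
The (root,0644,tmpfs) triple in the test name encodes the case being checked: write as root, expect file mode 0644, on a memory-backed (tmpfs) emptyDir. Sketch (names illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs
spec:
  restartPolicy: Never
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory        # tmpfs-backed
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "mount | grep /test-volume && echo content > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
EOF
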
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:56:59.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-677fda65-e5e0-4701-8dff-0afca1d188e5
STEP: Creating a pod to test consume configMaps
Feb  7 13:56:59.701: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7f269da6-2f47-4d83-a278-e0d1ed660de0" in namespace "projected-375" to be "success or failure"
Feb  7 13:56:59.709: INFO: Pod "pod-projected-configmaps-7f269da6-2f47-4d83-a278-e0d1ed660de0": Phase="Pending", Reason="", readiness=false. Elapsed: 7.700409ms
Feb  7 13:57:01.718: INFO: Pod "pod-projected-configmaps-7f269da6-2f47-4d83-a278-e0d1ed660de0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017163066s
Feb  7 13:57:03.730: INFO: Pod "pod-projected-configmaps-7f269da6-2f47-4d83-a278-e0d1ed660de0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028729235s
Feb  7 13:57:05.747: INFO: Pod "pod-projected-configmaps-7f269da6-2f47-4d83-a278-e0d1ed660de0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045783382s
Feb  7 13:57:07.758: INFO: Pod "pod-projected-configmaps-7f269da6-2f47-4d83-a278-e0d1ed660de0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056908994s
Feb  7 13:57:09.769: INFO: Pod "pod-projected-configmaps-7f269da6-2f47-4d83-a278-e0d1ed660de0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068109867s
STEP: Saw pod success
Feb  7 13:57:09.769: INFO: Pod "pod-projected-configmaps-7f269da6-2f47-4d83-a278-e0d1ed660de0" satisfied condition "success or failure"
Feb  7 13:57:09.774: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-7f269da6-2f47-4d83-a278-e0d1ed660de0 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  7 13:57:09.830: INFO: Waiting for pod pod-projected-configmaps-7f269da6-2f47-4d83-a278-e0d1ed660de0 to disappear
Feb  7 13:57:09.843: INFO: Pod pod-projected-configmaps-7f269da6-2f47-4d83-a278-e0d1ed660de0 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:57:09.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-375" for this suite.
Feb  7 13:57:15.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:57:16.042: INFO: namespace projected-375 deletion completed in 6.191121056s

• [SLOW TEST:16.474 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
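
The "with mappings" variant adds an items list that remaps ConfigMap keys to custom file paths inside the mount, and only the mapped keys are projected. Swapping this volumes stanza (illustrative names) into the projected-volume sketch above yields files at the remapped paths:

volumes:
- name: projected-volume
  projected:
    sources:
    - configMap:
        name: projected-test
        items:
        - key: data-1
          path: path/to/data-1   # exposed as /etc/projected/path/to/data-1
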
SSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:57:16.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  7 13:57:16.114: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Feb  7 13:57:18.962: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 13:57:20.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4973" for this suite.
Feb  7 13:57:28.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:57:28.540: INFO: namespace replication-controller-4973 deletion completed in 8.375774402s

• [SLOW TEST:12.498 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
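
The failure condition being surfaced is status.conditions[type=ReplicaFailure] on the RC: when a ResourceQuota caps pods below spec.replicas, pod creation fails and the controller records the quota error there instead of retrying silently; scaling down to fit the quota clears it. Sketch (names illustrative):

kubectl create quota condition-test --hard=pods=2
kubectl apply -f - <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3                # one more than the quota allows
  selector: {name: condition-test}
  template:
    metadata:
      labels: {name: condition-test}
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
kubectl get rc condition-test -o jsonpath='{.status.conditions}'
# Scaling down to fit the quota clears the ReplicaFailure condition:
kubectl scale rc condition-test --replicas=2
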
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 13:57:28.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1558
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb  7 13:57:28.855: INFO: Found 0 stateful pods, waiting for 3
Feb  7 13:57:38.884: INFO: Found 2 stateful pods, waiting for 3
Feb  7 13:57:48.885: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 13:57:48.886: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 13:57:48.886: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  7 13:57:58.877: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 13:57:58.877: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 13:57:58.878: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb  7 13:57:58.930: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb  7 13:58:09.000: INFO: Updating stateful set ss2
Feb  7 13:58:09.110: INFO: Waiting for Pod statefulset-1558/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Feb  7 13:58:19.477: INFO: Found 2 stateful pods, waiting for 3
Feb  7 13:58:29.530: INFO: Found 2 stateful pods, waiting for 3
Feb  7 13:58:39.498: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 13:58:39.498: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 13:58:39.498: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb  7 13:58:39.566: INFO: Updating stateful set ss2
Feb  7 13:58:39.576: INFO: Waiting for Pod statefulset-1558/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  7 13:58:50.030: INFO: Updating stateful set ss2
Feb  7 13:58:50.093: INFO: Waiting for StatefulSet statefulset-1558/ss2 to complete update
Feb  7 13:58:50.093: INFO: Waiting for Pod statefulset-1558/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  7 13:59:00.105: INFO: Waiting for StatefulSet statefulset-1558/ss2 to complete update
Feb  7 13:59:00.105: INFO: Waiting for Pod statefulset-1558/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  7 13:59:10.104: INFO: Waiting for StatefulSet statefulset-1558/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  7 13:59:20.114: INFO: Deleting all statefulset in ns statefulset-1558
Feb  7 13:59:20.119: INFO: Scaling statefulset ss2 to 0
Feb  7 14:00:00.206: INFO: Waiting for statefulset status.replicas updated to 0
Feb  7 14:00:00.213: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:00:00.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1558" for this suite.
Feb  7 14:00:08.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:00:08.400: INFO: namespace statefulset-1558 deletion completed in 8.132800356s

• [SLOW TEST:159.860 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
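
Both the canary and the phased rollout above are driven by a single knob, spec.updateStrategy.rollingUpdate.partition: pods with an ordinal at or above the partition get the new revision, pods below it keep the old one, so lowering the partition step by step phases the update across the set. A sketch against a StatefulSet named ss2 as in this run, assuming its container is named nginx (patch payloads are illustrative):

# Canary: only the highest ordinal (ss2-2 in a 3-replica set) updates.
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
kubectl set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
# Phased rollout: walk the partition down to update the remaining pods.
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":1}}}}'
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
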
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:00:08.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb  7 14:00:08.554: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7187,SelfLink:/api/v1/namespaces/watch-7187/configmaps/e2e-watch-test-configmap-a,UID:43379a07-e823-443c-9c42-f1514b2a6078,ResourceVersion:23449960,Generation:0,CreationTimestamp:2020-02-07 14:00:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  7 14:00:08.555: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7187,SelfLink:/api/v1/namespaces/watch-7187/configmaps/e2e-watch-test-configmap-a,UID:43379a07-e823-443c-9c42-f1514b2a6078,ResourceVersion:23449960,Generation:0,CreationTimestamp:2020-02-07 14:00:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb  7 14:00:18.583: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7187,SelfLink:/api/v1/namespaces/watch-7187/configmaps/e2e-watch-test-configmap-a,UID:43379a07-e823-443c-9c42-f1514b2a6078,ResourceVersion:23449974,Generation:0,CreationTimestamp:2020-02-07 14:00:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb  7 14:00:18.583: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7187,SelfLink:/api/v1/namespaces/watch-7187/configmaps/e2e-watch-test-configmap-a,UID:43379a07-e823-443c-9c42-f1514b2a6078,ResourceVersion:23449974,Generation:0,CreationTimestamp:2020-02-07 14:00:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb  7 14:00:28.606: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7187,SelfLink:/api/v1/namespaces/watch-7187/configmaps/e2e-watch-test-configmap-a,UID:43379a07-e823-443c-9c42-f1514b2a6078,ResourceVersion:23449988,Generation:0,CreationTimestamp:2020-02-07 14:00:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  7 14:00:28.606: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7187,SelfLink:/api/v1/namespaces/watch-7187/configmaps/e2e-watch-test-configmap-a,UID:43379a07-e823-443c-9c42-f1514b2a6078,ResourceVersion:23449988,Generation:0,CreationTimestamp:2020-02-07 14:00:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb  7 14:00:38.623: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7187,SelfLink:/api/v1/namespaces/watch-7187/configmaps/e2e-watch-test-configmap-a,UID:43379a07-e823-443c-9c42-f1514b2a6078,ResourceVersion:23450004,Generation:0,CreationTimestamp:2020-02-07 14:00:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  7 14:00:38.623: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7187,SelfLink:/api/v1/namespaces/watch-7187/configmaps/e2e-watch-test-configmap-a,UID:43379a07-e823-443c-9c42-f1514b2a6078,ResourceVersion:23450004,Generation:0,CreationTimestamp:2020-02-07 14:00:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb  7 14:00:48.646: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7187,SelfLink:/api/v1/namespaces/watch-7187/configmaps/e2e-watch-test-configmap-b,UID:c0acf5e6-e5bb-42f9-81b0-958b919663de,ResourceVersion:23450018,Generation:0,CreationTimestamp:2020-02-07 14:00:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  7 14:00:48.646: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7187,SelfLink:/api/v1/namespaces/watch-7187/configmaps/e2e-watch-test-configmap-b,UID:c0acf5e6-e5bb-42f9-81b0-958b919663de,ResourceVersion:23450018,Generation:0,CreationTimestamp:2020-02-07 14:00:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb  7 14:00:58.668: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7187,SelfLink:/api/v1/namespaces/watch-7187/configmaps/e2e-watch-test-configmap-b,UID:c0acf5e6-e5bb-42f9-81b0-958b919663de,ResourceVersion:23450032,Generation:0,CreationTimestamp:2020-02-07 14:00:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  7 14:00:58.668: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7187,SelfLink:/api/v1/namespaces/watch-7187/configmaps/e2e-watch-test-configmap-b,UID:c0acf5e6-e5bb-42f9-81b0-958b919663de,ResourceVersion:23450032,Generation:0,CreationTimestamp:2020-02-07 14:00:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:01:08.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7187" for this suite.
Feb  7 14:01:14.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:01:14.825: INFO: namespace watch-7187 deletion completed in 6.14834375s

• [SLOW TEST:66.424 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
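
The watch semantics verified here: a watch opened with a label selector receives ADDED/MODIFIED/DELETED events only for objects matching that selector, and the A-or-B watch sees both streams. Reproducible by hand (label values follow the test's; note that kubectl -w prints row updates rather than raw event types):

# Terminal 1: open a watch filtered by label.
kubectl get configmaps -l watch-this-configmap=multiple-watchers-A -w
# Terminal 2: drive ADDED / MODIFIED / DELETED events.
kubectl create configmap watch-test-a
kubectl label configmap watch-test-a watch-this-configmap=multiple-watchers-A
kubectl patch configmap watch-test-a -p '{"data":{"mutation":"1"}}'
kubectl delete configmap watch-test-a
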
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:01:14.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Feb  7 14:01:14.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6425'
Feb  7 14:01:17.146: INFO: stderr: ""
Feb  7 14:01:17.146: INFO: stdout: "pod/pause created\n"
Feb  7 14:01:17.146: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb  7 14:01:17.146: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6425" to be "running and ready"
Feb  7 14:01:17.153: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.959129ms
Feb  7 14:01:19.169: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023088994s
Feb  7 14:01:21.175: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028975811s
Feb  7 14:01:23.180: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033511429s
Feb  7 14:01:25.189: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.042625047s
Feb  7 14:01:25.189: INFO: Pod "pause" satisfied condition "running and ready"
Feb  7 14:01:25.189: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Feb  7 14:01:25.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6425'
Feb  7 14:01:25.306: INFO: stderr: ""
Feb  7 14:01:25.307: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb  7 14:01:25.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6425'
Feb  7 14:01:25.461: INFO: stderr: ""
Feb  7 14:01:25.461: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb  7 14:01:25.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-6425'
Feb  7 14:01:25.542: INFO: stderr: ""
Feb  7 14:01:25.542: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb  7 14:01:25.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6425'
Feb  7 14:01:25.654: INFO: stderr: ""
Feb  7 14:01:25.654: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Feb  7 14:01:25.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6425'
Feb  7 14:01:25.812: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  7 14:01:25.812: INFO: stdout: "pod \"pause\" force deleted\n"
Feb  7 14:01:25.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6425'
Feb  7 14:01:25.951: INFO: stderr: "No resources found.\n"
Feb  7 14:01:25.951: INFO: stdout: ""
Feb  7 14:01:25.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6425 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  7 14:01:26.111: INFO: stderr: ""
Feb  7 14:01:26.111: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:01:26.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6425" for this suite.
Feb  7 14:01:32.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:01:32.292: INFO: namespace kubectl-6425 deletion completed in 6.169613774s

• [SLOW TEST:17.466 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
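
The label test drives three kubectl invocations that can be replayed against any running pod; these are the exact commands from the log (namespace kubectl-6425 existed only for this run):

# add the label
kubectl label pods pause testing-label=testing-label-value -n kubectl-6425
# show it as an extra column (-L) while verifying
kubectl get pod pause -L testing-label -n kubectl-6425
# remove it again; the trailing dash deletes the label
kubectl label pods pause testing-label- -n kubectl-6425
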
SS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:01:32.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  7 14:01:58.404: INFO: Container started at 2020-02-07 14:01:38 +0000 UTC, pod became ready at 2020-02-07 14:01:57 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:01:58.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2421" for this suite.
Feb  7 14:02:20.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:02:20.618: INFO: namespace container-probe-2421 deletion completed in 22.206218636s

• [SLOW TEST:48.326 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
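
The probe test asserts the two timestamps in the log: the container started at 14:01:38 but the pod only became ready at 14:01:57, so readiness was withheld for roughly the probe's initial delay, and the pod never restarted. The actual probe spec is not echoed in the log; below is a minimal sketch of a pod that behaves this way (image, delay, and period values are illustrative assumptions, not the test's exact spec), piped into kubectl the same way the suite feeds manifests to create -f -:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo
spec:
  containers:
  - name: nginx
    image: nginx
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 20   # pod reports NotReady for this long after start
      periodSeconds: 5
EOF
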
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:02:20.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6698
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-6698
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-6698
Feb  7 14:02:20.775: INFO: Found 0 stateful pods, waiting for 1
Feb  7 14:02:30.787: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb  7 14:02:30.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6698 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  7 14:02:31.428: INFO: stderr: "I0207 14:02:31.003038    1070 log.go:172] (0xc0009944d0) (0xc00097ea00) Create stream\nI0207 14:02:31.003105    1070 log.go:172] (0xc0009944d0) (0xc00097ea00) Stream added, broadcasting: 1\nI0207 14:02:31.013432    1070 log.go:172] (0xc0009944d0) Reply frame received for 1\nI0207 14:02:31.013471    1070 log.go:172] (0xc0009944d0) (0xc00097e000) Create stream\nI0207 14:02:31.013482    1070 log.go:172] (0xc0009944d0) (0xc00097e000) Stream added, broadcasting: 3\nI0207 14:02:31.015064    1070 log.go:172] (0xc0009944d0) Reply frame received for 3\nI0207 14:02:31.015081    1070 log.go:172] (0xc0009944d0) (0xc00097e0a0) Create stream\nI0207 14:02:31.015088    1070 log.go:172] (0xc0009944d0) (0xc00097e0a0) Stream added, broadcasting: 5\nI0207 14:02:31.017743    1070 log.go:172] (0xc0009944d0) Reply frame received for 5\nI0207 14:02:31.194731    1070 log.go:172] (0xc0009944d0) Data frame received for 5\nI0207 14:02:31.194804    1070 log.go:172] (0xc00097e0a0) (5) Data frame handling\nI0207 14:02:31.194831    1070 log.go:172] (0xc00097e0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0207 14:02:31.243407    1070 log.go:172] (0xc0009944d0) Data frame received for 3\nI0207 14:02:31.243493    1070 log.go:172] (0xc00097e000) (3) Data frame handling\nI0207 14:02:31.243508    1070 log.go:172] (0xc00097e000) (3) Data frame sent\nI0207 14:02:31.419773    1070 log.go:172] (0xc0009944d0) Data frame received for 1\nI0207 14:02:31.419878    1070 log.go:172] (0xc00097ea00) (1) Data frame handling\nI0207 14:02:31.419903    1070 log.go:172] (0xc00097ea00) (1) Data frame sent\nI0207 14:02:31.419919    1070 log.go:172] (0xc0009944d0) (0xc00097ea00) Stream removed, broadcasting: 1\nI0207 14:02:31.420488    1070 log.go:172] (0xc0009944d0) (0xc00097e000) Stream removed, broadcasting: 3\nI0207 14:02:31.420667    1070 log.go:172] (0xc0009944d0) (0xc00097e0a0) Stream removed, broadcasting: 5\nI0207 14:02:31.420743    1070 log.go:172] (0xc0009944d0) (0xc00097ea00) Stream removed, broadcasting: 1\nI0207 14:02:31.420795    1070 log.go:172] (0xc0009944d0) (0xc00097e000) Stream removed, broadcasting: 3\nI0207 14:02:31.420840    1070 log.go:172] (0xc0009944d0) (0xc00097e0a0) Stream removed, broadcasting: 5\nI0207 14:02:31.420973    1070 log.go:172] (0xc0009944d0) Go away received\n"
Feb  7 14:02:31.428: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  7 14:02:31.428: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
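
This mv is how the test makes a stateful pod "unhealthy" on demand: the pods serve /usr/share/nginx/html through what is evidently an HTTP readiness probe against nginx (the probe spec itself is not shown in the log), so moving index.html out of the web root flips the pod to Ready=false without killing it, and moving it back restores readiness; the trailing || true tolerates pods where the file was never staged. "Burst scaling" here presumably means the set runs with podManagementPolicy: Parallel, so scaling proceeds even while pods are unready. The same toggle by hand:

# break readiness on ss-0
kubectl exec -n statefulset-6698 ss-0 -- \
    /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
# restore it
kubectl exec -n statefulset-6698 ss-0 -- \
    /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'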

Feb  7 14:02:31.439: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb  7 14:02:41.453: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  7 14:02:41.453: INFO: Waiting for statefulset status.replicas updated to 0
Feb  7 14:02:41.477: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb  7 14:02:41.477: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:20 +0000 UTC  }]
Feb  7 14:02:41.477: INFO: 
Feb  7 14:02:41.477: INFO: StatefulSet ss has not reached scale 3, at 1
Feb  7 14:02:42.495: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993854364s
Feb  7 14:02:43.534: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.975743042s
Feb  7 14:02:44.564: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.936926734s
Feb  7 14:02:45.575: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.907429528s
Feb  7 14:02:46.586: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.89587046s
Feb  7 14:02:49.269: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.884975352s
Feb  7 14:02:50.277: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.202444637s
Feb  7 14:02:51.289: INFO: Verifying statefulset ss doesn't scale past 3 for another 194.45009ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace statefulset-6698
Feb  7 14:02:52.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6698 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 14:02:52.990: INFO: stderr: "I0207 14:02:52.467598    1088 log.go:172] (0xc00084a420) (0xc000380820) Create stream\nI0207 14:02:52.467757    1088 log.go:172] (0xc00084a420) (0xc000380820) Stream added, broadcasting: 1\nI0207 14:02:52.475337    1088 log.go:172] (0xc00084a420) Reply frame received for 1\nI0207 14:02:52.475413    1088 log.go:172] (0xc00084a420) (0xc000826000) Create stream\nI0207 14:02:52.475438    1088 log.go:172] (0xc00084a420) (0xc000826000) Stream added, broadcasting: 3\nI0207 14:02:52.478141    1088 log.go:172] (0xc00084a420) Reply frame received for 3\nI0207 14:02:52.478160    1088 log.go:172] (0xc00084a420) (0xc0008260a0) Create stream\nI0207 14:02:52.478167    1088 log.go:172] (0xc00084a420) (0xc0008260a0) Stream added, broadcasting: 5\nI0207 14:02:52.479651    1088 log.go:172] (0xc00084a420) Reply frame received for 5\nI0207 14:02:52.744539    1088 log.go:172] (0xc00084a420) Data frame received for 3\nI0207 14:02:52.744875    1088 log.go:172] (0xc00084a420) Data frame received for 5\nI0207 14:02:52.744949    1088 log.go:172] (0xc0008260a0) (5) Data frame handling\nI0207 14:02:52.744973    1088 log.go:172] (0xc0008260a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0207 14:02:52.745148    1088 log.go:172] (0xc000826000) (3) Data frame handling\nI0207 14:02:52.745170    1088 log.go:172] (0xc000826000) (3) Data frame sent\nI0207 14:02:52.981060    1088 log.go:172] (0xc00084a420) (0xc000826000) Stream removed, broadcasting: 3\nI0207 14:02:52.981180    1088 log.go:172] (0xc00084a420) Data frame received for 1\nI0207 14:02:52.981198    1088 log.go:172] (0xc000380820) (1) Data frame handling\nI0207 14:02:52.981210    1088 log.go:172] (0xc000380820) (1) Data frame sent\nI0207 14:02:52.981275    1088 log.go:172] (0xc00084a420) (0xc000380820) Stream removed, broadcasting: 1\nI0207 14:02:52.981307    1088 log.go:172] (0xc00084a420) (0xc0008260a0) Stream removed, broadcasting: 5\nI0207 14:02:52.981353    1088 log.go:172] (0xc00084a420) Go away received\nI0207 14:02:52.982809    1088 log.go:172] (0xc00084a420) (0xc000380820) Stream removed, broadcasting: 1\nI0207 14:02:52.982838    1088 log.go:172] (0xc00084a420) (0xc000826000) Stream removed, broadcasting: 3\nI0207 14:02:52.982855    1088 log.go:172] (0xc00084a420) (0xc0008260a0) Stream removed, broadcasting: 5\n"
Feb  7 14:02:52.991: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  7 14:02:52.991: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  7 14:02:52.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6698 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 14:02:53.453: INFO: stderr: "I0207 14:02:53.246864    1111 log.go:172] (0xc000338370) (0xc000810640) Create stream\nI0207 14:02:53.247014    1111 log.go:172] (0xc000338370) (0xc000810640) Stream added, broadcasting: 1\nI0207 14:02:53.250787    1111 log.go:172] (0xc000338370) Reply frame received for 1\nI0207 14:02:53.250820    1111 log.go:172] (0xc000338370) (0xc0007e6000) Create stream\nI0207 14:02:53.250831    1111 log.go:172] (0xc000338370) (0xc0007e6000) Stream added, broadcasting: 3\nI0207 14:02:53.251929    1111 log.go:172] (0xc000338370) Reply frame received for 3\nI0207 14:02:53.251954    1111 log.go:172] (0xc000338370) (0xc000598140) Create stream\nI0207 14:02:53.251962    1111 log.go:172] (0xc000338370) (0xc000598140) Stream added, broadcasting: 5\nI0207 14:02:53.253457    1111 log.go:172] (0xc000338370) Reply frame received for 5\nI0207 14:02:53.340380    1111 log.go:172] (0xc000338370) Data frame received for 5\nI0207 14:02:53.340453    1111 log.go:172] (0xc000598140) (5) Data frame handling\nI0207 14:02:53.340475    1111 log.go:172] (0xc000598140) (5) Data frame sent\nI0207 14:02:53.340484    1111 log.go:172] (0xc000338370) Data frame received for 5\nI0207 14:02:53.340491    1111 log.go:172] (0xc000598140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0207 14:02:53.340539    1111 log.go:172] (0xc000598140) (5) Data frame sent\nI0207 14:02:53.340553    1111 log.go:172] (0xc000338370) Data frame received for 3\nI0207 14:02:53.340568    1111 log.go:172] (0xc0007e6000) (3) Data frame handling\nI0207 14:02:53.340581    1111 log.go:172] (0xc0007e6000) (3) Data frame sent\nI0207 14:02:53.447373    1111 log.go:172] (0xc000338370) (0xc000598140) Stream removed, broadcasting: 5\nI0207 14:02:53.447428    1111 log.go:172] (0xc000338370) Data frame received for 1\nI0207 14:02:53.447437    1111 log.go:172] (0xc000810640) (1) Data frame handling\nI0207 14:02:53.447446    1111 log.go:172] (0xc000810640) (1) Data frame sent\nI0207 14:02:53.447464    1111 log.go:172] (0xc000338370) (0xc000810640) Stream removed, broadcasting: 1\nI0207 14:02:53.447806    1111 log.go:172] (0xc000338370) (0xc0007e6000) Stream removed, broadcasting: 3\nI0207 14:02:53.447836    1111 log.go:172] (0xc000338370) (0xc000810640) Stream removed, broadcasting: 1\nI0207 14:02:53.447845    1111 log.go:172] (0xc000338370) (0xc0007e6000) Stream removed, broadcasting: 3\nI0207 14:02:53.447853    1111 log.go:172] (0xc000338370) (0xc000598140) Stream removed, broadcasting: 5\n"
Feb  7 14:02:53.453: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  7 14:02:53.453: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  7 14:02:53.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6698 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 14:02:54.079: INFO: stderr: "I0207 14:02:53.593183    1131 log.go:172] (0xc000952370) (0xc0008825a0) Create stream\nI0207 14:02:53.593341    1131 log.go:172] (0xc000952370) (0xc0008825a0) Stream added, broadcasting: 1\nI0207 14:02:53.601447    1131 log.go:172] (0xc000952370) Reply frame received for 1\nI0207 14:02:53.601488    1131 log.go:172] (0xc000952370) (0xc0008826e0) Create stream\nI0207 14:02:53.601497    1131 log.go:172] (0xc000952370) (0xc0008826e0) Stream added, broadcasting: 3\nI0207 14:02:53.602755    1131 log.go:172] (0xc000952370) Reply frame received for 3\nI0207 14:02:53.602790    1131 log.go:172] (0xc000952370) (0xc00095c000) Create stream\nI0207 14:02:53.602800    1131 log.go:172] (0xc000952370) (0xc00095c000) Stream added, broadcasting: 5\nI0207 14:02:53.603928    1131 log.go:172] (0xc000952370) Reply frame received for 5\nI0207 14:02:53.784692    1131 log.go:172] (0xc000952370) Data frame received for 3\nI0207 14:02:53.784801    1131 log.go:172] (0xc0008826e0) (3) Data frame handling\nI0207 14:02:53.784844    1131 log.go:172] (0xc0008826e0) (3) Data frame sent\nI0207 14:02:53.784902    1131 log.go:172] (0xc000952370) Data frame received for 5\nI0207 14:02:53.784924    1131 log.go:172] (0xc00095c000) (5) Data frame handling\nI0207 14:02:53.784952    1131 log.go:172] (0xc00095c000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0207 14:02:54.067166    1131 log.go:172] (0xc000952370) Data frame received for 1\nI0207 14:02:54.067218    1131 log.go:172] (0xc0008825a0) (1) Data frame handling\nI0207 14:02:54.067242    1131 log.go:172] (0xc0008825a0) (1) Data frame sent\nI0207 14:02:54.067638    1131 log.go:172] (0xc000952370) (0xc0008825a0) Stream removed, broadcasting: 1\nI0207 14:02:54.067769    1131 log.go:172] (0xc000952370) (0xc0008826e0) Stream removed, broadcasting: 3\nI0207 14:02:54.067948    1131 log.go:172] (0xc000952370) (0xc00095c000) Stream removed, broadcasting: 5\nI0207 14:02:54.068230    1131 log.go:172] (0xc000952370) Go away received\nI0207 14:02:54.068248    1131 log.go:172] (0xc000952370) (0xc0008825a0) Stream removed, broadcasting: 1\nI0207 14:02:54.068261    1131 log.go:172] (0xc000952370) (0xc0008826e0) Stream removed, broadcasting: 3\nI0207 14:02:54.068278    1131 log.go:172] (0xc000952370) (0xc00095c000) Stream removed, broadcasting: 5\n"
Feb  7 14:02:54.080: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  7 14:02:54.080: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  7 14:02:54.092: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 14:02:54.092: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 14:02:54.092: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Feb  7 14:02:54.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6698 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  7 14:02:54.683: INFO: stderr: "I0207 14:02:54.355742    1151 log.go:172] (0xc0005e60b0) (0xc0005ca5a0) Create stream\nI0207 14:02:54.355835    1151 log.go:172] (0xc0005e60b0) (0xc0005ca5a0) Stream added, broadcasting: 1\nI0207 14:02:54.361994    1151 log.go:172] (0xc0005e60b0) Reply frame received for 1\nI0207 14:02:54.362062    1151 log.go:172] (0xc0005e60b0) (0xc0005ca640) Create stream\nI0207 14:02:54.362073    1151 log.go:172] (0xc0005e60b0) (0xc0005ca640) Stream added, broadcasting: 3\nI0207 14:02:54.367428    1151 log.go:172] (0xc0005e60b0) Reply frame received for 3\nI0207 14:02:54.367452    1151 log.go:172] (0xc0005e60b0) (0xc0005ca6e0) Create stream\nI0207 14:02:54.367457    1151 log.go:172] (0xc0005e60b0) (0xc0005ca6e0) Stream added, broadcasting: 5\nI0207 14:02:54.369515    1151 log.go:172] (0xc0005e60b0) Reply frame received for 5\nI0207 14:02:54.462725    1151 log.go:172] (0xc0005e60b0) Data frame received for 5\nI0207 14:02:54.462792    1151 log.go:172] (0xc0005ca6e0) (5) Data frame handling\nI0207 14:02:54.462809    1151 log.go:172] (0xc0005ca6e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0207 14:02:54.468827    1151 log.go:172] (0xc0005e60b0) Data frame received for 3\nI0207 14:02:54.468847    1151 log.go:172] (0xc0005ca640) (3) Data frame handling\nI0207 14:02:54.468863    1151 log.go:172] (0xc0005ca640) (3) Data frame sent\nI0207 14:02:54.672326    1151 log.go:172] (0xc0005e60b0) (0xc0005ca640) Stream removed, broadcasting: 3\nI0207 14:02:54.672732    1151 log.go:172] (0xc0005e60b0) Data frame received for 1\nI0207 14:02:54.672903    1151 log.go:172] (0xc0005e60b0) (0xc0005ca6e0) Stream removed, broadcasting: 5\nI0207 14:02:54.672976    1151 log.go:172] (0xc0005ca5a0) (1) Data frame handling\nI0207 14:02:54.672992    1151 log.go:172] (0xc0005ca5a0) (1) Data frame sent\nI0207 14:02:54.673009    1151 log.go:172] (0xc0005e60b0) (0xc0005ca5a0) Stream removed, broadcasting: 1\nI0207 14:02:54.673050    1151 log.go:172] (0xc0005e60b0) Go away received\nI0207 14:02:54.674160    1151 log.go:172] (0xc0005e60b0) (0xc0005ca5a0) Stream removed, broadcasting: 1\nI0207 14:02:54.674223    1151 log.go:172] (0xc0005e60b0) (0xc0005ca640) Stream removed, broadcasting: 3\nI0207 14:02:54.674244    1151 log.go:172] (0xc0005e60b0) (0xc0005ca6e0) Stream removed, broadcasting: 5\n"
Feb  7 14:02:54.683: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  7 14:02:54.683: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  7 14:02:54.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6698 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  7 14:02:55.096: INFO: stderr: "I0207 14:02:54.872844    1164 log.go:172] (0xc0008ae0b0) (0xc0004da6e0) Create stream\nI0207 14:02:54.873227    1164 log.go:172] (0xc0008ae0b0) (0xc0004da6e0) Stream added, broadcasting: 1\nI0207 14:02:54.887928    1164 log.go:172] (0xc0008ae0b0) Reply frame received for 1\nI0207 14:02:54.888189    1164 log.go:172] (0xc0008ae0b0) (0xc00061a280) Create stream\nI0207 14:02:54.888260    1164 log.go:172] (0xc0008ae0b0) (0xc00061a280) Stream added, broadcasting: 3\nI0207 14:02:54.893345    1164 log.go:172] (0xc0008ae0b0) Reply frame received for 3\nI0207 14:02:54.893412    1164 log.go:172] (0xc0008ae0b0) (0xc0003cc000) Create stream\nI0207 14:02:54.893423    1164 log.go:172] (0xc0008ae0b0) (0xc0003cc000) Stream added, broadcasting: 5\nI0207 14:02:54.895860    1164 log.go:172] (0xc0008ae0b0) Reply frame received for 5\nI0207 14:02:54.997285    1164 log.go:172] (0xc0008ae0b0) Data frame received for 5\nI0207 14:02:54.997327    1164 log.go:172] (0xc0003cc000) (5) Data frame handling\nI0207 14:02:54.997345    1164 log.go:172] (0xc0003cc000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0207 14:02:55.025998    1164 log.go:172] (0xc0008ae0b0) Data frame received for 3\nI0207 14:02:55.026019    1164 log.go:172] (0xc00061a280) (3) Data frame handling\nI0207 14:02:55.026031    1164 log.go:172] (0xc00061a280) (3) Data frame sent\nI0207 14:02:55.092015    1164 log.go:172] (0xc0008ae0b0) (0xc00061a280) Stream removed, broadcasting: 3\nI0207 14:02:55.092050    1164 log.go:172] (0xc0008ae0b0) Data frame received for 1\nI0207 14:02:55.092060    1164 log.go:172] (0xc0004da6e0) (1) Data frame handling\nI0207 14:02:55.092072    1164 log.go:172] (0xc0004da6e0) (1) Data frame sent\nI0207 14:02:55.092082    1164 log.go:172] (0xc0008ae0b0) (0xc0004da6e0) Stream removed, broadcasting: 1\nI0207 14:02:55.092333    1164 log.go:172] (0xc0008ae0b0) (0xc0003cc000) Stream removed, broadcasting: 5\nI0207 14:02:55.092366    1164 log.go:172] (0xc0008ae0b0) Go away received\nI0207 14:02:55.092445    1164 log.go:172] (0xc0008ae0b0) (0xc0004da6e0) Stream removed, broadcasting: 1\nI0207 14:02:55.092491    1164 log.go:172] (0xc0008ae0b0) (0xc00061a280) Stream removed, broadcasting: 3\nI0207 14:02:55.092526    1164 log.go:172] (0xc0008ae0b0) (0xc0003cc000) Stream removed, broadcasting: 5\n"
Feb  7 14:02:55.097: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  7 14:02:55.097: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  7 14:02:55.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6698 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  7 14:02:55.501: INFO: stderr: "I0207 14:02:55.221681    1183 log.go:172] (0xc00012cdc0) (0xc0003266e0) Create stream\nI0207 14:02:55.221737    1183 log.go:172] (0xc00012cdc0) (0xc0003266e0) Stream added, broadcasting: 1\nI0207 14:02:55.225988    1183 log.go:172] (0xc00012cdc0) Reply frame received for 1\nI0207 14:02:55.226048    1183 log.go:172] (0xc00012cdc0) (0xc0009c4000) Create stream\nI0207 14:02:55.226063    1183 log.go:172] (0xc00012cdc0) (0xc0009c4000) Stream added, broadcasting: 3\nI0207 14:02:55.226951    1183 log.go:172] (0xc00012cdc0) Reply frame received for 3\nI0207 14:02:55.226965    1183 log.go:172] (0xc00012cdc0) (0xc000326780) Create stream\nI0207 14:02:55.226970    1183 log.go:172] (0xc00012cdc0) (0xc000326780) Stream added, broadcasting: 5\nI0207 14:02:55.227944    1183 log.go:172] (0xc00012cdc0) Reply frame received for 5\nI0207 14:02:55.351921    1183 log.go:172] (0xc00012cdc0) Data frame received for 5\nI0207 14:02:55.351954    1183 log.go:172] (0xc000326780) (5) Data frame handling\nI0207 14:02:55.351970    1183 log.go:172] (0xc000326780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0207 14:02:55.397700    1183 log.go:172] (0xc00012cdc0) Data frame received for 3\nI0207 14:02:55.397768    1183 log.go:172] (0xc0009c4000) (3) Data frame handling\nI0207 14:02:55.397784    1183 log.go:172] (0xc0009c4000) (3) Data frame sent\nI0207 14:02:55.493161    1183 log.go:172] (0xc00012cdc0) Data frame received for 1\nI0207 14:02:55.493382    1183 log.go:172] (0xc00012cdc0) (0xc0009c4000) Stream removed, broadcasting: 3\nI0207 14:02:55.493426    1183 log.go:172] (0xc0003266e0) (1) Data frame handling\nI0207 14:02:55.493448    1183 log.go:172] (0xc0003266e0) (1) Data frame sent\nI0207 14:02:55.493504    1183 log.go:172] (0xc00012cdc0) (0xc000326780) Stream removed, broadcasting: 5\nI0207 14:02:55.493549    1183 log.go:172] (0xc00012cdc0) (0xc0003266e0) Stream removed, broadcasting: 1\nI0207 14:02:55.493565    1183 log.go:172] (0xc00012cdc0) Go away received\nI0207 14:02:55.494004    1183 log.go:172] (0xc00012cdc0) (0xc0003266e0) Stream removed, broadcasting: 1\nI0207 14:02:55.494019    1183 log.go:172] (0xc00012cdc0) (0xc0009c4000) Stream removed, broadcasting: 3\nI0207 14:02:55.494024    1183 log.go:172] (0xc00012cdc0) (0xc000326780) Stream removed, broadcasting: 5\n"
Feb  7 14:02:55.501: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  7 14:02:55.501: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  7 14:02:55.501: INFO: Waiting for statefulset status.replicas updated to 0
Feb  7 14:02:55.507: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Feb  7 14:03:05.551: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  7 14:03:05.551: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb  7 14:03:05.551: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb  7 14:03:05.581: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  7 14:03:05.581: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:20 +0000 UTC  }]
Feb  7 14:03:05.581: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:41 +0000 UTC  }]
Feb  7 14:03:05.581: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:41 +0000 UTC  }]
Feb  7 14:03:05.581: INFO: 
Feb  7 14:03:05.581: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  7 14:03:07.498: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  7 14:03:07.498: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:20 +0000 UTC  }]
Feb  7 14:03:07.498: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:41 +0000 UTC  }]
Feb  7 14:03:07.498: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:41 +0000 UTC  }]
Feb  7 14:03:07.498: INFO: 
Feb  7 14:03:07.498: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  7 14:03:08.518: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  7 14:03:08.518: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:20 +0000 UTC  }]
Feb  7 14:03:08.518: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:41 +0000 UTC  }]
Feb  7 14:03:08.518: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:41 +0000 UTC  }]
Feb  7 14:03:08.518: INFO: 
Feb  7 14:03:08.518: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  7 14:03:09.919: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  7 14:03:09.919: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:20 +0000 UTC  }]
Feb  7 14:03:09.919: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:41 +0000 UTC  }]
Feb  7 14:03:09.920: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:41 +0000 UTC  }]
Feb  7 14:03:09.920: INFO: 
Feb  7 14:03:09.920: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  7 14:03:10.930: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  7 14:03:10.930: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:20 +0000 UTC  }]
Feb  7 14:03:10.930: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:41 +0000 UTC  }]
Feb  7 14:03:10.930: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:41 +0000 UTC  }]
Feb  7 14:03:10.930: INFO: 
Feb  7 14:03:10.930: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  7 14:03:11.941: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  7 14:03:11.941: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:20 +0000 UTC  }]
Feb  7 14:03:11.941: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:41 +0000 UTC  }]
Feb  7 14:03:11.941: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:41 +0000 UTC  }]
Feb  7 14:03:11.941: INFO: 
Feb  7 14:03:11.941: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  7 14:03:12.952: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb  7 14:03:12.952: INFO: ss-0  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:20 +0000 UTC  }]
Feb  7 14:03:12.952: INFO: ss-2  iruya-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:41 +0000 UTC  }]
Feb  7 14:03:12.952: INFO: 
Feb  7 14:03:12.952: INFO: StatefulSet ss has not reached scale 0, at 2
Feb  7 14:03:13.963: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb  7 14:03:13.964: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:41 +0000 UTC  }]
Feb  7 14:03:13.964: INFO: 
Feb  7 14:03:13.964: INFO: StatefulSet ss has not reached scale 0, at 1
Feb  7 14:03:14.972: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb  7 14:03:14.972: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:02:41 +0000 UTC  }]
Feb  7 14:03:14.972: INFO: 
Feb  7 14:03:14.972: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-6698
Feb  7 14:03:15.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6698 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 14:03:16.218: INFO: rc: 1
Feb  7 14:03:16.219: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6698 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc00222d2c0 exit status 1   true [0xc0028a2518 0xc0028a2530 0xc0028a2548] [0xc0028a2518 0xc0028a2530 0xc0028a2548] [0xc0028a2528 0xc0028a2540] [0xba6c50 0xba6c50] 0xc002751260 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
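
From here the suite retries RunHostCmd every 10s while the scale-down to 0 proceeds; the failure mode shifts from "container not found" (the nginx container in ss-2 is terminating) to "pods \"ss-2\" not found" (the pod object itself is gone), which is consistent with the deletion running to completion rather than a cluster fault. Polling the StatefulSet status directly avoids exec-ing into a vanishing pod, e.g.:

kubectl get statefulset ss -n statefulset-6698 \
    -o jsonpath='{.status.replicas}'
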
Feb  7 14:03:26.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6698 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 14:03:26.343: INFO: rc: 1
Feb  7 14:03:26.343: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6698 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002dbf830 exit status 1   true [0xc000678348 0xc000678370 0xc000678398] [0xc000678348 0xc000678370 0xc000678398] [0xc000678368 0xc000678388] [0xba6c50 0xba6c50] 0xc002e35b00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb  7 14:03:36.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6698 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 14:03:36.526: INFO: rc: 1
Feb  7 14:03:36.526: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6698 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002dbf8f0 exit status 1   true [0xc0006783a0 0xc0006783b8 0xc0006783d0] [0xc0006783a0 0xc0006783b8 0xc0006783d0] [0xc0006783b0 0xc0006783c8] [0xba6c50 0xba6c50] 0xc002e35f80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb  7 14:03:46.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6698 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 14:03:46.664: INFO: rc: 1
Feb  7 14:03:46.664: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6698 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002302a50 exit status 1   true [0xc002dc8770 0xc002dc8788 0xc002dc87a0] [0xc002dc8770 0xc002dc8788 0xc002dc87a0] [0xc002dc8780 0xc002dc8798] [0xba6c50 0xba6c50] 0xc002c165a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb  7 14:03:56.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6698 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 14:03:56.800: INFO: rc: 1
Feb  7 14:03:56.800: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6698 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001f860f0 exit status 1   true [0xc0000eb088 0xc0000eb308 0xc0000eb740] [0xc0000eb088 0xc0000eb308 0xc0000eb740] [0xc0000eb1e8 0xc0000eb710] [0xba6c50 0xba6c50] 0xc002784480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb  7 14:04:06.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6698 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 14:04:06.959: INFO: rc: 1
Feb  7 14:04:06.959: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6698 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001a88090 exit status 1   true [0xc00052e498 0xc000ba61f8 0xc000ba6470] [0xc00052e498 0xc000ba61f8 0xc000ba6470] [0xc000ba6128 0xc000ba6458] [0xba6c50 0xba6c50] 0xc000ad1200 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb  7 14:04:16.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6698 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 14:04:17.378: INFO: rc: 1
Feb  7 14:04:17.378: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6698 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001a88180 exit status 1   true [0xc000ba64e0 0xc000ba65c0 0xc000ba6878] [0xc000ba64e0 0xc000ba65c0 0xc000ba6878] [0xc000ba6560 0xc000ba6840] [0xba6c50 0xba6c50] 0xc001ea7e60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb  7 14:04:27.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6698 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 14:04:27.507: INFO: rc: 1
Feb  7 14:04:27.507: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6698 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001a88270 exit status 1   true [0xc000ba6910 0xc000ba6ad0 0xc000ba6c98] [0xc000ba6910 0xc000ba6ad0 0xc000ba6c98] [0xc000ba6a68 0xc000ba6c30] [0xba6c50 0xba6c50] 0xc002e34ba0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb  7 14:04:37.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6698 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 14:04:37.674: INFO: rc: 1
Feb  7 14:04:37.674: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6698 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001f86210 exit status 1   true [0xc0000eb750 0xc0000eb790 0xc0000eb858] [0xc0000eb750 0xc0000eb790 0xc0000eb858] [0xc0000eb780 0xc0000eb800] [0xba6c50 0xba6c50] 0xc002784cc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb  7 14:04:47.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6698 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 14:04:47.837: INFO: rc: 1
Feb  7 14:04:47.838: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6698 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0028b8090 exit status 1   true [0xc0009a8058 0xc0009a83a0 0xc0009a8510] [0xc0009a8058 0xc0009a83a0 0xc0009a8510] [0xc0009a8330 0xc0009a84f0] [0xba6c50 0xba6c50] 0xc0017552c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb  7 14:04:57.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6698 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 14:04:58.023: INFO: rc: 1
Feb  7 14:04:58.023: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6698 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00279c0c0 exit status 1   true [0xc000678018 0xc000678058 0xc000678070] [0xc000678018 0xc000678058 0xc000678070] [0xc000678050 0xc000678068] [0xba6c50 0xba6c50] 0xc002502420 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb  7 14:05:08.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6698 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 14:05:08.379: INFO: rc: 1
Feb  7 14:05:08.380: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6698 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001f86300 exit status 1   true [0xc0000eb8c8 0xc0000eb938 0xc0000eba90] [0xc0000eb8c8 0xc0000eb938 0xc0000eba90] [0xc0000eb910 0xc0000eb9b0] [0xba6c50 0xba6c50] 0xc002785140 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
[... 18 further identical RunHostCmd retries, one every 10s from 14:05:18 through 14:08:11, each returning rc: 1 with stderr 'Error from server (NotFound): pods "ss-2" not found'; the repeated blocks differ from the one above only in timestamps and pointer addresses ...]
Feb  7 14:08:21.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6698 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 14:08:21.382: INFO: rc: 1
Feb  7 14:08:21.383: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
Feb  7 14:08:21.383: INFO: Scaling statefulset ss to 0
Feb  7 14:08:21.400: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  7 14:08:21.403: INFO: Deleting all statefulset in ns statefulset-6698
Feb  7 14:08:21.411: INFO: Scaling statefulset ss to 0
Feb  7 14:08:21.433: INFO: Waiting for statefulset status.replicas updated to 0
Feb  7 14:08:21.436: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:08:21.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6698" for this suite.
Feb  7 14:08:27.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:08:27.803: INFO: namespace statefulset-6698 deletion completed in 6.198730858s

• [SLOW TEST:367.184 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
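The three minutes of NotFound retries above are the framework's RunHostCmd retry loop: the same kubectl exec is reissued every 10 seconds until the pod reappears or the retry window expires. A minimal Go sketch of that pattern, assuming kubectl is on PATH; the function names and the 3-minute window are illustrative, not the framework's actual identifiers:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // runHostCmd shells out once, mirroring the kubectl exec invocation in the log.
    func runHostCmd(ns, pod, cmd string) (string, error) {
        out, err := exec.Command("kubectl", "exec", "--namespace="+ns, pod,
            "--", "/bin/sh", "-x", "-c", cmd).CombinedOutput()
        return string(out), err
    }

    // runHostCmdWithRetries reissues the command at a fixed interval until it
    // succeeds or the timeout elapses, like the 10s loop in the log above.
    func runHostCmdWithRetries(ns, pod, cmd string, interval, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for {
            out, err := runHostCmd(ns, pod, cmd)
            if err == nil {
                return out, nil
            }
            if time.Now().After(deadline) {
                return out, fmt.Errorf("giving up after %v: %v", timeout, err)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        out, err := runHostCmdWithRetries("statefulset-6698", "ss-2",
            "mv -v /tmp/index.html /usr/share/nginx/html/ || true",
            10*time.Second, 3*time.Minute)
        fmt.Println(out, err)
    }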
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:08:27.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb  7 14:08:36.559: INFO: Successfully updated pod "labelsupdate066a69f6-3c69-4dfa-8081-4035a07a4df9"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:08:40.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7152" for this suite.
Feb  7 14:09:02.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:09:02.845: INFO: namespace downward-api-7152 deletion completed in 22.127940712s

• [SLOW TEST:35.042 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
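The label-update test passes because the pod's labels are projected into a downwardAPI volume, and the kubelet rewrites the projected file when the metadata changes. A sketch of such a pod built from the client-go types; the pod name, image, and command are illustrative, the conformance test uses its own fixture:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // labelsPod projects metadata.labels into /etc/podinfo/labels; updating the
    // pod's labels makes the kubelet rewrite that file, which the test observes.
    func labelsPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:   "labelsupdate-demo", // illustrative name
                Labels: map[string]string{"key": "value1"},
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:         "client-container",
                    Image:        "busybox:1.29",
                    Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "labels",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                            }},
                        },
                    },
                }},
            },
        }
    }

    func main() {
        b, _ := json.MarshalIndent(labelsPod(), "", "  ")
        fmt.Println(string(b))
    }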
SSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:09:02.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-2983
I0207 14:09:02.967807       8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-2983, replica count: 1
I0207 14:09:04.018303       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 14:09:05.018611       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 14:09:06.018893       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 14:09:07.019173       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 14:09:08.019404       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 14:09:09.019726       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 14:09:10.020129       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  7 14:09:10.175: INFO: Created: latency-svc-hgldf
Feb  7 14:09:10.189: INFO: Got endpoints: latency-svc-hgldf [68.69527ms]
Feb  7 14:09:10.316: INFO: Created: latency-svc-z5drd
Feb  7 14:09:10.331: INFO: Got endpoints: latency-svc-z5drd [141.195032ms]
Feb  7 14:09:10.387: INFO: Created: latency-svc-4sw6t
Feb  7 14:09:10.399: INFO: Got endpoints: latency-svc-4sw6t [210.329578ms]
Feb  7 14:09:10.544: INFO: Created: latency-svc-gsrt8
Feb  7 14:09:10.558: INFO: Got endpoints: latency-svc-gsrt8 [368.475604ms]
Feb  7 14:09:10.596: INFO: Created: latency-svc-56lw8
Feb  7 14:09:10.613: INFO: Got endpoints: latency-svc-56lw8 [423.12191ms]
Feb  7 14:09:10.754: INFO: Created: latency-svc-f7t4f
Feb  7 14:09:10.788: INFO: Got endpoints: latency-svc-f7t4f [598.039212ms]
Feb  7 14:09:10.963: INFO: Created: latency-svc-jm4hc
Feb  7 14:09:10.969: INFO: Got endpoints: latency-svc-jm4hc [779.061133ms]
Feb  7 14:09:11.193: INFO: Created: latency-svc-h5pgj
Feb  7 14:09:11.194: INFO: Got endpoints: latency-svc-h5pgj [1.004385582s]
Feb  7 14:09:11.258: INFO: Created: latency-svc-l7jzt
Feb  7 14:09:11.354: INFO: Got endpoints: latency-svc-l7jzt [1.164548232s]
Feb  7 14:09:11.387: INFO: Created: latency-svc-jd2ms
Feb  7 14:09:11.392: INFO: Got endpoints: latency-svc-jd2ms [1.202822153s]
Feb  7 14:09:11.446: INFO: Created: latency-svc-rttnc
Feb  7 14:09:11.533: INFO: Got endpoints: latency-svc-rttnc [179.444344ms]
Feb  7 14:09:11.567: INFO: Created: latency-svc-qcm5m
Feb  7 14:09:11.613: INFO: Got endpoints: latency-svc-qcm5m [1.423564539s]
Feb  7 14:09:11.717: INFO: Created: latency-svc-6g5ff
Feb  7 14:09:11.730: INFO: Got endpoints: latency-svc-6g5ff [1.540819936s]
Feb  7 14:09:11.773: INFO: Created: latency-svc-ggpwj
Feb  7 14:09:11.817: INFO: Created: latency-svc-w2fc5
Feb  7 14:09:11.931: INFO: Got endpoints: latency-svc-ggpwj [1.741086524s]
Feb  7 14:09:11.960: INFO: Got endpoints: latency-svc-w2fc5 [1.769841413s]
Feb  7 14:09:11.969: INFO: Created: latency-svc-l4w69
Feb  7 14:09:11.977: INFO: Got endpoints: latency-svc-l4w69 [1.787004236s]
Feb  7 14:09:12.097: INFO: Created: latency-svc-r4qvq
Feb  7 14:09:12.109: INFO: Got endpoints: latency-svc-r4qvq [1.919154887s]
Feb  7 14:09:12.151: INFO: Created: latency-svc-5rh9q
Feb  7 14:09:12.169: INFO: Got endpoints: latency-svc-5rh9q [1.83745054s]
Feb  7 14:09:12.276: INFO: Created: latency-svc-fkpnn
Feb  7 14:09:12.276: INFO: Got endpoints: latency-svc-fkpnn [1.876719042s]
Feb  7 14:09:12.323: INFO: Created: latency-svc-jnl8k
Feb  7 14:09:12.334: INFO: Got endpoints: latency-svc-jnl8k [1.775941613s]
Feb  7 14:09:12.428: INFO: Created: latency-svc-xh5js
Feb  7 14:09:12.483: INFO: Got endpoints: latency-svc-xh5js [1.869386417s]
Feb  7 14:09:12.489: INFO: Created: latency-svc-wq28v
Feb  7 14:09:12.502: INFO: Got endpoints: latency-svc-wq28v [1.714472304s]
Feb  7 14:09:12.687: INFO: Created: latency-svc-rctd7
Feb  7 14:09:12.705: INFO: Got endpoints: latency-svc-rctd7 [1.736285072s]
Feb  7 14:09:12.821: INFO: Created: latency-svc-vjrz4
Feb  7 14:09:12.838: INFO: Got endpoints: latency-svc-vjrz4 [1.643434278s]
Feb  7 14:09:12.985: INFO: Created: latency-svc-hvvhz
Feb  7 14:09:13.004: INFO: Got endpoints: latency-svc-hvvhz [1.611880776s]
Feb  7 14:09:13.057: INFO: Created: latency-svc-x42cg
Feb  7 14:09:13.057: INFO: Got endpoints: latency-svc-x42cg [1.523974637s]
Feb  7 14:09:13.177: INFO: Created: latency-svc-rnxvq
Feb  7 14:09:13.187: INFO: Got endpoints: latency-svc-rnxvq [1.573198986s]
Feb  7 14:09:13.260: INFO: Created: latency-svc-9k8jg
Feb  7 14:09:13.344: INFO: Got endpoints: latency-svc-9k8jg [1.612961511s]
Feb  7 14:09:13.416: INFO: Created: latency-svc-54stz
Feb  7 14:09:13.424: INFO: Got endpoints: latency-svc-54stz [1.492780164s]
Feb  7 14:09:13.526: INFO: Created: latency-svc-qr272
Feb  7 14:09:13.535: INFO: Got endpoints: latency-svc-qr272 [1.574758523s]
Feb  7 14:09:13.580: INFO: Created: latency-svc-jlkc6
Feb  7 14:09:13.603: INFO: Got endpoints: latency-svc-jlkc6 [1.625830376s]
Feb  7 14:09:13.732: INFO: Created: latency-svc-2ms6k
Feb  7 14:09:13.794: INFO: Got endpoints: latency-svc-2ms6k [1.684514781s]
Feb  7 14:09:13.803: INFO: Created: latency-svc-vp882
Feb  7 14:09:13.922: INFO: Got endpoints: latency-svc-vp882 [1.75262087s]
Feb  7 14:09:13.997: INFO: Created: latency-svc-b98hr
Feb  7 14:09:13.998: INFO: Got endpoints: latency-svc-b98hr [1.721860308s]
Feb  7 14:09:14.161: INFO: Created: latency-svc-75g7x
Feb  7 14:09:14.186: INFO: Got endpoints: latency-svc-75g7x [1.851669497s]
Feb  7 14:09:14.370: INFO: Created: latency-svc-swlmm
Feb  7 14:09:14.382: INFO: Got endpoints: latency-svc-swlmm [1.899139844s]
Feb  7 14:09:14.541: INFO: Created: latency-svc-9swlt
Feb  7 14:09:14.564: INFO: Got endpoints: latency-svc-9swlt [2.061200095s]
Feb  7 14:09:14.626: INFO: Created: latency-svc-6hjj9
Feb  7 14:09:14.628: INFO: Got endpoints: latency-svc-6hjj9 [1.922032893s]
Feb  7 14:09:14.763: INFO: Created: latency-svc-dlx24
Feb  7 14:09:14.780: INFO: Got endpoints: latency-svc-dlx24 [1.94126508s]
Feb  7 14:09:14.842: INFO: Created: latency-svc-zww4f
Feb  7 14:09:14.968: INFO: Got endpoints: latency-svc-zww4f [1.962988782s]
Feb  7 14:09:15.008: INFO: Created: latency-svc-mpgxf
Feb  7 14:09:15.019: INFO: Got endpoints: latency-svc-mpgxf [1.961195206s]
Feb  7 14:09:15.073: INFO: Created: latency-svc-dckbd
Feb  7 14:09:15.146: INFO: Got endpoints: latency-svc-dckbd [1.959514134s]
Feb  7 14:09:15.198: INFO: Created: latency-svc-xw42g
Feb  7 14:09:15.202: INFO: Got endpoints: latency-svc-xw42g [1.858154994s]
Feb  7 14:09:15.244: INFO: Created: latency-svc-nxmbx
Feb  7 14:09:15.330: INFO: Got endpoints: latency-svc-nxmbx [1.906103052s]
Feb  7 14:09:15.360: INFO: Created: latency-svc-8s2k7
Feb  7 14:09:15.377: INFO: Got endpoints: latency-svc-8s2k7 [1.842188736s]
Feb  7 14:09:15.407: INFO: Created: latency-svc-fqbvc
Feb  7 14:09:15.417: INFO: Got endpoints: latency-svc-fqbvc [1.813731144s]
Feb  7 14:09:15.501: INFO: Created: latency-svc-k5dp6
Feb  7 14:09:15.512: INFO: Got endpoints: latency-svc-k5dp6 [1.717648219s]
Feb  7 14:09:15.559: INFO: Created: latency-svc-lqrdx
Feb  7 14:09:15.561: INFO: Got endpoints: latency-svc-lqrdx [1.639245965s]
Feb  7 14:09:15.674: INFO: Created: latency-svc-jk8jn
Feb  7 14:09:15.692: INFO: Got endpoints: latency-svc-jk8jn [1.693777399s]
Feb  7 14:09:15.759: INFO: Created: latency-svc-6lptc
Feb  7 14:09:15.846: INFO: Got endpoints: latency-svc-6lptc [1.659992373s]
Feb  7 14:09:15.899: INFO: Created: latency-svc-dczf2
Feb  7 14:09:15.931: INFO: Got endpoints: latency-svc-dczf2 [1.549163568s]
Feb  7 14:09:16.120: INFO: Created: latency-svc-ctw7j
Feb  7 14:09:16.132: INFO: Got endpoints: latency-svc-ctw7j [1.567709488s]
Feb  7 14:09:16.198: INFO: Created: latency-svc-9pwpt
Feb  7 14:09:16.205: INFO: Got endpoints: latency-svc-9pwpt [1.577426909s]
Feb  7 14:09:16.371: INFO: Created: latency-svc-678lx
Feb  7 14:09:16.386: INFO: Got endpoints: latency-svc-678lx [1.605646968s]
Feb  7 14:09:16.578: INFO: Created: latency-svc-7sv9m
Feb  7 14:09:16.751: INFO: Got endpoints: latency-svc-7sv9m [1.782822801s]
Feb  7 14:09:16.752: INFO: Created: latency-svc-5tdp4
Feb  7 14:09:16.783: INFO: Got endpoints: latency-svc-5tdp4 [1.763878268s]
Feb  7 14:09:16.956: INFO: Created: latency-svc-v55r4
Feb  7 14:09:16.971: INFO: Got endpoints: latency-svc-v55r4 [1.824078985s]
Feb  7 14:09:17.063: INFO: Created: latency-svc-wpqh7
Feb  7 14:09:17.161: INFO: Got endpoints: latency-svc-wpqh7 [1.958710502s]
Feb  7 14:09:17.193: INFO: Created: latency-svc-4hs6c
Feb  7 14:09:17.210: INFO: Got endpoints: latency-svc-4hs6c [1.879885253s]
Feb  7 14:09:17.247: INFO: Created: latency-svc-pkdc7
Feb  7 14:09:17.254: INFO: Got endpoints: latency-svc-pkdc7 [1.877012153s]
Feb  7 14:09:17.358: INFO: Created: latency-svc-lp649
Feb  7 14:09:17.374: INFO: Got endpoints: latency-svc-lp649 [1.956910563s]
Feb  7 14:09:17.428: INFO: Created: latency-svc-m9rpd
Feb  7 14:09:17.491: INFO: Got endpoints: latency-svc-m9rpd [1.979490129s]
Feb  7 14:09:17.526: INFO: Created: latency-svc-pts27
Feb  7 14:09:17.533: INFO: Got endpoints: latency-svc-pts27 [1.971666368s]
Feb  7 14:09:17.574: INFO: Created: latency-svc-wcknx
Feb  7 14:09:17.585: INFO: Got endpoints: latency-svc-wcknx [1.892656924s]
Feb  7 14:09:17.720: INFO: Created: latency-svc-jvkwn
Feb  7 14:09:17.735: INFO: Got endpoints: latency-svc-jvkwn [1.889015744s]
Feb  7 14:09:17.773: INFO: Created: latency-svc-sq4z6
Feb  7 14:09:17.877: INFO: Got endpoints: latency-svc-sq4z6 [1.945086769s]
Feb  7 14:09:17.971: INFO: Created: latency-svc-bh42w
Feb  7 14:09:18.066: INFO: Got endpoints: latency-svc-bh42w [1.933653728s]
Feb  7 14:09:18.124: INFO: Created: latency-svc-msmz6
Feb  7 14:09:18.135: INFO: Got endpoints: latency-svc-msmz6 [1.929681858s]
Feb  7 14:09:18.246: INFO: Created: latency-svc-lgxsn
Feb  7 14:09:18.256: INFO: Got endpoints: latency-svc-lgxsn [1.869672859s]
Feb  7 14:09:18.331: INFO: Created: latency-svc-7c4mp
Feb  7 14:09:18.331: INFO: Got endpoints: latency-svc-7c4mp [1.580390352s]
Feb  7 14:09:18.413: INFO: Created: latency-svc-tbhlq
Feb  7 14:09:18.421: INFO: Got endpoints: latency-svc-tbhlq [1.638296469s]
Feb  7 14:09:18.474: INFO: Created: latency-svc-26f9d
Feb  7 14:09:18.476: INFO: Got endpoints: latency-svc-26f9d [1.505316533s]
Feb  7 14:09:18.573: INFO: Created: latency-svc-vqzcp
Feb  7 14:09:18.790: INFO: Got endpoints: latency-svc-vqzcp [1.629389521s]
Feb  7 14:09:18.797: INFO: Created: latency-svc-kxlkp
Feb  7 14:09:18.807: INFO: Got endpoints: latency-svc-kxlkp [1.596256048s]
Feb  7 14:09:18.857: INFO: Created: latency-svc-nxtzf
Feb  7 14:09:19.048: INFO: Got endpoints: latency-svc-nxtzf [1.794076345s]
Feb  7 14:09:19.052: INFO: Created: latency-svc-xjhcg
Feb  7 14:09:19.052: INFO: Got endpoints: latency-svc-xjhcg [1.677545368s]
Feb  7 14:09:19.121: INFO: Created: latency-svc-zgk7w
Feb  7 14:09:19.122: INFO: Got endpoints: latency-svc-zgk7w [1.630626973s]
Feb  7 14:09:19.249: INFO: Created: latency-svc-626vq
Feb  7 14:09:19.286: INFO: Got endpoints: latency-svc-626vq [1.752369181s]
Feb  7 14:09:19.289: INFO: Created: latency-svc-c9kl5
Feb  7 14:09:19.301: INFO: Got endpoints: latency-svc-c9kl5 [1.715234798s]
Feb  7 14:09:19.414: INFO: Created: latency-svc-j8jc4
Feb  7 14:09:19.422: INFO: Got endpoints: latency-svc-j8jc4 [1.686000077s]
Feb  7 14:09:19.452: INFO: Created: latency-svc-lqw8q
Feb  7 14:09:19.454: INFO: Got endpoints: latency-svc-lqw8q [1.577454912s]
Feb  7 14:09:19.581: INFO: Created: latency-svc-m279z
Feb  7 14:09:19.616: INFO: Got endpoints: latency-svc-m279z [1.549410999s]
Feb  7 14:09:19.617: INFO: Created: latency-svc-slz2w
Feb  7 14:09:19.627: INFO: Got endpoints: latency-svc-slz2w [1.492024407s]
Feb  7 14:09:19.733: INFO: Created: latency-svc-vvk8l
Feb  7 14:09:19.808: INFO: Got endpoints: latency-svc-vvk8l [1.552735699s]
Feb  7 14:09:19.816: INFO: Created: latency-svc-ln82v
Feb  7 14:09:19.901: INFO: Got endpoints: latency-svc-ln82v [1.570001867s]
Feb  7 14:09:19.983: INFO: Created: latency-svc-rzqtd
Feb  7 14:09:19.985: INFO: Got endpoints: latency-svc-rzqtd [1.563345842s]
Feb  7 14:09:20.165: INFO: Created: latency-svc-s9z5n
Feb  7 14:09:20.175: INFO: Got endpoints: latency-svc-s9z5n [1.698566033s]
Feb  7 14:09:20.217: INFO: Created: latency-svc-lxwl2
Feb  7 14:09:20.230: INFO: Got endpoints: latency-svc-lxwl2 [1.439282663s]
Feb  7 14:09:20.358: INFO: Created: latency-svc-zzvp4
Feb  7 14:09:20.384: INFO: Got endpoints: latency-svc-zzvp4 [1.57665298s]
Feb  7 14:09:20.433: INFO: Created: latency-svc-dvxxt
Feb  7 14:09:20.436: INFO: Got endpoints: latency-svc-dvxxt [1.387796123s]
Feb  7 14:09:20.585: INFO: Created: latency-svc-pw6kd
Feb  7 14:09:20.634: INFO: Got endpoints: latency-svc-pw6kd [1.582180474s]
Feb  7 14:09:20.641: INFO: Created: latency-svc-k6c69
Feb  7 14:09:20.884: INFO: Got endpoints: latency-svc-k6c69 [1.76175408s]
Feb  7 14:09:20.898: INFO: Created: latency-svc-vm9f2
Feb  7 14:09:20.926: INFO: Got endpoints: latency-svc-vm9f2 [1.640082387s]
Feb  7 14:09:21.155: INFO: Created: latency-svc-mtp8q
Feb  7 14:09:21.160: INFO: Got endpoints: latency-svc-mtp8q [1.858767627s]
Feb  7 14:09:21.215: INFO: Created: latency-svc-rgvst
Feb  7 14:09:21.221: INFO: Got endpoints: latency-svc-rgvst [1.799153028s]
Feb  7 14:09:21.334: INFO: Created: latency-svc-w9htj
Feb  7 14:09:21.344: INFO: Got endpoints: latency-svc-w9htj [1.889098788s]
Feb  7 14:09:21.400: INFO: Created: latency-svc-gzqmr
Feb  7 14:09:21.409: INFO: Got endpoints: latency-svc-gzqmr [1.792674138s]
Feb  7 14:09:21.563: INFO: Created: latency-svc-q8xj6
Feb  7 14:09:21.582: INFO: Got endpoints: latency-svc-q8xj6 [1.955185676s]
Feb  7 14:09:21.592: INFO: Created: latency-svc-pr4cx
Feb  7 14:09:21.726: INFO: Created: latency-svc-zptq6
Feb  7 14:09:21.746: INFO: Got endpoints: latency-svc-pr4cx [1.93747355s]
Feb  7 14:09:21.748: INFO: Got endpoints: latency-svc-zptq6 [1.846082718s]
Feb  7 14:09:21.799: INFO: Created: latency-svc-bqvs7
Feb  7 14:09:21.814: INFO: Got endpoints: latency-svc-bqvs7 [1.82961833s]
Feb  7 14:09:21.952: INFO: Created: latency-svc-lqxsd
Feb  7 14:09:22.001: INFO: Got endpoints: latency-svc-lqxsd [1.826426169s]
Feb  7 14:09:22.007: INFO: Created: latency-svc-nf82q
Feb  7 14:09:22.023: INFO: Got endpoints: latency-svc-nf82q [1.793536279s]
Feb  7 14:09:22.157: INFO: Created: latency-svc-q7prk
Feb  7 14:09:22.167: INFO: Got endpoints: latency-svc-q7prk [1.783218018s]
Feb  7 14:09:22.249: INFO: Created: latency-svc-zzzdl
Feb  7 14:09:22.338: INFO: Got endpoints: latency-svc-zzzdl [1.901577906s]
Feb  7 14:09:22.398: INFO: Created: latency-svc-t6wzx
Feb  7 14:09:22.424: INFO: Got endpoints: latency-svc-t6wzx [1.790141729s]
Feb  7 14:09:22.561: INFO: Created: latency-svc-msc8s
Feb  7 14:09:22.581: INFO: Got endpoints: latency-svc-msc8s [1.696797709s]
Feb  7 14:09:22.629: INFO: Created: latency-svc-wlx8d
Feb  7 14:09:22.725: INFO: Got endpoints: latency-svc-wlx8d [1.79942237s]
Feb  7 14:09:22.805: INFO: Created: latency-svc-qvdr5
Feb  7 14:09:22.812: INFO: Got endpoints: latency-svc-qvdr5 [1.652667989s]
Feb  7 14:09:22.834: INFO: Created: latency-svc-r7wv7
Feb  7 14:09:22.931: INFO: Got endpoints: latency-svc-r7wv7 [1.70955784s]
Feb  7 14:09:22.973: INFO: Created: latency-svc-jxf55
Feb  7 14:09:23.139: INFO: Got endpoints: latency-svc-jxf55 [1.795371444s]
Feb  7 14:09:23.179: INFO: Created: latency-svc-kr59x
Feb  7 14:09:23.197: INFO: Got endpoints: latency-svc-kr59x [1.788274898s]
Feb  7 14:09:23.360: INFO: Created: latency-svc-fmw7p
Feb  7 14:09:23.372: INFO: Got endpoints: latency-svc-fmw7p [1.78942524s]
Feb  7 14:09:23.423: INFO: Created: latency-svc-kcrzn
Feb  7 14:09:23.423: INFO: Got endpoints: latency-svc-kcrzn [1.675403011s]
Feb  7 14:09:23.564: INFO: Created: latency-svc-57vf8
Feb  7 14:09:23.573: INFO: Got endpoints: latency-svc-57vf8 [1.826641255s]
Feb  7 14:09:23.629: INFO: Created: latency-svc-9b5dd
Feb  7 14:09:23.631: INFO: Got endpoints: latency-svc-9b5dd [1.81632098s]
Feb  7 14:09:23.814: INFO: Created: latency-svc-d4np6
Feb  7 14:09:23.818: INFO: Got endpoints: latency-svc-d4np6 [1.816048575s]
Feb  7 14:09:24.000: INFO: Created: latency-svc-js9qz
Feb  7 14:09:24.018: INFO: Got endpoints: latency-svc-js9qz [1.994541267s]
Feb  7 14:09:24.076: INFO: Created: latency-svc-hv6gw
Feb  7 14:09:24.092: INFO: Got endpoints: latency-svc-hv6gw [1.924813795s]
Feb  7 14:09:24.246: INFO: Created: latency-svc-4ltrb
Feb  7 14:09:24.293: INFO: Got endpoints: latency-svc-4ltrb [1.955112517s]
Feb  7 14:09:24.295: INFO: Created: latency-svc-jwng5
Feb  7 14:09:24.306: INFO: Got endpoints: latency-svc-jwng5 [1.882018957s]
Feb  7 14:09:24.430: INFO: Created: latency-svc-bffd4
Feb  7 14:09:24.430: INFO: Got endpoints: latency-svc-bffd4 [1.84896127s]
Feb  7 14:09:24.476: INFO: Created: latency-svc-2cgtz
Feb  7 14:09:24.496: INFO: Got endpoints: latency-svc-2cgtz [1.77032671s]
Feb  7 14:09:24.618: INFO: Created: latency-svc-sclq8
Feb  7 14:09:24.635: INFO: Got endpoints: latency-svc-sclq8 [1.822463213s]
Feb  7 14:09:24.680: INFO: Created: latency-svc-pbpn8
Feb  7 14:09:24.692: INFO: Got endpoints: latency-svc-pbpn8 [1.761288776s]
Feb  7 14:09:24.797: INFO: Created: latency-svc-5kmp8
Feb  7 14:09:24.802: INFO: Got endpoints: latency-svc-5kmp8 [1.662299485s]
Feb  7 14:09:25.005: INFO: Created: latency-svc-khz6l
Feb  7 14:09:25.005: INFO: Got endpoints: latency-svc-khz6l [1.807990134s]
Feb  7 14:09:25.063: INFO: Created: latency-svc-9gk5b
Feb  7 14:09:25.069: INFO: Got endpoints: latency-svc-9gk5b [1.696362627s]
Feb  7 14:09:25.193: INFO: Created: latency-svc-p5gqh
Feb  7 14:09:25.200: INFO: Got endpoints: latency-svc-p5gqh [1.776144296s]
Feb  7 14:09:25.266: INFO: Created: latency-svc-6v9jv
Feb  7 14:09:25.407: INFO: Created: latency-svc-hlwhq
Feb  7 14:09:25.407: INFO: Got endpoints: latency-svc-6v9jv [1.834038857s]
Feb  7 14:09:25.418: INFO: Got endpoints: latency-svc-hlwhq [1.786811445s]
Feb  7 14:09:25.455: INFO: Created: latency-svc-x567f
Feb  7 14:09:25.466: INFO: Got endpoints: latency-svc-x567f [1.648232147s]
Feb  7 14:09:25.603: INFO: Created: latency-svc-7jbxj
Feb  7 14:09:25.634: INFO: Got endpoints: latency-svc-7jbxj [1.615428578s]
Feb  7 14:09:25.687: INFO: Created: latency-svc-29mcx
Feb  7 14:09:25.697: INFO: Got endpoints: latency-svc-29mcx [1.605290999s]
Feb  7 14:09:25.797: INFO: Created: latency-svc-pjk5d
Feb  7 14:09:25.822: INFO: Got endpoints: latency-svc-pjk5d [1.528359152s]
Feb  7 14:09:25.872: INFO: Created: latency-svc-clx2c
Feb  7 14:09:25.887: INFO: Got endpoints: latency-svc-clx2c [1.580386651s]
Feb  7 14:09:26.159: INFO: Created: latency-svc-fk2kn
Feb  7 14:09:26.279: INFO: Got endpoints: latency-svc-fk2kn [1.848322142s]
Feb  7 14:09:26.294: INFO: Created: latency-svc-jw7j9
Feb  7 14:09:26.334: INFO: Got endpoints: latency-svc-jw7j9 [1.837594895s]
Feb  7 14:09:26.489: INFO: Created: latency-svc-jjm2p
Feb  7 14:09:26.587: INFO: Got endpoints: latency-svc-jjm2p [1.951810216s]
Feb  7 14:09:26.588: INFO: Created: latency-svc-5v44t
Feb  7 14:09:26.708: INFO: Got endpoints: latency-svc-5v44t [2.014821988s]
Feb  7 14:09:26.759: INFO: Created: latency-svc-w8bsw
Feb  7 14:09:26.873: INFO: Got endpoints: latency-svc-w8bsw [2.071782229s]
Feb  7 14:09:26.955: INFO: Created: latency-svc-szx2c
Feb  7 14:09:26.956: INFO: Got endpoints: latency-svc-szx2c [1.950858057s]
Feb  7 14:09:27.190: INFO: Created: latency-svc-xxwk9
Feb  7 14:09:27.272: INFO: Got endpoints: latency-svc-xxwk9 [2.203595225s]
Feb  7 14:09:27.276: INFO: Created: latency-svc-tzmr5
Feb  7 14:09:27.284: INFO: Got endpoints: latency-svc-tzmr5 [2.084449018s]
Feb  7 14:09:27.474: INFO: Created: latency-svc-m2q9n
Feb  7 14:09:27.490: INFO: Got endpoints: latency-svc-m2q9n [2.082883596s]
Feb  7 14:09:27.528: INFO: Created: latency-svc-9rgh7
Feb  7 14:09:27.642: INFO: Created: latency-svc-ms2sb
Feb  7 14:09:27.645: INFO: Got endpoints: latency-svc-9rgh7 [2.227619885s]
Feb  7 14:09:27.711: INFO: Created: latency-svc-f85qx
Feb  7 14:09:27.714: INFO: Got endpoints: latency-svc-ms2sb [2.247435799s]
Feb  7 14:09:27.789: INFO: Got endpoints: latency-svc-f85qx [2.155167434s]
Feb  7 14:09:27.817: INFO: Created: latency-svc-746xw
Feb  7 14:09:27.834: INFO: Got endpoints: latency-svc-746xw [2.136758461s]
Feb  7 14:09:27.873: INFO: Created: latency-svc-8xnhv
Feb  7 14:09:27.955: INFO: Got endpoints: latency-svc-8xnhv [2.133485638s]
Feb  7 14:09:27.990: INFO: Created: latency-svc-22ml5
Feb  7 14:09:28.003: INFO: Got endpoints: latency-svc-22ml5 [2.116123476s]
Feb  7 14:09:28.056: INFO: Created: latency-svc-bkjgs
Feb  7 14:09:28.170: INFO: Got endpoints: latency-svc-bkjgs [1.891621527s]
Feb  7 14:09:28.181: INFO: Created: latency-svc-5q8cq
Feb  7 14:09:28.186: INFO: Got endpoints: latency-svc-5q8cq [1.85172917s]
Feb  7 14:09:28.233: INFO: Created: latency-svc-nxnsl
Feb  7 14:09:28.242: INFO: Got endpoints: latency-svc-nxnsl [1.65488927s]
Feb  7 14:09:28.383: INFO: Created: latency-svc-rqdwv
Feb  7 14:09:28.389: INFO: Got endpoints: latency-svc-rqdwv [1.680573019s]
Feb  7 14:09:28.422: INFO: Created: latency-svc-4bcv5
Feb  7 14:09:28.433: INFO: Got endpoints: latency-svc-4bcv5 [1.558895168s]
Feb  7 14:09:28.566: INFO: Created: latency-svc-v6cfv
Feb  7 14:09:28.576: INFO: Got endpoints: latency-svc-v6cfv [1.619427439s]
Feb  7 14:09:28.604: INFO: Created: latency-svc-zbjdk
Feb  7 14:09:28.616: INFO: Got endpoints: latency-svc-zbjdk [1.343079672s]
Feb  7 14:09:28.725: INFO: Created: latency-svc-c7fsk
Feb  7 14:09:28.754: INFO: Got endpoints: latency-svc-c7fsk [1.469740842s]
Feb  7 14:09:28.762: INFO: Created: latency-svc-h5l4z
Feb  7 14:09:28.778: INFO: Got endpoints: latency-svc-h5l4z [1.287550236s]
Feb  7 14:09:28.922: INFO: Created: latency-svc-j6nzx
Feb  7 14:09:28.928: INFO: Got endpoints: latency-svc-j6nzx [1.282412709s]
Feb  7 14:09:28.972: INFO: Created: latency-svc-m2nm4
Feb  7 14:09:28.995: INFO: Got endpoints: latency-svc-m2nm4 [1.280936318s]
Feb  7 14:09:29.137: INFO: Created: latency-svc-vf27b
Feb  7 14:09:29.147: INFO: Got endpoints: latency-svc-vf27b [1.357705885s]
Feb  7 14:09:29.200: INFO: Created: latency-svc-8rxbn
Feb  7 14:09:29.361: INFO: Created: latency-svc-5rzfc
Feb  7 14:09:29.362: INFO: Got endpoints: latency-svc-8rxbn [1.527728892s]
Feb  7 14:09:29.380: INFO: Got endpoints: latency-svc-5rzfc [1.424284086s]
Feb  7 14:09:29.424: INFO: Created: latency-svc-m8qbg
Feb  7 14:09:29.519: INFO: Got endpoints: latency-svc-m8qbg [1.515622144s]
Feb  7 14:09:29.544: INFO: Created: latency-svc-qxc8m
Feb  7 14:09:29.549: INFO: Got endpoints: latency-svc-qxc8m [1.378737373s]
Feb  7 14:09:29.593: INFO: Created: latency-svc-fc84q
Feb  7 14:09:29.604: INFO: Got endpoints: latency-svc-fc84q [1.418426269s]
Feb  7 14:09:29.704: INFO: Created: latency-svc-lbkrr
Feb  7 14:09:29.713: INFO: Got endpoints: latency-svc-lbkrr [1.470921745s]
Feb  7 14:09:29.757: INFO: Created: latency-svc-vxrfk
Feb  7 14:09:29.757: INFO: Got endpoints: latency-svc-vxrfk [1.368393308s]
Feb  7 14:09:29.871: INFO: Created: latency-svc-hcxjk
Feb  7 14:09:29.889: INFO: Got endpoints: latency-svc-hcxjk [1.45536s]
Feb  7 14:09:29.932: INFO: Created: latency-svc-rb658
Feb  7 14:09:29.940: INFO: Got endpoints: latency-svc-rb658 [1.363700448s]
Feb  7 14:09:30.043: INFO: Created: latency-svc-jfcdk
Feb  7 14:09:30.076: INFO: Got endpoints: latency-svc-jfcdk [1.460847915s]
Feb  7 14:09:30.084: INFO: Created: latency-svc-7wbv2
Feb  7 14:09:30.089: INFO: Got endpoints: latency-svc-7wbv2 [1.334190802s]
Feb  7 14:09:30.193: INFO: Created: latency-svc-dt8mg
Feb  7 14:09:30.206: INFO: Got endpoints: latency-svc-dt8mg [1.428254966s]
Feb  7 14:09:30.240: INFO: Created: latency-svc-9rfmn
Feb  7 14:09:30.257: INFO: Got endpoints: latency-svc-9rfmn [1.328844134s]
Feb  7 14:09:30.365: INFO: Created: latency-svc-l8khb
Feb  7 14:09:30.388: INFO: Got endpoints: latency-svc-l8khb [1.393027022s]
Feb  7 14:09:30.390: INFO: Created: latency-svc-dvck4
Feb  7 14:09:30.392: INFO: Got endpoints: latency-svc-dvck4 [1.244273709s]
Feb  7 14:09:30.435: INFO: Created: latency-svc-qr46v
Feb  7 14:09:30.552: INFO: Created: latency-svc-ztxqr
Feb  7 14:09:30.552: INFO: Got endpoints: latency-svc-qr46v [1.19059915s]
Feb  7 14:09:30.564: INFO: Got endpoints: latency-svc-ztxqr [1.183648169s]
Feb  7 14:09:30.603: INFO: Created: latency-svc-75npg
Feb  7 14:09:30.606: INFO: Got endpoints: latency-svc-75npg [1.086407948s]
Feb  7 14:09:30.642: INFO: Created: latency-svc-qzlgn
Feb  7 14:09:30.721: INFO: Got endpoints: latency-svc-qzlgn [1.171260357s]
Feb  7 14:09:30.781: INFO: Created: latency-svc-rcn79
Feb  7 14:09:30.787: INFO: Got endpoints: latency-svc-rcn79 [1.182345987s]
Feb  7 14:09:30.819: INFO: Created: latency-svc-c8hvv
Feb  7 14:09:30.935: INFO: Got endpoints: latency-svc-c8hvv [1.221500189s]
Feb  7 14:09:30.990: INFO: Created: latency-svc-n7pjh
Feb  7 14:09:30.998: INFO: Got endpoints: latency-svc-n7pjh [1.240274219s]
Feb  7 14:09:31.039: INFO: Created: latency-svc-4gdtm
Feb  7 14:09:31.181: INFO: Got endpoints: latency-svc-4gdtm [1.292113249s]
Feb  7 14:09:31.215: INFO: Created: latency-svc-bmv7h
Feb  7 14:09:31.227: INFO: Got endpoints: latency-svc-bmv7h [1.286115173s]
Feb  7 14:09:31.260: INFO: Created: latency-svc-slkwd
Feb  7 14:09:31.263: INFO: Got endpoints: latency-svc-slkwd [1.186256977s]
Feb  7 14:09:31.377: INFO: Created: latency-svc-pgvtf
Feb  7 14:09:31.383: INFO: Got endpoints: latency-svc-pgvtf [1.294277031s]
Feb  7 14:09:31.439: INFO: Created: latency-svc-2rh6k
Feb  7 14:09:31.473: INFO: Got endpoints: latency-svc-2rh6k [1.266950574s]
Feb  7 14:09:31.588: INFO: Created: latency-svc-cs67n
Feb  7 14:09:31.588: INFO: Got endpoints: latency-svc-cs67n [1.331306312s]
Feb  7 14:09:31.637: INFO: Created: latency-svc-jt8wd
Feb  7 14:09:31.745: INFO: Got endpoints: latency-svc-jt8wd [1.357177553s]
Feb  7 14:09:31.751: INFO: Created: latency-svc-zw2hl
Feb  7 14:09:31.756: INFO: Got endpoints: latency-svc-zw2hl [1.363881634s]
Feb  7 14:09:31.801: INFO: Created: latency-svc-9gjrg
Feb  7 14:09:31.820: INFO: Got endpoints: latency-svc-9gjrg [1.266949267s]
Feb  7 14:09:32.494: INFO: Created: latency-svc-tv2m9
Feb  7 14:09:32.507: INFO: Got endpoints: latency-svc-tv2m9 [1.942591266s]
Feb  7 14:09:32.679: INFO: Created: latency-svc-l8fd9
Feb  7 14:09:32.703: INFO: Got endpoints: latency-svc-l8fd9 [2.096963052s]
Feb  7 14:09:32.827: INFO: Created: latency-svc-fhnkc
Feb  7 14:09:32.831: INFO: Got endpoints: latency-svc-fhnkc [2.1096081s]
Feb  7 14:09:32.885: INFO: Created: latency-svc-mnrqf
Feb  7 14:09:32.890: INFO: Got endpoints: latency-svc-mnrqf [2.103249772s]
Feb  7 14:09:32.992: INFO: Created: latency-svc-jpsn2
Feb  7 14:09:33.003: INFO: Got endpoints: latency-svc-jpsn2 [2.06809217s]
Feb  7 14:09:33.054: INFO: Created: latency-svc-rcqbz
Feb  7 14:09:33.063: INFO: Got endpoints: latency-svc-rcqbz [2.064719808s]
Feb  7 14:09:33.169: INFO: Created: latency-svc-dx9zt
Feb  7 14:09:33.179: INFO: Got endpoints: latency-svc-dx9zt [1.997690407s]
Feb  7 14:09:33.179: INFO: Latencies: [141.195032ms 179.444344ms 210.329578ms 368.475604ms 423.12191ms 598.039212ms 779.061133ms 1.004385582s 1.086407948s 1.164548232s 1.171260357s 1.182345987s 1.183648169s 1.186256977s 1.19059915s 1.202822153s 1.221500189s 1.240274219s 1.244273709s 1.266949267s 1.266950574s 1.280936318s 1.282412709s 1.286115173s 1.287550236s 1.292113249s 1.294277031s 1.328844134s 1.331306312s 1.334190802s 1.343079672s 1.357177553s 1.357705885s 1.363700448s 1.363881634s 1.368393308s 1.378737373s 1.387796123s 1.393027022s 1.418426269s 1.423564539s 1.424284086s 1.428254966s 1.439282663s 1.45536s 1.460847915s 1.469740842s 1.470921745s 1.492024407s 1.492780164s 1.505316533s 1.515622144s 1.523974637s 1.527728892s 1.528359152s 1.540819936s 1.549163568s 1.549410999s 1.552735699s 1.558895168s 1.563345842s 1.567709488s 1.570001867s 1.573198986s 1.574758523s 1.57665298s 1.577426909s 1.577454912s 1.580386651s 1.580390352s 1.582180474s 1.596256048s 1.605290999s 1.605646968s 1.611880776s 1.612961511s 1.615428578s 1.619427439s 1.625830376s 1.629389521s 1.630626973s 1.638296469s 1.639245965s 1.640082387s 1.643434278s 1.648232147s 1.652667989s 1.65488927s 1.659992373s 1.662299485s 1.675403011s 1.677545368s 1.680573019s 1.684514781s 1.686000077s 1.693777399s 1.696362627s 1.696797709s 1.698566033s 1.70955784s 1.714472304s 1.715234798s 1.717648219s 1.721860308s 1.736285072s 1.741086524s 1.752369181s 1.75262087s 1.761288776s 1.76175408s 1.763878268s 1.769841413s 1.77032671s 1.775941613s 1.776144296s 1.782822801s 1.783218018s 1.786811445s 1.787004236s 1.788274898s 1.78942524s 1.790141729s 1.792674138s 1.793536279s 1.794076345s 1.795371444s 1.799153028s 1.79942237s 1.807990134s 1.813731144s 1.816048575s 1.81632098s 1.822463213s 1.824078985s 1.826426169s 1.826641255s 1.82961833s 1.834038857s 1.83745054s 1.837594895s 1.842188736s 1.846082718s 1.848322142s 1.84896127s 1.851669497s 1.85172917s 1.858154994s 1.858767627s 1.869386417s 1.869672859s 1.876719042s 1.877012153s 1.879885253s 1.882018957s 1.889015744s 1.889098788s 1.891621527s 1.892656924s 1.899139844s 1.901577906s 1.906103052s 1.919154887s 1.922032893s 1.924813795s 1.929681858s 1.933653728s 1.93747355s 1.94126508s 1.942591266s 1.945086769s 1.950858057s 1.951810216s 1.955112517s 1.955185676s 1.956910563s 1.958710502s 1.959514134s 1.961195206s 1.962988782s 1.971666368s 1.979490129s 1.994541267s 1.997690407s 2.014821988s 2.061200095s 2.064719808s 2.06809217s 2.071782229s 2.082883596s 2.084449018s 2.096963052s 2.103249772s 2.1096081s 2.116123476s 2.133485638s 2.136758461s 2.155167434s 2.203595225s 2.227619885s 2.247435799s]
Feb  7 14:09:33.179: INFO: 50 %ile: 1.714472304s
Feb  7 14:09:33.179: INFO: 90 %ile: 1.979490129s
Feb  7 14:09:33.179: INFO: 99 %ile: 2.227619885s
Feb  7 14:09:33.179: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:09:33.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-2983" for this suite.
Feb  7 14:10:17.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:10:17.364: INFO: namespace svc-latency-2983 deletion completed in 44.173796635s

• [SLOW TEST:74.519 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
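The 50/90/99 %ile figures above are order statistics over the 200 collected endpoint latencies. A minimal sketch of that computation; the nearest-rank indexing here is an assumption, the framework's exact rounding may differ:

    package main

    import (
        "fmt"
        "sort"
        "time"
    )

    // percentile returns the p-th percentile of sorted durations by a simple
    // nearest-rank rule; the framework's exact rounding may differ.
    func percentile(sorted []time.Duration, p int) time.Duration {
        if len(sorted) == 0 {
            return 0
        }
        idx := (len(sorted) * p) / 100
        if idx >= len(sorted) {
            idx = len(sorted) - 1
        }
        return sorted[idx]
    }

    func main() {
        // a few of the 200 samples from the log above, in nanoseconds
        latencies := []time.Duration{
            141195032, 1714472304, 1979490129, 2227619885, 2247435799,
        }
        sort.Slice(latencies, func(i, j int) bool { return latencies[i] < latencies[j] })
        for _, p := range []int{50, 90, 99} {
            fmt.Printf("%d %%ile: %v\n", p, percentile(latencies, p))
        }
    }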
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:10:17.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb  7 14:10:17.440: INFO: Waiting up to 5m0s for pod "pod-621bc86d-d647-4947-90a2-69d2328a2e14" in namespace "emptydir-3852" to be "success or failure"
Feb  7 14:10:17.445: INFO: Pod "pod-621bc86d-d647-4947-90a2-69d2328a2e14": Phase="Pending", Reason="", readiness=false. Elapsed: 4.892385ms
Feb  7 14:10:19.454: INFO: Pod "pod-621bc86d-d647-4947-90a2-69d2328a2e14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013751754s
Feb  7 14:10:21.469: INFO: Pod "pod-621bc86d-d647-4947-90a2-69d2328a2e14": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028792404s
Feb  7 14:10:23.495: INFO: Pod "pod-621bc86d-d647-4947-90a2-69d2328a2e14": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054221028s
Feb  7 14:10:25.502: INFO: Pod "pod-621bc86d-d647-4947-90a2-69d2328a2e14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061526109s
STEP: Saw pod success
Feb  7 14:10:25.502: INFO: Pod "pod-621bc86d-d647-4947-90a2-69d2328a2e14" satisfied condition "success or failure"
Feb  7 14:10:25.507: INFO: Trying to get logs from node iruya-node pod pod-621bc86d-d647-4947-90a2-69d2328a2e14 container test-container: 
STEP: delete the pod
Feb  7 14:10:25.607: INFO: Waiting for pod pod-621bc86d-d647-4947-90a2-69d2328a2e14 to disappear
Feb  7 14:10:25.611: INFO: Pod pod-621bc86d-d647-4947-90a2-69d2328a2e14 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:10:25.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3852" for this suite.
Feb  7 14:10:31.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:10:31.780: INFO: namespace emptydir-3852 deletion completed in 6.159741482s

• [SLOW TEST:14.415 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
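Each EmptyDir case in this suite creates a pod that writes a file with a given mode into an emptyDir mount and asserts on the resulting permissions in the container output. A sketch of the pod shape, assuming busybox as a stand-in for the test's mounttest image:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // emptyDirPod writes a file with an explicit mode into an emptyDir mount and
    // lists it, so a test can assert on the permissions in the pod's output.
    func emptyDirPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"}, // illustrative
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "test-container",
                    Image: "busybox:1.29", // stand-in for the test's mounttest image
                    Command: []string{"sh", "-c",
                        "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    // empty Medium = "default medium" (node disk); StorageMediumMemory would be tmpfs
                    VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
                }},
            },
        }
    }

    func main() {
        b, _ := json.MarshalIndent(emptyDirPod(), "", "  ")
        fmt.Println(string(b))
    }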
SSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:10:31.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-a365f87b-02c4-4f06-b91d-c15fa8645df6
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:10:31.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1663" for this suite.
Feb  7 14:10:37.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:10:38.081: INFO: namespace configmap-1663 deletion completed in 6.144413482s

• [SLOW TEST:6.302 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
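The empty-key ConfigMap is rejected by server-side validation, so the Create call errors and nothing is stored. A sketch of that check with a recent client-go (the context-taking Create signature; clusters as old as the v1.15 one in this log pair with an older, context-free client signature):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // same kubeconfig path the log's commands use
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(cfg)

        cm := &corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"}, // illustrative
            Data:       map[string]string{"": "value"},                     // empty key is invalid
        }
        _, err = clientset.CoreV1().ConfigMaps("default").Create(
            context.TODO(), cm, metav1.CreateOptions{})
        fmt.Println("expected validation error:", err)
    }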
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:10:38.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb  7 14:10:38.199: INFO: Waiting up to 5m0s for pod "pod-7215fbca-c242-4b89-8f2e-9da8a2cb07d3" in namespace "emptydir-1715" to be "success or failure"
Feb  7 14:10:38.205: INFO: Pod "pod-7215fbca-c242-4b89-8f2e-9da8a2cb07d3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.790308ms
Feb  7 14:10:40.216: INFO: Pod "pod-7215fbca-c242-4b89-8f2e-9da8a2cb07d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016807027s
Feb  7 14:10:42.225: INFO: Pod "pod-7215fbca-c242-4b89-8f2e-9da8a2cb07d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026046189s
Feb  7 14:10:44.233: INFO: Pod "pod-7215fbca-c242-4b89-8f2e-9da8a2cb07d3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033166256s
Feb  7 14:10:46.242: INFO: Pod "pod-7215fbca-c242-4b89-8f2e-9da8a2cb07d3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.042419351s
Feb  7 14:10:48.265: INFO: Pod "pod-7215fbca-c242-4b89-8f2e-9da8a2cb07d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065720711s
STEP: Saw pod success
Feb  7 14:10:48.265: INFO: Pod "pod-7215fbca-c242-4b89-8f2e-9da8a2cb07d3" satisfied condition "success or failure"
Feb  7 14:10:48.271: INFO: Trying to get logs from node iruya-node pod pod-7215fbca-c242-4b89-8f2e-9da8a2cb07d3 container test-container: 
STEP: delete the pod
Feb  7 14:10:48.352: INFO: Waiting for pod pod-7215fbca-c242-4b89-8f2e-9da8a2cb07d3 to disappear
Feb  7 14:10:48.358: INFO: Pod pod-7215fbca-c242-4b89-8f2e-9da8a2cb07d3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:10:48.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1715" for this suite.
Feb  7 14:10:54.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:10:54.593: INFO: namespace emptydir-1715 deletion completed in 6.196113143s

• [SLOW TEST:16.512 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
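The (non-root,0644,default) variant differs from the root case only in the container's security context: the file is written by a non-root UID. A sketch of that knob; UID 1001 is an arbitrary illustrative choice:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        uid := int64(1001) // arbitrary non-root UID, for illustration
        nonRoot := true
        sc := &corev1.SecurityContext{
            RunAsUser:    &uid,
            RunAsNonRoot: &nonRoot,
        }
        // attach sc to the test container for the (non-root,...) variants
        b, _ := json.MarshalIndent(sc, "", "  ")
        fmt.Println(string(b))
    }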
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:10:54.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb  7 14:10:54.752: INFO: Waiting up to 5m0s for pod "pod-c65e5ddd-b17d-476a-bf24-36aa8b6e438e" in namespace "emptydir-5413" to be "success or failure"
Feb  7 14:10:54.768: INFO: Pod "pod-c65e5ddd-b17d-476a-bf24-36aa8b6e438e": Phase="Pending", Reason="", readiness=false. Elapsed: 15.94968ms
Feb  7 14:10:56.781: INFO: Pod "pod-c65e5ddd-b17d-476a-bf24-36aa8b6e438e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028924849s
Feb  7 14:10:58.788: INFO: Pod "pod-c65e5ddd-b17d-476a-bf24-36aa8b6e438e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036340108s
Feb  7 14:11:00.800: INFO: Pod "pod-c65e5ddd-b17d-476a-bf24-36aa8b6e438e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048027839s
Feb  7 14:11:02.807: INFO: Pod "pod-c65e5ddd-b17d-476a-bf24-36aa8b6e438e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.054796853s
STEP: Saw pod success
Feb  7 14:11:02.807: INFO: Pod "pod-c65e5ddd-b17d-476a-bf24-36aa8b6e438e" satisfied condition "success or failure"
Feb  7 14:11:02.811: INFO: Trying to get logs from node iruya-node pod pod-c65e5ddd-b17d-476a-bf24-36aa8b6e438e container test-container: 
STEP: delete the pod
Feb  7 14:11:02.923: INFO: Waiting for pod pod-c65e5ddd-b17d-476a-bf24-36aa8b6e438e to disappear
Feb  7 14:11:02.933: INFO: Pod pod-c65e5ddd-b17d-476a-bf24-36aa8b6e438e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:11:02.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5413" for this suite.
Feb  7 14:11:08.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:11:09.379: INFO: namespace emptydir-5413 deletion completed in 6.436012389s

• [SLOW TEST:14.785 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:11:09.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  7 14:11:10.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-8980'
Feb  7 14:11:10.443: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  7 14:11:10.443: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb  7 14:11:10.484: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-cjp7k]
Feb  7 14:11:10.484: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-cjp7k" in namespace "kubectl-8980" to be "running and ready"
Feb  7 14:11:10.544: INFO: Pod "e2e-test-nginx-rc-cjp7k": Phase="Pending", Reason="", readiness=false. Elapsed: 59.868466ms
Feb  7 14:11:12.555: INFO: Pod "e2e-test-nginx-rc-cjp7k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070499679s
Feb  7 14:11:14.569: INFO: Pod "e2e-test-nginx-rc-cjp7k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084932207s
Feb  7 14:11:16.581: INFO: Pod "e2e-test-nginx-rc-cjp7k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097126585s
Feb  7 14:11:18.596: INFO: Pod "e2e-test-nginx-rc-cjp7k": Phase="Running", Reason="", readiness=true. Elapsed: 8.111619392s
Feb  7 14:11:18.596: INFO: Pod "e2e-test-nginx-rc-cjp7k" satisfied condition "running and ready"
Feb  7 14:11:18.596: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-cjp7k]
Feb  7 14:11:18.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-8980'
Feb  7 14:11:20.311: INFO: stderr: ""
Feb  7 14:11:20.311: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Feb  7 14:11:20.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-8980'
Feb  7 14:11:20.786: INFO: stderr: ""
Feb  7 14:11:20.786: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:11:20.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8980" for this suite.
Feb  7 14:11:42.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:11:42.999: INFO: namespace kubectl-8980 deletion completed in 22.196681186s

• [SLOW TEST:33.621 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
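
The deprecation warning above is the point of interest: --generator=run/v1 created a ReplicationController and has since been removed from kubectl. A rough modern equivalent of what this spec does (create a workload from an image, then read logs through the controller reference) might look like the sketch below; nginx-pod and nginx-deploy are hypothetical names.

# Deprecated form used by the test:
#   kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
# Current equivalents (illustrative names):
kubectl run nginx-pod --image=nginx:1.14-alpine --restart=Never   # a bare pod
kubectl create deployment nginx-deploy --image=nginx:1.14-alpine  # managed replicas
kubectl logs deployment/nginx-deploy   # logs via a controller ref, like 'logs rc/...' above
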
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:11:43.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-a28f782d-bd64-48b3-b18c-46e573ff299d
STEP: Creating a pod to test consume secrets
Feb  7 14:11:43.177: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f56e2cb4-704b-469a-9dd6-8d323d0520f1" in namespace "projected-6317" to be "success or failure"
Feb  7 14:11:43.197: INFO: Pod "pod-projected-secrets-f56e2cb4-704b-469a-9dd6-8d323d0520f1": Phase="Pending", Reason="", readiness=false. Elapsed: 20.200225ms
Feb  7 14:11:45.225: INFO: Pod "pod-projected-secrets-f56e2cb4-704b-469a-9dd6-8d323d0520f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047998758s
Feb  7 14:11:47.232: INFO: Pod "pod-projected-secrets-f56e2cb4-704b-469a-9dd6-8d323d0520f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055203379s
Feb  7 14:11:49.298: INFO: Pod "pod-projected-secrets-f56e2cb4-704b-469a-9dd6-8d323d0520f1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121041705s
Feb  7 14:11:51.303: INFO: Pod "pod-projected-secrets-f56e2cb4-704b-469a-9dd6-8d323d0520f1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.126231845s
Feb  7 14:11:53.319: INFO: Pod "pod-projected-secrets-f56e2cb4-704b-469a-9dd6-8d323d0520f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.142249117s
STEP: Saw pod success
Feb  7 14:11:53.319: INFO: Pod "pod-projected-secrets-f56e2cb4-704b-469a-9dd6-8d323d0520f1" satisfied condition "success or failure"
Feb  7 14:11:53.329: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-f56e2cb4-704b-469a-9dd6-8d323d0520f1 container projected-secret-volume-test: 
STEP: delete the pod
Feb  7 14:11:53.720: INFO: Waiting for pod pod-projected-secrets-f56e2cb4-704b-469a-9dd6-8d323d0520f1 to disappear
Feb  7 14:11:53.734: INFO: Pod pod-projected-secrets-f56e2cb4-704b-469a-9dd6-8d323d0520f1 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:11:53.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6317" for this suite.
Feb  7 14:11:59.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:11:59.924: INFO: namespace projected-6317 deletion completed in 6.17928153s

• [SLOW TEST:16.924 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
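
The projected-secret spec above mounts a secret through a projected volume, remapping a key to a new path with a per-item file mode. A minimal sketch, assuming the same kubeconfig (demo-secret, renamed-key, and the 0400 mode are illustrative):

kubectl create secret generic demo-secret --from-literal=username=admin
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/renamed-key"]
    volumeMounts:
    - name: proj
      mountPath: /etc/projected
  volumes:
  - name: proj
    projected:
      sources:
      - secret:
          name: demo-secret
          items:
          - key: username      # remap key -> path and set a per-item mode
            path: renamed-key
            mode: 0400
EOF
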
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:11:59.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-d132e0fd-2743-4287-be3b-391c6af31504
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:12:10.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9060" for this suite.
Feb  7 14:12:32.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:12:32.448: INFO: namespace configmap-9060 deletion completed in 22.140885316s

• [SLOW TEST:32.523 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
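
The configmap spec above verifies that non-UTF-8 payloads round-trip through the binaryData field into a mounted volume. A hand-run sketch, assuming a bash-like shell and GNU-style base64 (blob.bin and binary-demo are made-up names):

# ConfigMaps keep non-UTF-8 content under .binaryData (base64-encoded in the API).
printf '\x00\x01\x02' > blob.bin
kubectl create configmap binary-demo --from-file=blob.bin
kubectl get configmap binary-demo -o jsonpath='{.binaryData.blob\.bin}' | base64 --decode | od -An -tx1
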
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:12:32.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-2560, will wait for the garbage collector to delete the pods
Feb  7 14:12:42.695: INFO: Deleting Job.batch foo took: 12.490282ms
Feb  7 14:12:42.995: INFO: Terminating Job.batch foo pods took: 300.275285ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:13:26.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2560" for this suite.
Feb  7 14:13:32.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:13:32.776: INFO: namespace job-2560 deletion completed in 6.134051478s

• [SLOW TEST:60.327 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
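
The job spec above deletes the Job object first and then waits for the garbage collector to reap its pods through the ownerReference chain. A hand-run sketch of the same sequence (gc-demo is a made-up name; kubectl create job needs a reasonably recent kubectl):

kubectl create job gc-demo --image=busybox:1.29 -- sleep 300
kubectl delete job gc-demo               # the Job object goes first...
kubectl get pods -l job-name=gc-demo -w  # ...then the GC deletes the owned pods
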
SSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:13:32.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:14:05.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9826" for this suite.
Feb  7 14:14:11.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:14:11.305: INFO: namespace namespaces-9826 deletion completed in 6.114609772s
STEP: Destroying namespace "nsdeletetest-5115" for this suite.
Feb  7 14:14:11.308: INFO: Namespace nsdeletetest-5115 was already deleted
STEP: Destroying namespace "nsdeletetest-7200" for this suite.
Feb  7 14:14:17.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:14:17.458: INFO: namespace nsdeletetest-7200 deletion completed in 6.150387879s

• [SLOW TEST:44.682 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
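
The namespace spec above checks that deleting a namespace tears down the pods in it and that a recreated namespace of the same name starts empty. A sketch of the same flow (nsdelete-demo is illustrative; kubectl wait --for=delete needs a reasonably recent kubectl):

kubectl create namespace nsdelete-demo
kubectl run sleeper --image=busybox:1.29 --restart=Never -n nsdelete-demo -- sleep 3600
kubectl delete namespace nsdelete-demo
kubectl wait --for=delete namespace/nsdelete-demo --timeout=120s
kubectl create namespace nsdelete-demo   # recreate under the same name
kubectl get pods -n nsdelete-demo        # expect "No resources found."
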
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:14:17.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:14:17.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2499" for this suite.
Feb  7 14:14:23.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:14:23.718: INFO: namespace services-2499 deletion completed in 6.153086472s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.260 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
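
"Secure master service" here means the built-in kubernetes Service in the default namespace, which fronts the API server over HTTPS on port 443. To inspect it by hand:

kubectl get service kubernetes -n default -o wide   # ClusterIP service, port 443/TCP
kubectl get endpoints kubernetes -n default         # should list the API server address(es)
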
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:14:23.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  7 14:14:23.827: INFO: Waiting up to 5m0s for pod "downwardapi-volume-957a369f-cb4d-4678-935a-c7ba9a4f7dcb" in namespace "projected-8859" to be "success or failure"
Feb  7 14:14:23.846: INFO: Pod "downwardapi-volume-957a369f-cb4d-4678-935a-c7ba9a4f7dcb": Phase="Pending", Reason="", readiness=false. Elapsed: 19.207052ms
Feb  7 14:14:25.861: INFO: Pod "downwardapi-volume-957a369f-cb4d-4678-935a-c7ba9a4f7dcb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033703202s
Feb  7 14:14:27.881: INFO: Pod "downwardapi-volume-957a369f-cb4d-4678-935a-c7ba9a4f7dcb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05381833s
Feb  7 14:14:29.896: INFO: Pod "downwardapi-volume-957a369f-cb4d-4678-935a-c7ba9a4f7dcb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068690511s
Feb  7 14:14:31.904: INFO: Pod "downwardapi-volume-957a369f-cb4d-4678-935a-c7ba9a4f7dcb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.076885187s
STEP: Saw pod success
Feb  7 14:14:31.904: INFO: Pod "downwardapi-volume-957a369f-cb4d-4678-935a-c7ba9a4f7dcb" satisfied condition "success or failure"
Feb  7 14:14:31.908: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-957a369f-cb4d-4678-935a-c7ba9a4f7dcb container client-container: 
STEP: delete the pod
Feb  7 14:14:32.072: INFO: Waiting for pod downwardapi-volume-957a369f-cb4d-4678-935a-c7ba9a4f7dcb to disappear
Feb  7 14:14:32.089: INFO: Pod downwardapi-volume-957a369f-cb4d-4678-935a-c7ba9a4f7dcb no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:14:32.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8859" for this suite.
Feb  7 14:14:38.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:14:38.212: INFO: namespace projected-8859 deletion completed in 6.109496648s

• [SLOW TEST:14.493 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
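
The spec above surfaces a container's memory limit as a file via a downwardAPI volume. A minimal sketch (downward-limit-demo and the 64Mi limit are illustrative); with the default divisor of "1" the file holds the limit in bytes:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
kubectl logs downward-limit-demo   # expect 67108864 (64Mi in bytes)
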
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:14:38.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:14:44.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1886" for this suite.
Feb  7 14:14:50.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:14:51.058: INFO: namespace namespaces-1886 deletion completed in 6.165421775s
STEP: Destroying namespace "nsdeletetest-9667" for this suite.
Feb  7 14:14:51.060: INFO: Namespace nsdeletetest-9667 was already deleted
STEP: Destroying namespace "nsdeletetest-3688" for this suite.
Feb  7 14:14:57.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:14:57.217: INFO: namespace nsdeletetest-3688 deletion completed in 6.156021272s

• [SLOW TEST:19.004 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
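
Same pattern as the earlier namespace spec, but for Services. A hand-run sketch (svc-ns-demo and demo-svc are made-up names):

kubectl create namespace svc-ns-demo
kubectl create service clusterip demo-svc --tcp=80:80 -n svc-ns-demo
kubectl delete namespace svc-ns-demo
kubectl create namespace svc-ns-demo   # recreate under the same name
kubectl get services -n svc-ns-demo    # expect "No resources found."
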
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:14:57.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  7 14:14:57.351: INFO: Waiting up to 5m0s for pod "downwardapi-volume-967b3b50-c61c-4569-85e4-e0840edab41c" in namespace "downward-api-7342" to be "success or failure"
Feb  7 14:14:57.360: INFO: Pod "downwardapi-volume-967b3b50-c61c-4569-85e4-e0840edab41c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.49607ms
Feb  7 14:14:59.374: INFO: Pod "downwardapi-volume-967b3b50-c61c-4569-85e4-e0840edab41c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022192558s
Feb  7 14:15:01.385: INFO: Pod "downwardapi-volume-967b3b50-c61c-4569-85e4-e0840edab41c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033274455s
Feb  7 14:15:03.396: INFO: Pod "downwardapi-volume-967b3b50-c61c-4569-85e4-e0840edab41c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044265484s
Feb  7 14:15:05.407: INFO: Pod "downwardapi-volume-967b3b50-c61c-4569-85e4-e0840edab41c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.055285777s
STEP: Saw pod success
Feb  7 14:15:05.407: INFO: Pod "downwardapi-volume-967b3b50-c61c-4569-85e4-e0840edab41c" satisfied condition "success or failure"
Feb  7 14:15:05.411: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-967b3b50-c61c-4569-85e4-e0840edab41c container client-container: 
STEP: delete the pod
Feb  7 14:15:05.500: INFO: Waiting for pod downwardapi-volume-967b3b50-c61c-4569-85e4-e0840edab41c to disappear
Feb  7 14:15:05.505: INFO: Pod downwardapi-volume-967b3b50-c61c-4569-85e4-e0840edab41c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:15:05.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7342" for this suite.
Feb  7 14:15:11.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:15:11.810: INFO: namespace downward-api-7342 deletion completed in 6.29722895s

• [SLOW TEST:14.593 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
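
Here the knob under test is defaultMode, which applies to every file projected by the downwardAPI volume unless an item overrides it. A minimal sketch (the names and the 0400 mode are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-mode-demo
  labels:
    app: demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400   # every projected file gets mode 0400
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF
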
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:15:11.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Feb  7 14:15:11.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-881'
Feb  7 14:15:12.352: INFO: stderr: ""
Feb  7 14:15:12.352: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  7 14:15:12.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-881'
Feb  7 14:15:12.661: INFO: stderr: ""
Feb  7 14:15:12.661: INFO: stdout: "update-demo-nautilus-8sz8v update-demo-nautilus-xg84c "
Feb  7 14:15:12.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8sz8v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-881'
Feb  7 14:15:12.749: INFO: stderr: ""
Feb  7 14:15:12.749: INFO: stdout: ""
Feb  7 14:15:12.749: INFO: update-demo-nautilus-8sz8v is created but not running
Feb  7 14:15:17.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-881'
Feb  7 14:15:17.853: INFO: stderr: ""
Feb  7 14:15:17.853: INFO: stdout: "update-demo-nautilus-8sz8v update-demo-nautilus-xg84c "
Feb  7 14:15:17.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8sz8v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-881'
Feb  7 14:15:19.329: INFO: stderr: ""
Feb  7 14:15:19.329: INFO: stdout: ""
Feb  7 14:15:19.329: INFO: update-demo-nautilus-8sz8v is created but not running
Feb  7 14:15:24.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-881'
Feb  7 14:15:24.437: INFO: stderr: ""
Feb  7 14:15:24.437: INFO: stdout: "update-demo-nautilus-8sz8v update-demo-nautilus-xg84c "
Feb  7 14:15:24.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8sz8v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-881'
Feb  7 14:15:24.646: INFO: stderr: ""
Feb  7 14:15:24.646: INFO: stdout: "true"
Feb  7 14:15:24.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8sz8v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-881'
Feb  7 14:15:24.717: INFO: stderr: ""
Feb  7 14:15:24.717: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  7 14:15:24.717: INFO: validating pod update-demo-nautilus-8sz8v
Feb  7 14:15:24.728: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  7 14:15:24.728: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  7 14:15:24.728: INFO: update-demo-nautilus-8sz8v is verified up and running
Feb  7 14:15:24.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xg84c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-881'
Feb  7 14:15:24.834: INFO: stderr: ""
Feb  7 14:15:24.834: INFO: stdout: "true"
Feb  7 14:15:24.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xg84c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-881'
Feb  7 14:15:24.914: INFO: stderr: ""
Feb  7 14:15:24.914: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  7 14:15:24.914: INFO: validating pod update-demo-nautilus-xg84c
Feb  7 14:15:24.933: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  7 14:15:24.933: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  7 14:15:24.933: INFO: update-demo-nautilus-xg84c is verified up and running
STEP: rolling-update to new replication controller
Feb  7 14:15:24.936: INFO: scanned /root for discovery docs: 
Feb  7 14:15:24.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-881'
Feb  7 14:15:54.221: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb  7 14:15:54.221: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  7 14:15:54.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-881'
Feb  7 14:15:54.383: INFO: stderr: ""
Feb  7 14:15:54.383: INFO: stdout: "update-demo-kitten-6bkcq update-demo-kitten-nthmn update-demo-nautilus-xg84c "
STEP: Replicas for name=update-demo: expected=2 actual=3
Feb  7 14:15:59.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-881'
Feb  7 14:15:59.528: INFO: stderr: ""
Feb  7 14:15:59.528: INFO: stdout: "update-demo-kitten-6bkcq update-demo-kitten-nthmn "
Feb  7 14:15:59.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-6bkcq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-881'
Feb  7 14:15:59.638: INFO: stderr: ""
Feb  7 14:15:59.638: INFO: stdout: "true"
Feb  7 14:15:59.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-6bkcq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-881'
Feb  7 14:15:59.765: INFO: stderr: ""
Feb  7 14:15:59.765: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb  7 14:15:59.765: INFO: validating pod update-demo-kitten-6bkcq
Feb  7 14:15:59.785: INFO: got data: {
  "image": "kitten.jpg"
}

Feb  7 14:15:59.785: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb  7 14:15:59.785: INFO: update-demo-kitten-6bkcq is verified up and running
Feb  7 14:15:59.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-nthmn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-881'
Feb  7 14:15:59.900: INFO: stderr: ""
Feb  7 14:15:59.900: INFO: stdout: "true"
Feb  7 14:15:59.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-nthmn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-881'
Feb  7 14:16:00.036: INFO: stderr: ""
Feb  7 14:16:00.036: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb  7 14:16:00.036: INFO: validating pod update-demo-kitten-nthmn
Feb  7 14:16:00.043: INFO: got data: {
  "image": "kitten.jpg"
}

Feb  7 14:16:00.043: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb  7 14:16:00.043: INFO: update-demo-kitten-nthmn is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:16:00.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-881" for this suite.
Feb  7 14:16:22.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:16:22.213: INFO: namespace kubectl-881 deletion completed in 22.165347113s

• [SLOW TEST:70.403 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
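
kubectl rolling-update (RC-based, as the stderr above warns) has been removed; Deployments give the same nautilus-to-kitten flow. A sketch with illustrative names (kubectl create deployment names the container after the image basename, here "nautilus"):

kubectl create deployment update-demo --image=gcr.io/kubernetes-e2e-test-images/nautilus:1.0
kubectl scale deployment update-demo --replicas=2
kubectl set image deployment/update-demo nautilus=gcr.io/kubernetes-e2e-test-images/kitten:1.0
kubectl rollout status deployment/update-demo   # blocks until the new pods are ready
kubectl rollout undo deployment/update-demo     # optional: roll back to nautilus
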
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:16:22.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Feb  7 14:16:30.397: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Feb  7 14:16:50.556: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:16:50.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3950" for this suite.
Feb  7 14:16:56.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:16:56.729: INFO: namespace pods-3950 deletion completed in 6.154871141s

• [SLOW TEST:34.515 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
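
The grace-period flow above can be driven by hand; 30 seconds is the default grace period, and the names and numbers below are illustrative:

kubectl run grace-demo --image=busybox:1.29 --restart=Never -- sleep 3600
kubectl delete pod grace-demo --grace-period=10   # SIGTERM now, SIGKILL after 10s
# --grace-period=0 --force skips the wait entirely (use with care)
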
SSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:16:56.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb  7 14:16:56.841: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  7 14:16:56.858: INFO: Waiting for terminating namespaces to be deleted...
Feb  7 14:16:56.863: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Feb  7 14:16:56.876: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb  7 14:16:56.876: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  7 14:16:56.876: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb  7 14:16:56.876: INFO: 	Container weave ready: true, restart count 0
Feb  7 14:16:56.876: INFO: 	Container weave-npc ready: true, restart count 0
Feb  7 14:16:56.876: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Feb  7 14:16:56.932: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb  7 14:16:56.932: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb  7 14:16:56.932: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb  7 14:16:56.932: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb  7 14:16:56.932: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb  7 14:16:56.932: INFO: 	Container coredns ready: true, restart count 0
Feb  7 14:16:56.932: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb  7 14:16:56.932: INFO: 	Container etcd ready: true, restart count 0
Feb  7 14:16:56.932: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb  7 14:16:56.932: INFO: 	Container weave ready: true, restart count 0
Feb  7 14:16:56.932: INFO: 	Container weave-npc ready: true, restart count 0
Feb  7 14:16:56.932: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb  7 14:16:56.932: INFO: 	Container coredns ready: true, restart count 0
Feb  7 14:16:56.932: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb  7 14:16:56.932: INFO: 	Container kube-controller-manager ready: true, restart count 20
Feb  7 14:16:56.932: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb  7 14:16:56.932: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-46b593cd-caec-4a31-af7a-52ab921375d2 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-46b593cd-caec-4a31-af7a-52ab921375d2 off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-46b593cd-caec-4a31-af7a-52ab921375d2
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:17:15.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6685" for this suite.
Feb  7 14:17:45.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:17:45.313: INFO: namespace sched-pred-6685 deletion completed in 30.128001074s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:48.583 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
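
The scheduling spec above is a label/nodeSelector round trip. By hand it looks like the sketch below; my-node and the e2e-demo label are placeholders:

kubectl label node my-node e2e-demo=42
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo
spec:
  nodeSelector:
    e2e-demo: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
kubectl get pod nodeselector-demo -o wide   # should be scheduled onto my-node
kubectl label node my-node e2e-demo-        # trailing '-' removes the label, as the test does
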
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:17:45.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Feb  7 14:17:45.454: INFO: Waiting up to 5m0s for pod "var-expansion-3fa35049-788b-4c96-8958-422e09865922" in namespace "var-expansion-7246" to be "success or failure"
Feb  7 14:17:45.468: INFO: Pod "var-expansion-3fa35049-788b-4c96-8958-422e09865922": Phase="Pending", Reason="", readiness=false. Elapsed: 13.604424ms
Feb  7 14:17:47.478: INFO: Pod "var-expansion-3fa35049-788b-4c96-8958-422e09865922": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024082877s
Feb  7 14:17:49.550: INFO: Pod "var-expansion-3fa35049-788b-4c96-8958-422e09865922": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095615346s
Feb  7 14:17:51.557: INFO: Pod "var-expansion-3fa35049-788b-4c96-8958-422e09865922": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102821737s
Feb  7 14:17:53.567: INFO: Pod "var-expansion-3fa35049-788b-4c96-8958-422e09865922": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.112681305s
STEP: Saw pod success
Feb  7 14:17:53.567: INFO: Pod "var-expansion-3fa35049-788b-4c96-8958-422e09865922" satisfied condition "success or failure"
Feb  7 14:17:53.575: INFO: Trying to get logs from node iruya-node pod var-expansion-3fa35049-788b-4c96-8958-422e09865922 container dapi-container: 
STEP: delete the pod
Feb  7 14:17:53.651: INFO: Waiting for pod var-expansion-3fa35049-788b-4c96-8958-422e09865922 to disappear
Feb  7 14:17:53.661: INFO: Pod var-expansion-3fa35049-788b-4c96-8958-422e09865922 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:17:53.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7246" for this suite.
Feb  7 14:17:59.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:17:59.931: INFO: namespace var-expansion-7246 deletion completed in 6.263066019s

• [SLOW TEST:14.617 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
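
Variable expansion means the kubelet substitutes $(VAR) references in command/args from the container's env before the process starts. A minimal sketch (names and message are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    env:
    - name: MESSAGE
      value: "test-value"
    command: ["echo", "$(MESSAGE)"]   # expanded by the kubelet, no shell involved
EOF
kubectl logs var-expansion-demo   # expect "test-value"
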
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:17:59.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Feb  7 14:17:59.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7796'
Feb  7 14:18:00.391: INFO: stderr: ""
Feb  7 14:18:00.391: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Feb  7 14:18:01.403: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 14:18:01.403: INFO: Found 0 / 1
Feb  7 14:18:02.405: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 14:18:02.405: INFO: Found 0 / 1
Feb  7 14:18:03.403: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 14:18:03.403: INFO: Found 0 / 1
Feb  7 14:18:04.407: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 14:18:04.407: INFO: Found 0 / 1
Feb  7 14:18:05.400: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 14:18:05.400: INFO: Found 0 / 1
Feb  7 14:18:06.401: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 14:18:06.401: INFO: Found 0 / 1
Feb  7 14:18:07.402: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 14:18:07.402: INFO: Found 0 / 1
Feb  7 14:18:08.405: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 14:18:08.405: INFO: Found 1 / 1
Feb  7 14:18:08.405: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  7 14:18:08.427: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 14:18:08.427: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Feb  7 14:18:08.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-9dx96 redis-master --namespace=kubectl-7796'
Feb  7 14:18:08.591: INFO: stderr: ""
Feb  7 14:18:08.591: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 07 Feb 14:18:06.843 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 07 Feb 14:18:06.844 # Server started, Redis version 3.2.12\n1:M 07 Feb 14:18:06.845 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 07 Feb 14:18:06.845 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Feb  7 14:18:08.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-9dx96 redis-master --namespace=kubectl-7796 --tail=1'
Feb  7 14:18:09.170: INFO: stderr: ""
Feb  7 14:18:09.170: INFO: stdout: "1:M 07 Feb 14:18:06.845 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Feb  7 14:18:09.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-9dx96 redis-master --namespace=kubectl-7796 --limit-bytes=1'
Feb  7 14:18:09.332: INFO: stderr: ""
Feb  7 14:18:09.332: INFO: stdout: " "
STEP: exposing timestamps
Feb  7 14:18:09.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-9dx96 redis-master --namespace=kubectl-7796 --tail=1 --timestamps'
Feb  7 14:18:09.524: INFO: stderr: ""
Feb  7 14:18:09.525: INFO: stdout: "2020-02-07T14:18:06.84737515Z 1:M 07 Feb 14:18:06.845 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Feb  7 14:18:12.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-9dx96 redis-master --namespace=kubectl-7796 --since=1s'
Feb  7 14:18:12.222: INFO: stderr: ""
Feb  7 14:18:12.222: INFO: stdout: ""
Feb  7 14:18:12.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-9dx96 redis-master --namespace=kubectl-7796 --since=24h'
Feb  7 14:18:12.462: INFO: stderr: ""
Feb  7 14:18:12.462: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 07 Feb 14:18:06.843 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 07 Feb 14:18:06.844 # Server started, Redis version 3.2.12\n1:M 07 Feb 14:18:06.845 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 07 Feb 14:18:06.845 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Feb  7 14:18:12.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7796'
Feb  7 14:18:12.574: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  7 14:18:12.574: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Feb  7 14:18:12.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-7796'
Feb  7 14:18:12.660: INFO: stderr: "No resources found.\n"
Feb  7 14:18:12.660: INFO: stdout: ""
Feb  7 14:18:12.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-7796 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  7 14:18:12.770: INFO: stderr: ""
Feb  7 14:18:12.770: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:18:12.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7796" for this suite.
Feb  7 14:18:34.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:18:34.928: INFO: namespace kubectl-7796 deletion completed in 22.133747271s

• [SLOW TEST:34.996 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:18:34.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb  7 14:18:35.134: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-8066,SelfLink:/api/v1/namespaces/watch-8066/configmaps/e2e-watch-test-resource-version,UID:5073bf5d-aeb3-4ccf-9420-39b456b48b29,ResourceVersion:23453523,Generation:0,CreationTimestamp:2020-02-07 14:18:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  7 14:18:35.134: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-8066,SelfLink:/api/v1/namespaces/watch-8066/configmaps/e2e-watch-test-resource-version,UID:5073bf5d-aeb3-4ccf-9420-39b456b48b29,ResourceVersion:23453525,Generation:0,CreationTimestamp:2020-02-07 14:18:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
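
Starting a watch at a given resource version is exposed through the API's watch and resourceVersion query parameters; a minimal sketch using kubectl's raw passthrough (namespace and object name taken from this run, the shell variable is illustrative):

    # Capture the resourceVersion returned by the first update
    RV=$(kubectl get configmap e2e-watch-test-resource-version -n watch-8066 \
      -o jsonpath='{.metadata.resourceVersion}')
    # Replay every event after that version; MODIFIED then DELETED arrive in order
    # (streams JSON watch events until interrupted)
    kubectl get --raw "/api/v1/namespaces/watch-8066/configmaps?watch=true&resourceVersion=${RV}"
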
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:18:35.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8066" for this suite.
Feb  7 14:18:41.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:18:41.305: INFO: namespace watch-8066 deletion completed in 6.16196563s

• [SLOW TEST:6.377 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:18:41.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Feb  7 14:18:53.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-05a53b63-5e15-4d65-9f85-822ea8fe9b72 -c busybox-main-container --namespace=emptydir-9841 -- cat /usr/share/volumeshare/shareddata.txt'
Feb  7 14:18:54.058: INFO: stderr: "I0207 14:18:53.684336    2383 log.go:172] (0xc000938370) (0xc00082e8c0) Create stream\nI0207 14:18:53.684470    2383 log.go:172] (0xc000938370) (0xc00082e8c0) Stream added, broadcasting: 1\nI0207 14:18:53.691514    2383 log.go:172] (0xc000938370) Reply frame received for 1\nI0207 14:18:53.691544    2383 log.go:172] (0xc000938370) (0xc00056c1e0) Create stream\nI0207 14:18:53.691551    2383 log.go:172] (0xc000938370) (0xc00056c1e0) Stream added, broadcasting: 3\nI0207 14:18:53.693431    2383 log.go:172] (0xc000938370) Reply frame received for 3\nI0207 14:18:53.693454    2383 log.go:172] (0xc000938370) (0xc00056c280) Create stream\nI0207 14:18:53.693461    2383 log.go:172] (0xc000938370) (0xc00056c280) Stream added, broadcasting: 5\nI0207 14:18:53.694704    2383 log.go:172] (0xc000938370) Reply frame received for 5\nI0207 14:18:53.819986    2383 log.go:172] (0xc000938370) Data frame received for 3\nI0207 14:18:53.820101    2383 log.go:172] (0xc00056c1e0) (3) Data frame handling\nI0207 14:18:53.820173    2383 log.go:172] (0xc00056c1e0) (3) Data frame sent\nI0207 14:18:54.052065    2383 log.go:172] (0xc000938370) Data frame received for 1\nI0207 14:18:54.052227    2383 log.go:172] (0xc000938370) (0xc00056c280) Stream removed, broadcasting: 5\nI0207 14:18:54.052281    2383 log.go:172] (0xc00082e8c0) (1) Data frame handling\nI0207 14:18:54.052291    2383 log.go:172] (0xc00082e8c0) (1) Data frame sent\nI0207 14:18:54.052308    2383 log.go:172] (0xc000938370) (0xc00056c1e0) Stream removed, broadcasting: 3\nI0207 14:18:54.052323    2383 log.go:172] (0xc000938370) (0xc00082e8c0) Stream removed, broadcasting: 1\nI0207 14:18:54.052332    2383 log.go:172] (0xc000938370) Go away received\nI0207 14:18:54.052751    2383 log.go:172] (0xc000938370) (0xc00082e8c0) Stream removed, broadcasting: 1\nI0207 14:18:54.052760    2383 log.go:172] (0xc000938370) (0xc00056c1e0) Stream removed, broadcasting: 3\nI0207 14:18:54.052764    2383 log.go:172] (0xc000938370) (0xc00056c280) Stream removed, broadcasting: 5\n"
Feb  7 14:18:54.058: INFO: stdout: "Hello from the busy-box sub-container\n"
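
The pod under test pairs a writer and a reader over a single emptyDir; a minimal sketch of that shape, assuming the container names implied by the output above (image tag and sleep commands are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-sharedvolume-demo
    spec:
      volumes:
      - name: shared-data
        emptyDir: {}
      containers:
      - name: busybox-main-container     # reader
        image: busybox:1.29
        command: ["sh", "-c", "sleep 3600"]
        volumeMounts:
        - name: shared-data
          mountPath: /usr/share/volumeshare
      - name: busybox-sub-container      # writer
        image: busybox:1.29
        command: ["sh", "-c", "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
        volumeMounts:
        - name: shared-data
          mountPath: /usr/share/volumeshare
    EOF
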
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:18:54.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9841" for this suite.
Feb  7 14:19:00.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:19:00.241: INFO: namespace emptydir-9841 deletion completed in 6.169544047s

• [SLOW TEST:18.935 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:19:00.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb  7 14:19:10.974: INFO: Successfully updated pod "pod-update-fc1aaabd-50f9-4a91-80cc-edc718a1df27"
STEP: verifying the updated pod is in kubernetes
Feb  7 14:19:11.046: INFO: Pod update OK
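
Only a few pod fields are mutable in place (labels, annotations, container images, activeDeadlineSeconds); the log does not show which field this test changed, but a label update is the common case. A sketch against the pod from this run:

    # Update a label on a running pod without recreating it
    kubectl label pod pod-update-fc1aaabd-50f9-4a91-80cc-edc718a1df27 \
      --namespace=pods-3435 time=updated --overwrite
    # Equivalent via a strategic-merge patch
    kubectl patch pod pod-update-fc1aaabd-50f9-4a91-80cc-edc718a1df27 \
      --namespace=pods-3435 -p '{"metadata":{"labels":{"time":"updated"}}}'
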
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:19:11.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3435" for this suite.
Feb  7 14:19:35.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:19:35.249: INFO: namespace pods-3435 deletion completed in 24.190477442s

• [SLOW TEST:35.008 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:19:35.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Feb  7 14:19:35.396: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
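
The same create-observe-delete loop can be reproduced by hand: open a watch in one shell, then submit and gracefully delete a pod in another (pod name and image are illustrative):

    # Shell 1: stream lifecycle events for pods in the namespace
    kubectl get pods -n pods-2331 -w
    # Shell 2: create a pod, then delete it with a grace period
    kubectl run pod-submit-demo --image=nginx --restart=Never -n pods-2331
    kubectl delete pod pod-submit-demo -n pods-2331 --grace-period=30
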
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:19:50.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2331" for this suite.
Feb  7 14:19:57.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:19:57.192: INFO: namespace pods-2331 deletion completed in 6.190127499s

• [SLOW TEST:21.943 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:19:57.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb  7 14:20:06.763: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
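
With terminationMessagePolicy: FallbackToLogsOnError, logs are copied into the termination message only when the container fails; on success the message stays empty, which is what the assertion above checks. A sketch of such a pod and the status field the test reads (names are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: termination-message-demo
    spec:
      restartPolicy: Never
      containers:
      - name: term-demo
        image: busybox:1.29
        command: ["sh", "-c", "echo ran fine; exit 0"]
        terminationMessagePolicy: FallbackToLogsOnError
    EOF
    # Empty output expected: the container succeeded, so no log fallback occurs
    kubectl get pod termination-message-demo \
      -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
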
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:20:06.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1822" for this suite.
Feb  7 14:20:12.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:20:13.033: INFO: namespace container-runtime-1822 deletion completed in 6.242556587s

• [SLOW TEST:15.840 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:20:13.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  7 14:20:21.472: INFO: Waiting up to 5m0s for pod "client-envvars-b57babc8-b4e5-45fe-bcd8-7efddae9815c" in namespace "pods-7445" to be "success or failure"
Feb  7 14:20:21.480: INFO: Pod "client-envvars-b57babc8-b4e5-45fe-bcd8-7efddae9815c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.824721ms
Feb  7 14:20:23.494: INFO: Pod "client-envvars-b57babc8-b4e5-45fe-bcd8-7efddae9815c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022219796s
Feb  7 14:20:25.500: INFO: Pod "client-envvars-b57babc8-b4e5-45fe-bcd8-7efddae9815c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028498243s
Feb  7 14:20:27.507: INFO: Pod "client-envvars-b57babc8-b4e5-45fe-bcd8-7efddae9815c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035459649s
Feb  7 14:20:29.516: INFO: Pod "client-envvars-b57babc8-b4e5-45fe-bcd8-7efddae9815c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04465892s
Feb  7 14:20:31.527: INFO: Pod "client-envvars-b57babc8-b4e5-45fe-bcd8-7efddae9815c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.055128995s
STEP: Saw pod success
Feb  7 14:20:31.527: INFO: Pod "client-envvars-b57babc8-b4e5-45fe-bcd8-7efddae9815c" satisfied condition "success or failure"
Feb  7 14:20:31.533: INFO: Trying to get logs from node iruya-node pod client-envvars-b57babc8-b4e5-45fe-bcd8-7efddae9815c container env3cont: 
STEP: delete the pod
Feb  7 14:20:31.629: INFO: Waiting for pod client-envvars-b57babc8-b4e5-45fe-bcd8-7efddae9815c to disappear
Feb  7 14:20:31.638: INFO: Pod client-envvars-b57babc8-b4e5-45fe-bcd8-7efddae9815c no longer exists
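
Containers started after a service exists receive that service's address as injected environment variables (the FOO_SERVICE_HOST / FOO_SERVICE_PORT family); a sketch of inspecting them, assuming a hypothetical pod and service name:

    # For a service named "fooservice" in the pod's namespace, the kubelet injects
    # FOOSERVICE_SERVICE_HOST, FOOSERVICE_SERVICE_PORT, FOOSERVICE_PORT, ...
    kubectl exec client-envvars-demo -n pods-7445 -- env | grep -i SERVICE
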
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:20:31.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7445" for this suite.
Feb  7 14:21:19.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:21:19.801: INFO: namespace pods-7445 deletion completed in 48.152817727s

• [SLOW TEST:66.768 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:21:19.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6730.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6730.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6730.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6730.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  7 14:21:32.243: INFO: File wheezy_udp@dns-test-service-3.dns-6730.svc.cluster.local from pod  dns-6730/dns-test-9187b887-958e-4ec4-8a87-200411ee8073 contains '' instead of 'foo.example.com.'
Feb  7 14:21:32.253: INFO: File jessie_udp@dns-test-service-3.dns-6730.svc.cluster.local from pod  dns-6730/dns-test-9187b887-958e-4ec4-8a87-200411ee8073 contains '' instead of 'foo.example.com.'
Feb  7 14:21:32.253: INFO: Lookups using dns-6730/dns-test-9187b887-958e-4ec4-8a87-200411ee8073 failed for: [wheezy_udp@dns-test-service-3.dns-6730.svc.cluster.local jessie_udp@dns-test-service-3.dns-6730.svc.cluster.local]

Feb  7 14:21:37.278: INFO: DNS probes using dns-test-9187b887-958e-4ec4-8a87-200411ee8073 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6730.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6730.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6730.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6730.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  7 14:21:53.490: INFO: File wheezy_udp@dns-test-service-3.dns-6730.svc.cluster.local from pod  dns-6730/dns-test-5be5ecc7-1dad-47b1-bb71-1319ac26edb7 contains '' instead of 'bar.example.com.'
Feb  7 14:21:53.503: INFO: File jessie_udp@dns-test-service-3.dns-6730.svc.cluster.local from pod  dns-6730/dns-test-5be5ecc7-1dad-47b1-bb71-1319ac26edb7 contains '' instead of 'bar.example.com.'
Feb  7 14:21:53.503: INFO: Lookups using dns-6730/dns-test-5be5ecc7-1dad-47b1-bb71-1319ac26edb7 failed for: [wheezy_udp@dns-test-service-3.dns-6730.svc.cluster.local jessie_udp@dns-test-service-3.dns-6730.svc.cluster.local]

Feb  7 14:21:58.533: INFO: File wheezy_udp@dns-test-service-3.dns-6730.svc.cluster.local from pod  dns-6730/dns-test-5be5ecc7-1dad-47b1-bb71-1319ac26edb7 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  7 14:21:58.547: INFO: File jessie_udp@dns-test-service-3.dns-6730.svc.cluster.local from pod  dns-6730/dns-test-5be5ecc7-1dad-47b1-bb71-1319ac26edb7 contains '' instead of 'bar.example.com.'
Feb  7 14:21:58.547: INFO: Lookups using dns-6730/dns-test-5be5ecc7-1dad-47b1-bb71-1319ac26edb7 failed for: [wheezy_udp@dns-test-service-3.dns-6730.svc.cluster.local jessie_udp@dns-test-service-3.dns-6730.svc.cluster.local]

Feb  7 14:22:03.515: INFO: File wheezy_udp@dns-test-service-3.dns-6730.svc.cluster.local from pod  dns-6730/dns-test-5be5ecc7-1dad-47b1-bb71-1319ac26edb7 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  7 14:22:03.521: INFO: File jessie_udp@dns-test-service-3.dns-6730.svc.cluster.local from pod  dns-6730/dns-test-5be5ecc7-1dad-47b1-bb71-1319ac26edb7 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  7 14:22:03.521: INFO: Lookups using dns-6730/dns-test-5be5ecc7-1dad-47b1-bb71-1319ac26edb7 failed for: [wheezy_udp@dns-test-service-3.dns-6730.svc.cluster.local jessie_udp@dns-test-service-3.dns-6730.svc.cluster.local]

Feb  7 14:22:08.545: INFO: DNS probes using dns-test-5be5ecc7-1dad-47b1-bb71-1319ac26edb7 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6730.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6730.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6730.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-6730.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  7 14:22:26.924: INFO: File jessie_udp@dns-test-service-3.dns-6730.svc.cluster.local from pod  dns-6730/dns-test-46304026-943f-4999-8721-2baa70b7cfd9 contains '' instead of '10.98.79.121'
Feb  7 14:22:26.924: INFO: Lookups using dns-6730/dns-test-46304026-943f-4999-8721-2baa70b7cfd9 failed for: [jessie_udp@dns-test-service-3.dns-6730.svc.cluster.local]

Feb  7 14:22:32.184: INFO: DNS probes using dns-test-46304026-943f-4999-8721-2baa70b7cfd9 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
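
An ExternalName service is just a CNAME published in cluster DNS, which is why the probes above expect foo.example.com., then bar.example.com., then an A record once the service is switched to ClusterIP. A sketch of the first two states (probe image is illustrative):

    kubectl create service externalname dns-test-service-3 \
      --external-name foo.example.com -n dns-6730
    # Repoint the CNAME, as the test does
    kubectl patch service dns-test-service-3 -n dns-6730 \
      -p '{"spec":{"externalName":"bar.example.com"}}'
    # Resolve from inside the cluster
    kubectl run dig --image=tutum/dnsutils --restart=Never --rm -it -n dns-6730 -- \
      dig +short dns-test-service-3.dns-6730.svc.cluster.local CNAME
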
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:22:32.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6730" for this suite.
Feb  7 14:22:40.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:22:40.707: INFO: namespace dns-6730 deletion completed in 8.163560439s

• [SLOW TEST:80.905 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:22:40.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb  7 14:22:40.848: INFO: Waiting up to 5m0s for pod "downward-api-ed373713-26e3-4983-9137-08afd72438e7" in namespace "downward-api-9340" to be "success or failure"
Feb  7 14:22:40.925: INFO: Pod "downward-api-ed373713-26e3-4983-9137-08afd72438e7": Phase="Pending", Reason="", readiness=false. Elapsed: 77.33974ms
Feb  7 14:22:42.941: INFO: Pod "downward-api-ed373713-26e3-4983-9137-08afd72438e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093151579s
Feb  7 14:22:44.950: INFO: Pod "downward-api-ed373713-26e3-4983-9137-08afd72438e7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101582489s
Feb  7 14:22:46.970: INFO: Pod "downward-api-ed373713-26e3-4983-9137-08afd72438e7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122238547s
Feb  7 14:22:48.981: INFO: Pod "downward-api-ed373713-26e3-4983-9137-08afd72438e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.133150378s
STEP: Saw pod success
Feb  7 14:22:48.981: INFO: Pod "downward-api-ed373713-26e3-4983-9137-08afd72438e7" satisfied condition "success or failure"
Feb  7 14:22:48.985: INFO: Trying to get logs from node iruya-node pod downward-api-ed373713-26e3-4983-9137-08afd72438e7 container dapi-container: 
STEP: delete the pod
Feb  7 14:22:49.054: INFO: Waiting for pod downward-api-ed373713-26e3-4983-9137-08afd72438e7 to disappear
Feb  7 14:22:49.068: INFO: Pod downward-api-ed373713-26e3-4983-9137-08afd72438e7 no longer exists
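
Requests and limits reach the container through env entries with valueFrom.resourceFieldRef; a minimal sketch of the manifest shape this test builds (names and resource values are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-api-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox:1.29
        command: ["sh", "-c", "env | grep -E 'CPU|MEMORY'"]
        resources:
          requests:
            cpu: 250m
            memory: 32Mi
          limits:
            cpu: 500m
            memory: 64Mi
        env:
        - name: CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu
        - name: MEMORY_REQUEST
          valueFrom:
            resourceFieldRef:
              resource: requests.memory
    EOF
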
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:22:49.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9340" for this suite.
Feb  7 14:22:55.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:22:55.217: INFO: namespace downward-api-9340 deletion completed in 6.125333385s

• [SLOW TEST:14.510 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:22:55.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-6216
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-6216
STEP: Deleting pre-stop pod
Feb  7 14:23:16.601: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
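
The "prestop": 1 counter seen above is driven by a lifecycle hook on the tester pod: when the pod is deleted, the kubelet runs the preStop hook before sending SIGTERM, and the hook reports to the server pod. A minimal sketch of the hook itself, assuming an illustrative reporting URL (the real e2e tester's endpoint is not shown in this log):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: tester
      namespace: prestop-6216
    spec:
      containers:
      - name: tester
        image: busybox:1.29
        command: ["sh", "-c", "sleep 3600"]
        lifecycle:
          preStop:
            exec:
              command: ["sh", "-c", "wget -qO- http://server:8080/write?prestop=1"]
    EOF
    # Deleting the pod fires the hook before termination begins
    kubectl delete pod tester -n prestop-6216
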
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:23:16.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-6216" for this suite.
Feb  7 14:23:58.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:23:58.762: INFO: namespace prestop-6216 deletion completed in 42.125108069s

• [SLOW TEST:63.545 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:23:58.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-db097bf4-c3d3-4f3a-9c71-85aebc84d12b
STEP: Creating a pod to test consume configMaps
Feb  7 14:23:58.967: INFO: Waiting up to 5m0s for pod "pod-configmaps-4d0195a7-b322-411d-8b5a-aae5c4cd6e00" in namespace "configmap-2542" to be "success or failure"
Feb  7 14:23:58.976: INFO: Pod "pod-configmaps-4d0195a7-b322-411d-8b5a-aae5c4cd6e00": Phase="Pending", Reason="", readiness=false. Elapsed: 8.255571ms
Feb  7 14:24:00.983: INFO: Pod "pod-configmaps-4d0195a7-b322-411d-8b5a-aae5c4cd6e00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015717542s
Feb  7 14:24:03.000: INFO: Pod "pod-configmaps-4d0195a7-b322-411d-8b5a-aae5c4cd6e00": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032949457s
Feb  7 14:24:05.010: INFO: Pod "pod-configmaps-4d0195a7-b322-411d-8b5a-aae5c4cd6e00": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042422276s
Feb  7 14:24:07.019: INFO: Pod "pod-configmaps-4d0195a7-b322-411d-8b5a-aae5c4cd6e00": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051735835s
Feb  7 14:24:09.027: INFO: Pod "pod-configmaps-4d0195a7-b322-411d-8b5a-aae5c4cd6e00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059212512s
STEP: Saw pod success
Feb  7 14:24:09.027: INFO: Pod "pod-configmaps-4d0195a7-b322-411d-8b5a-aae5c4cd6e00" satisfied condition "success or failure"
Feb  7 14:24:09.031: INFO: Trying to get logs from node iruya-node pod pod-configmaps-4d0195a7-b322-411d-8b5a-aae5c4cd6e00 container configmap-volume-test: 
STEP: delete the pod
Feb  7 14:24:09.103: INFO: Waiting for pod pod-configmaps-4d0195a7-b322-411d-8b5a-aae5c4cd6e00 to disappear
Feb  7 14:24:09.112: INFO: Pod pod-configmaps-4d0195a7-b322-411d-8b5a-aae5c4cd6e00 no longer exists
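
"With mappings" means the volume uses items: to project a key onto a chosen path, and "as non-root" means the pod runs under a non-zero UID; a sketch combining both, using the configMap created above (key, path, and UID are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-configmaps-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000
      containers:
      - name: configmap-volume-test
        image: busybox:1.29
        command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
        volumeMounts:
        - name: configmap-volume
          mountPath: /etc/configmap-volume
      volumes:
      - name: configmap-volume
        configMap:
          name: configmap-test-volume-map-db097bf4-c3d3-4f3a-9c71-85aebc84d12b
          items:
          - key: data-2
            path: path/to/data-2
    EOF
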
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:24:09.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2542" for this suite.
Feb  7 14:24:15.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:24:15.352: INFO: namespace configmap-2542 deletion completed in 6.235810649s

• [SLOW TEST:16.589 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:24:15.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
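A command that always fails leaves the container in a terminated state whose reason the kubelet must populate (typically Error); the assertion reduces to a status field read. A sketch against a hypothetical pod name:

    # While the container is crash-looping, the last terminated state records the reason
    kubectl get pod bin-false-pod -n kubelet-test-4452 \
      -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
    # Expected output for a command that always fails: Error
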
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:24:23.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4452" for this suite.
Feb  7 14:24:29.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:24:29.679: INFO: namespace kubelet-test-4452 deletion completed in 6.182790258s

• [SLOW TEST:14.326 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have a terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:24:29.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-637da4d7-a135-47b7-864f-565b46aca817
STEP: Creating a pod to test consume secrets
Feb  7 14:24:29.796: INFO: Waiting up to 5m0s for pod "pod-secrets-8e01f5d9-8921-4b7f-80d0-b2235456c713" in namespace "secrets-4623" to be "success or failure"
Feb  7 14:24:29.804: INFO: Pod "pod-secrets-8e01f5d9-8921-4b7f-80d0-b2235456c713": Phase="Pending", Reason="", readiness=false. Elapsed: 7.767639ms
Feb  7 14:24:31.816: INFO: Pod "pod-secrets-8e01f5d9-8921-4b7f-80d0-b2235456c713": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019316269s
Feb  7 14:24:33.828: INFO: Pod "pod-secrets-8e01f5d9-8921-4b7f-80d0-b2235456c713": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031921151s
Feb  7 14:24:35.835: INFO: Pod "pod-secrets-8e01f5d9-8921-4b7f-80d0-b2235456c713": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038680711s
Feb  7 14:24:37.844: INFO: Pod "pod-secrets-8e01f5d9-8921-4b7f-80d0-b2235456c713": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04803933s
STEP: Saw pod success
Feb  7 14:24:37.845: INFO: Pod "pod-secrets-8e01f5d9-8921-4b7f-80d0-b2235456c713" satisfied condition "success or failure"
Feb  7 14:24:37.850: INFO: Trying to get logs from node iruya-node pod pod-secrets-8e01f5d9-8921-4b7f-80d0-b2235456c713 container secret-volume-test: 
STEP: delete the pod
Feb  7 14:24:37.999: INFO: Waiting for pod pod-secrets-8e01f5d9-8921-4b7f-80d0-b2235456c713 to disappear
Feb  7 14:24:38.011: INFO: Pod pod-secrets-8e01f5d9-8921-4b7f-80d0-b2235456c713 no longer exists
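
"Item Mode set" refers to a per-item file mode in the secret volume projection; a sketch using the secret created above (key, path, and mode are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-secrets-demo
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox:1.29
        # -L dereferences the projected symlink; expect 400
        command: ["sh", "-c", "stat -Lc '%a' /etc/secret-volume/new-path-data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: secret-test-map-637da4d7-a135-47b7-864f-565b46aca817
          items:
          - key: data-1
            path: new-path-data-1
            mode: 0400    # per-item file mode inside the pod
    EOF
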
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:24:38.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4623" for this suite.
Feb  7 14:24:44.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:24:44.144: INFO: namespace secrets-4623 deletion completed in 6.124417245s

• [SLOW TEST:14.466 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:24:44.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb  7 14:24:44.212: INFO: Waiting up to 5m0s for pod "pod-a4d65f26-bce7-4fc7-bcb2-60ce3371aa4d" in namespace "emptydir-5032" to be "success or failure"
Feb  7 14:24:44.240: INFO: Pod "pod-a4d65f26-bce7-4fc7-bcb2-60ce3371aa4d": Phase="Pending", Reason="", readiness=false. Elapsed: 27.80112ms
Feb  7 14:24:46.247: INFO: Pod "pod-a4d65f26-bce7-4fc7-bcb2-60ce3371aa4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034543445s
Feb  7 14:24:48.256: INFO: Pod "pod-a4d65f26-bce7-4fc7-bcb2-60ce3371aa4d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043523184s
Feb  7 14:24:50.266: INFO: Pod "pod-a4d65f26-bce7-4fc7-bcb2-60ce3371aa4d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053467553s
Feb  7 14:24:52.275: INFO: Pod "pod-a4d65f26-bce7-4fc7-bcb2-60ce3371aa4d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062451992s
Feb  7 14:24:54.282: INFO: Pod "pod-a4d65f26-bce7-4fc7-bcb2-60ce3371aa4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.06977127s
STEP: Saw pod success
Feb  7 14:24:54.282: INFO: Pod "pod-a4d65f26-bce7-4fc7-bcb2-60ce3371aa4d" satisfied condition "success or failure"
Feb  7 14:24:54.288: INFO: Trying to get logs from node iruya-node pod pod-a4d65f26-bce7-4fc7-bcb2-60ce3371aa4d container test-container: 
STEP: delete the pod
Feb  7 14:24:54.440: INFO: Waiting for pod pod-a4d65f26-bce7-4fc7-bcb2-60ce3371aa4d to disappear
Feb  7 14:24:54.450: INFO: Pod pod-a4d65f26-bce7-4fc7-bcb2-60ce3371aa4d no longer exists
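
"On tmpfs" means the emptyDir is declared with medium: Memory, so the kubelet backs it with a tmpfs mount; the "correct mode" asserted here is the default 0777 on the volume root. A sketch (names are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-tmpfs-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox:1.29
        # Prints the mount's filesystem type (tmpfs) and the volume root's mode (777)
        command: ["sh", "-c", "mount | grep /test-volume && stat -c '%a' /test-volume"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir:
          medium: Memory
    EOF
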
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:24:54.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5032" for this suite.
Feb  7 14:25:00.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:25:00.646: INFO: namespace emptydir-5032 deletion completed in 6.187771782s

• [SLOW TEST:16.502 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:25:00.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb  7 14:25:00.741: INFO: Waiting up to 5m0s for pod "pod-26e2df5b-2ac5-42dc-86b2-5c8d7ea6c134" in namespace "emptydir-7587" to be "success or failure"
Feb  7 14:25:00.775: INFO: Pod "pod-26e2df5b-2ac5-42dc-86b2-5c8d7ea6c134": Phase="Pending", Reason="", readiness=false. Elapsed: 34.212654ms
Feb  7 14:25:02.805: INFO: Pod "pod-26e2df5b-2ac5-42dc-86b2-5c8d7ea6c134": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064482889s
Feb  7 14:25:04.812: INFO: Pod "pod-26e2df5b-2ac5-42dc-86b2-5c8d7ea6c134": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071562514s
Feb  7 14:25:06.817: INFO: Pod "pod-26e2df5b-2ac5-42dc-86b2-5c8d7ea6c134": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076615186s
Feb  7 14:25:08.825: INFO: Pod "pod-26e2df5b-2ac5-42dc-86b2-5c8d7ea6c134": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084585137s
Feb  7 14:25:10.831: INFO: Pod "pod-26e2df5b-2ac5-42dc-86b2-5c8d7ea6c134": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.090406821s
STEP: Saw pod success
Feb  7 14:25:10.831: INFO: Pod "pod-26e2df5b-2ac5-42dc-86b2-5c8d7ea6c134" satisfied condition "success or failure"
Feb  7 14:25:10.833: INFO: Trying to get logs from node iruya-node pod pod-26e2df5b-2ac5-42dc-86b2-5c8d7ea6c134 container test-container: 
STEP: delete the pod
Feb  7 14:25:10.879: INFO: Waiting for pod pod-26e2df5b-2ac5-42dc-86b2-5c8d7ea6c134 to disappear
Feb  7 14:25:10.888: INFO: Pod pod-26e2df5b-2ac5-42dc-86b2-5c8d7ea6c134 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:25:10.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7587" for this suite.
Feb  7 14:25:16.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:25:17.089: INFO: namespace emptydir-7587 deletion completed in 6.193294496s

• [SLOW TEST:16.443 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:25:17.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Feb  7 14:25:29.758: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6189 pod-service-account-a807abc1-27be-460f-ad6d-e33e50bd5459 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Feb  7 14:25:32.161: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6189 pod-service-account-a807abc1-27be-460f-ad6d-e33e50bd5459 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Feb  7 14:25:32.943: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6189 pod-service-account-a807abc1-27be-460f-ad6d-e33e50bd5459 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:25:33.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6189" for this suite.
Feb  7 14:25:39.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:25:39.733: INFO: namespace svcaccounts-6189 deletion completed in 6.202890307s

• [SLOW TEST:22.643 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:25:39.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb  7 14:25:50.185: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
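
Here the message is read back from a non-default terminationMessagePath while the container runs as a non-root user, so the kubelet must make the file writable for that UID; a sketch matching the DONE message asserted above (path and UID are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: termination-path-demo
    spec:
      restartPolicy: Never
      containers:
      - name: term-demo
        image: busybox:1.29
        securityContext:
          runAsUser: 1000
        command: ["sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
        terminationMessagePath: /dev/termination-custom-log
    EOF
    # Expect: DONE
    kubectl get pod termination-path-demo \
      -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
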
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:25:50.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3140" for this suite.
Feb  7 14:25:56.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:25:56.428: INFO: namespace container-runtime-3140 deletion completed in 6.156620192s

• [SLOW TEST:16.695 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:25:56.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  7 14:25:57.111: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d410dcc1-0925-452c-85c5-93d0fc4554ad" in namespace "downward-api-5589" to be "success or failure"
Feb  7 14:25:57.159: INFO: Pod "downwardapi-volume-d410dcc1-0925-452c-85c5-93d0fc4554ad": Phase="Pending", Reason="", readiness=false. Elapsed: 47.443533ms
Feb  7 14:25:59.166: INFO: Pod "downwardapi-volume-d410dcc1-0925-452c-85c5-93d0fc4554ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055096943s
Feb  7 14:26:01.178: INFO: Pod "downwardapi-volume-d410dcc1-0925-452c-85c5-93d0fc4554ad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067208315s
Feb  7 14:26:03.219: INFO: Pod "downwardapi-volume-d410dcc1-0925-452c-85c5-93d0fc4554ad": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107821102s
Feb  7 14:26:05.235: INFO: Pod "downwardapi-volume-d410dcc1-0925-452c-85c5-93d0fc4554ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.12393566s
STEP: Saw pod success
Feb  7 14:26:05.235: INFO: Pod "downwardapi-volume-d410dcc1-0925-452c-85c5-93d0fc4554ad" satisfied condition "success or failure"
Feb  7 14:26:05.241: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d410dcc1-0925-452c-85c5-93d0fc4554ad container client-container: 
STEP: delete the pod
Feb  7 14:26:05.338: INFO: Waiting for pod downwardapi-volume-d410dcc1-0925-452c-85c5-93d0fc4554ad to disappear
Feb  7 14:26:05.356: INFO: Pod downwardapi-volume-d410dcc1-0925-452c-85c5-93d0fc4554ad no longer exists
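
Unlike the env-var variant earlier, this test projects the CPU request through a downwardAPI volume, where resourceFieldRef requires an explicit containerName and accepts a divisor; a sketch (names and values are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox:1.29
        command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
        resources:
          requests:
            cpu: 250m
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m    # file contains "250"
    EOF
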
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:26:05.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5589" for this suite.
Feb  7 14:26:11.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:26:11.505: INFO: namespace downward-api-5589 deletion completed in 6.139732863s

• [SLOW TEST:15.077 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:26:11.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb  7 14:26:11.579: INFO: PodSpec: initContainers in spec.initContainers
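On a RestartNever pod the init containers run to completion, in order, exactly once before the app container starts; a minimal sketch of the PodSpec shape the test constructs (names are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-demo
    spec:
      restartPolicy: Never
      initContainers:
      - name: init1
        image: busybox:1.29
        command: ["true"]
      - name: init2
        image: busybox:1.29
        command: ["true"]
      containers:
      - name: run1
        image: busybox:1.29
        command: ["true"]
    EOF
    # Both init container statuses should report a terminated reason of Completed
    kubectl get pod init-demo \
      -o jsonpath='{.status.initContainerStatuses[*].state.terminated.reason}'
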
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:26:24.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6696" for this suite.
Feb  7 14:26:30.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:26:30.804: INFO: namespace init-container-6696 deletion completed in 6.135714223s

• [SLOW TEST:19.298 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
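The single log line above ("PodSpec: initContainers in spec.initContainers") is the test creating a RestartNever pod whose init containers must each run to completion, in order, before the regular container starts. A minimal sketch of such a pod, assuming v1.15-era client-go and placeholder names:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "init-demo"},
            Spec: corev1.PodSpec{
                // On RestartNever, a failed init container leaves the pod Failed for good.
                RestartPolicy: corev1.RestartPolicyNever,
                // Init containers run serially, each to completion, before the app container.
                InitContainers: []corev1.Container{
                    {Name: "init1", Image: "busybox:1.29", Command: []string{"true"}},
                    {Name: "init2", Image: "busybox:1.29", Command: []string{"true"}},
                },
                Containers: []corev1.Container{
                    {Name: "run1", Image: "busybox:1.29", Command: []string{"sh", "-c", "echo done"}},
                },
            },
        }
        if _, err := clientset.CoreV1().Pods("default").Create(pod); err != nil {
            panic(err)
        }
    }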
SSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:26:30.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb  7 14:26:30.968: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-899,SelfLink:/api/v1/namespaces/watch-899/configmaps/e2e-watch-test-watch-closed,UID:96510d58-5f57-4424-9249-ace181b46485,ResourceVersion:23454722,Generation:0,CreationTimestamp:2020-02-07 14:26:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  7 14:26:30.968: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-899,SelfLink:/api/v1/namespaces/watch-899/configmaps/e2e-watch-test-watch-closed,UID:96510d58-5f57-4424-9249-ace181b46485,ResourceVersion:23454723,Generation:0,CreationTimestamp:2020-02-07 14:26:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb  7 14:26:31.002: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-899,SelfLink:/api/v1/namespaces/watch-899/configmaps/e2e-watch-test-watch-closed,UID:96510d58-5f57-4424-9249-ace181b46485,ResourceVersion:23454724,Generation:0,CreationTimestamp:2020-02-07 14:26:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  7 14:26:31.003: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-899,SelfLink:/api/v1/namespaces/watch-899/configmaps/e2e-watch-test-watch-closed,UID:96510d58-5f57-4424-9249-ace181b46485,ResourceVersion:23454725,Generation:0,CreationTimestamp:2020-02-07 14:26:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:26:31.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-899" for this suite.
Feb  7 14:26:37.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:26:37.208: INFO: namespace watch-899 deletion completed in 6.196585098s

• [SLOW TEST:6.403 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
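The resume step above is the core of the technique: a new watch opened with the resourceVersion of the last event the closed watch delivered replays exactly the changes made while no watch was running (the "mutation: 2" MODIFIED, then the DELETED). A minimal client-go sketch of that resume, assuming the v1.15-era Watch signature (no context argument) and this run's values as placeholders:

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // Resume from the resourceVersion carried by the last event the first
        // watch delivered; the server replays every change after that point.
        lastRV := "23454723" // placeholder: from the MODIFIED event above
        w, err := clientset.CoreV1().ConfigMaps("watch-899").Watch(metav1.ListOptions{
            LabelSelector:   "watch-this-configmap=watch-closed-and-restarted",
            ResourceVersion: lastRV,
        })
        if err != nil {
            panic(err)
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            fmt.Println("Got :", ev.Type) // expects MODIFIED (mutation: 2), then DELETED
        }
    }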
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:26:37.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  7 14:26:37.360: INFO: Create a RollingUpdate DaemonSet
Feb  7 14:26:37.375: INFO: Check that daemon pods launch on every node of the cluster
Feb  7 14:26:37.474: INFO: Number of nodes with available pods: 0
Feb  7 14:26:37.474: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:26:38.504: INFO: Number of nodes with available pods: 0
Feb  7 14:26:38.504: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:26:39.492: INFO: Number of nodes with available pods: 0
Feb  7 14:26:39.492: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:26:40.588: INFO: Number of nodes with available pods: 0
Feb  7 14:26:40.588: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:26:41.499: INFO: Number of nodes with available pods: 0
Feb  7 14:26:41.499: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:26:42.886: INFO: Number of nodes with available pods: 0
Feb  7 14:26:42.886: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:26:43.490: INFO: Number of nodes with available pods: 0
Feb  7 14:26:43.490: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:26:44.881: INFO: Number of nodes with available pods: 0
Feb  7 14:26:44.881: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:26:45.485: INFO: Number of nodes with available pods: 0
Feb  7 14:26:45.485: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:26:46.515: INFO: Number of nodes with available pods: 0
Feb  7 14:26:46.515: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:26:47.491: INFO: Number of nodes with available pods: 1
Feb  7 14:26:47.492: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:26:48.494: INFO: Number of nodes with available pods: 2
Feb  7 14:26:48.495: INFO: Number of running nodes: 2, number of available pods: 2
Feb  7 14:26:48.495: INFO: Update the DaemonSet to trigger a rollout
Feb  7 14:26:48.510: INFO: Updating DaemonSet daemon-set
Feb  7 14:27:06.628: INFO: Roll back the DaemonSet before rollout is complete
Feb  7 14:27:06.635: INFO: Updating DaemonSet daemon-set
Feb  7 14:27:06.635: INFO: Make sure DaemonSet rollback is complete
Feb  7 14:27:06.669: INFO: Wrong image for pod: daemon-set-rphvm. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  7 14:27:06.669: INFO: Pod daemon-set-rphvm is not available
Feb  7 14:27:07.696: INFO: Wrong image for pod: daemon-set-rphvm. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  7 14:27:07.696: INFO: Pod daemon-set-rphvm is not available
Feb  7 14:27:08.715: INFO: Wrong image for pod: daemon-set-rphvm. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  7 14:27:08.715: INFO: Pod daemon-set-rphvm is not available
Feb  7 14:27:09.693: INFO: Wrong image for pod: daemon-set-rphvm. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  7 14:27:09.693: INFO: Pod daemon-set-rphvm is not available
Feb  7 14:27:10.692: INFO: Wrong image for pod: daemon-set-rphvm. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  7 14:27:10.692: INFO: Pod daemon-set-rphvm is not available
Feb  7 14:27:11.692: INFO: Wrong image for pod: daemon-set-rphvm. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  7 14:27:11.692: INFO: Pod daemon-set-rphvm is not available
Feb  7 14:27:12.700: INFO: Wrong image for pod: daemon-set-rphvm. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  7 14:27:12.700: INFO: Pod daemon-set-rphvm is not available
Feb  7 14:27:13.699: INFO: Wrong image for pod: daemon-set-rphvm. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  7 14:27:13.699: INFO: Pod daemon-set-rphvm is not available
Feb  7 14:27:14.692: INFO: Wrong image for pod: daemon-set-rphvm. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  7 14:27:14.693: INFO: Pod daemon-set-rphvm is not available
Feb  7 14:27:15.692: INFO: Wrong image for pod: daemon-set-rphvm. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  7 14:27:15.692: INFO: Pod daemon-set-rphvm is not available
Feb  7 14:27:16.785: INFO: Pod daemon-set-cxq9r is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6107, will wait for the garbage collector to delete the pods
Feb  7 14:27:16.916: INFO: Deleting DaemonSet.extensions daemon-set took: 54.885307ms
Feb  7 14:27:17.216: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.266263ms
Feb  7 14:27:27.951: INFO: Number of nodes with available pods: 0
Feb  7 14:27:27.951: INFO: Number of running nodes: 0, number of available pods: 0
Feb  7 14:27:27.958: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6107/daemonsets","resourceVersion":"23454878"},"items":null}

Feb  7 14:27:27.962: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6107/pods","resourceVersion":"23454878"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:27:27.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6107" for this suite.
Feb  7 14:27:34.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:27:34.089: INFO: namespace daemonsets-6107 deletion completed in 6.109659528s

• [SLOW TEST:56.881 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
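Rolling back while the RollingUpdate is still in flight is what keeps restarts to a minimum: the pod on the second node was never replaced with foo:non-existent and keeps running, so only daemon-set-rphvm is recreated. The test performs the rollback by re-applying the previous template (the CLI equivalent is kubectl rollout undo daemonset/daemon-set); a minimal client-go sketch of that step, assuming the v1.15-era API:

    package main

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        dsClient := clientset.AppsV1().DaemonSets("daemonsets-6107")
        ds, err := dsClient.Get("daemon-set", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Restore the image the RollingUpdate started from; the controller treats
        // this as a rollback and leaves the never-replaced pods in place.
        ds.Spec.Template.Spec.Containers[0].Image = "docker.io/library/nginx:1.14-alpine"
        if _, err := dsClient.Update(ds); err != nil {
            panic(err) // a real caller would retry on update conflicts
        }
    }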
SSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:27:34.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:27:42.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1762" for this suite.
Feb  7 14:27:48.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:27:48.481: INFO: namespace emptydir-wrapper-1762 deletion completed in 6.184383623s

• [SLOW TEST:14.392 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
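The "should not conflict" check above creates a secret and a configmap, mounts both into one pod (each of these volume types wraps an emptyDir internally, which is where a conflict could historically arise), waits for the pod to run, then cleans all three up in the order logged. A minimal sketch of such a pod, assuming v1.15-era client-go and that the hypothetical wrapper-secret and wrapper-configmap objects already exist:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "wrapper-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "wrapper",
                    Image:   "busybox:1.29",
                    Command: []string{"sh", "-c", "ls /etc/secret-vol /etc/cm-vol"},
                    VolumeMounts: []corev1.VolumeMount{
                        {Name: "secret-vol", MountPath: "/etc/secret-vol"},
                        {Name: "cm-vol", MountPath: "/etc/cm-vol"},
                    },
                }},
                // Two wrapped volume types side by side; the test asserts they coexist.
                Volumes: []corev1.Volume{
                    {Name: "secret-vol", VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{SecretName: "wrapper-secret"}}},
                    {Name: "cm-vol", VolumeSource: corev1.VolumeSource{
                        ConfigMap: &corev1.ConfigMapVolumeSource{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "wrapper-configmap"}}}},
                },
            },
        }
        if _, err := clientset.CoreV1().Pods("default").Create(pod); err != nil {
            panic(err)
        }
    }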
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:27:48.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb  7 14:27:48.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-307'
Feb  7 14:27:48.948: INFO: stderr: ""
Feb  7 14:27:48.948: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb  7 14:27:49.961: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 14:27:49.961: INFO: Found 0 / 1
Feb  7 14:27:50.955: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 14:27:50.955: INFO: Found 0 / 1
Feb  7 14:27:51.961: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 14:27:51.961: INFO: Found 0 / 1
Feb  7 14:27:52.958: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 14:27:52.958: INFO: Found 0 / 1
Feb  7 14:27:53.959: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 14:27:53.959: INFO: Found 0 / 1
Feb  7 14:27:54.956: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 14:27:54.956: INFO: Found 0 / 1
Feb  7 14:27:55.955: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 14:27:55.955: INFO: Found 0 / 1
Feb  7 14:27:57.037: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 14:27:57.037: INFO: Found 1 / 1
Feb  7 14:27:57.037: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb  7 14:27:57.046: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 14:27:57.046: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  7 14:27:57.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-cwbpv --namespace=kubectl-307 -p {"metadata":{"annotations":{"x":"y"}}}'
Feb  7 14:27:57.235: INFO: stderr: ""
Feb  7 14:27:57.235: INFO: stdout: "pod/redis-master-cwbpv patched\n"
STEP: checking annotations
Feb  7 14:27:57.244: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 14:27:57.244: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:27:57.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-307" for this suite.
Feb  7 14:28:19.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:28:19.431: INFO: namespace kubectl-307 deletion completed in 22.180933057s

• [SLOW TEST:30.949 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
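The patch the kubectl invocation above sends is a strategic-merge patch, and the same request can be issued from client-go directly. A minimal sketch, assuming the v1.15-era Patch signature (name, patch type, bytes; no context argument) and the pod name from this run:

    package main

    import (
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // The same body kubectl patch sent: merge one annotation into metadata.
        patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
        if _, err := clientset.CoreV1().Pods("kubectl-307").Patch(
            "redis-master-cwbpv", types.StrategicMergePatchType, patch); err != nil {
            panic(err)
        }
    }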
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:28:19.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:28:29.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8754" for this suite.
Feb  7 14:29:21.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:29:21.789: INFO: namespace kubelet-test-8754 deletion completed in 52.160133215s

• [SLOW TEST:62.358 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
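Behind this spec the framework runs a busybox command in a pod, then reads the pod's log back through the API server and compares it with the expected output. A minimal read-the-logs sketch, assuming v1.15-era client-go (where rest.Request.DoRaw takes no context) and a hypothetical pod name:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // Fetch everything the container wrote to stdout/stderr.
        req := clientset.CoreV1().Pods("kubelet-test-8754").GetLogs(
            "busybox-scheduling-demo", // placeholder pod name
            &corev1.PodLogOptions{})
        out, err := req.DoRaw()
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s", out)
    }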
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:29:21.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  7 14:29:21.883: INFO: Creating deployment "nginx-deployment"
Feb  7 14:29:21.893: INFO: Waiting for observed generation 1
Feb  7 14:29:24.531: INFO: Waiting for all required pods to come up
Feb  7 14:29:24.545: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb  7 14:29:53.389: INFO: Waiting for deployment "nginx-deployment" to complete
Feb  7 14:29:53.396: INFO: Updating deployment "nginx-deployment" with a non-existent image
Feb  7 14:29:53.408: INFO: Updating deployment nginx-deployment
Feb  7 14:29:53.408: INFO: Waiting for observed generation 2
Feb  7 14:29:56.740: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb  7 14:29:56.768: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb  7 14:29:57.165: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb  7 14:29:57.190: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb  7 14:29:57.190: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb  7 14:29:57.240: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb  7 14:29:57.249: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Feb  7 14:29:57.249: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Feb  7 14:29:57.334: INFO: Updating deployment nginx-deployment
Feb  7 14:29:57.335: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Feb  7 14:29:58.655: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb  7 14:29:58.742: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
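The 20/13 split verified above is proportional scaling at work: with maxSurge: 3 the deployment may run up to 33 pods, and the 20 extra pods from the 10-to-30 scale-up are divided between the two replicasets in proportion to their current sizes (8 and 5). A back-of-envelope sketch of that arithmetic; the rounding rule here is an assumption that happens to reproduce the logged numbers, while the real logic lives in the deployment controller:

    package main

    import "fmt"

    func main() {
        oldRS, newRS := 8, 5 // .spec.replicas of the two replicasets before the scale-up
        allowed := 30 + 3    // new .spec.replicas + maxSurge

        total := oldRS + newRS
        delta := allowed - total // 33 - 13 = 20 extra pods to hand out

        addOld := (delta*oldRS + total/2) / total // round(20 * 8 / 13) = 12
        addNew := delta - addOld                  // the rest goes to the other replicaset: 8

        fmt.Println(oldRS+addOld, newRS+addNew) // 20 13 — the values the test verifies
    }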
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb  7 14:30:04.471: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-5337,SelfLink:/apis/apps/v1/namespaces/deployment-5337/deployments/nginx-deployment,UID:691d82da-9543-425c-906d-f8fdceba5a33,ResourceVersion:23455417,Generation:3,CreationTimestamp:2020-02-07 14:29:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-02-07 14:29:58 +0000 UTC 2020-02-07 14:29:58 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-07 14:29:59 +0000 UTC 2020-02-07 14:29:21 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Feb  7 14:30:07.105: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-5337,SelfLink:/apis/apps/v1/namespaces/deployment-5337/replicasets/nginx-deployment-55fb7cb77f,UID:78112f44-6082-4a20-8a91-41147414bc7d,ResourceVersion:23455411,Generation:3,CreationTimestamp:2020-02-07 14:29:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 691d82da-9543-425c-906d-f8fdceba5a33 0xc0022fac47 0xc0022fac48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  7 14:30:07.105: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Feb  7 14:30:07.105: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-5337,SelfLink:/apis/apps/v1/namespaces/deployment-5337/replicasets/nginx-deployment-7b8c6f4498,UID:c6eae662-41cd-4055-9bce-5f74915aa344,ResourceVersion:23455413,Generation:3,CreationTimestamp:2020-02-07 14:29:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 691d82da-9543-425c-906d-f8fdceba5a33 0xc0022fad37 0xc0022fad38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Feb  7 14:30:07.749: INFO: Pod "nginx-deployment-55fb7cb77f-4n5x7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4n5x7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5337,SelfLink:/api/v1/namespaces/deployment-5337/pods/nginx-deployment-55fb7cb77f-4n5x7,UID:64259a5a-554e-4ae1-a70f-a70863ebf8b5,ResourceVersion:23455409,Generation:0,CreationTimestamp:2020-02-07 14:29:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 78112f44-6082-4a20-8a91-41147414bc7d 0xc0022fb6d7 0xc0022fb6d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5k4mn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5k4mn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5k4mn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022fb750} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022fb770}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:59 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 14:30:07.750: INFO: Pod "nginx-deployment-55fb7cb77f-5nzx8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5nzx8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5337,SelfLink:/api/v1/namespaces/deployment-5337/pods/nginx-deployment-55fb7cb77f-5nzx8,UID:15971fdf-5a07-42ce-a812-d43be8a3c9e1,ResourceVersion:23455387,Generation:0,CreationTimestamp:2020-02-07 14:29:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 78112f44-6082-4a20-8a91-41147414bc7d 0xc0022fb7f7 0xc0022fb7f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5k4mn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5k4mn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5k4mn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022fb880} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022fb8a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:53 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-07 14:29:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 14:30:07.750: INFO: Pod "nginx-deployment-55fb7cb77f-6rzjq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6rzjq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5337,SelfLink:/api/v1/namespaces/deployment-5337/pods/nginx-deployment-55fb7cb77f-6rzjq,UID:61d8dcfe-523e-4b81-995c-94167e6502bc,ResourceVersion:23455389,Generation:0,CreationTimestamp:2020-02-07 14:29:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 78112f44-6082-4a20-8a91-41147414bc7d 0xc0022fb977 0xc0022fb978}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5k4mn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5k4mn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5k4mn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022fba00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022fba40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:58 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 14:30:07.750: INFO: Pod "nginx-deployment-55fb7cb77f-7rg25" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7rg25,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5337,SelfLink:/api/v1/namespaces/deployment-5337/pods/nginx-deployment-55fb7cb77f-7rg25,UID:84cb2152-9e57-4ced-aa61-030f057bf7d3,ResourceVersion:23455351,Generation:0,CreationTimestamp:2020-02-07 14:29:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 78112f44-6082-4a20-8a91-41147414bc7d 0xc0022fbad7 0xc0022fbad8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5k4mn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5k4mn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5k4mn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022fbb40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022fbb60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:53 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-07 14:29:56 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 14:30:07.751: INFO: Pod "nginx-deployment-55fb7cb77f-cgq2t" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cgq2t,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5337,SelfLink:/api/v1/namespaces/deployment-5337/pods/nginx-deployment-55fb7cb77f-cgq2t,UID:b25c70e8-e6a2-4864-89c7-eed768836712,ResourceVersion:23455391,Generation:0,CreationTimestamp:2020-02-07 14:29:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 78112f44-6082-4a20-8a91-41147414bc7d 0xc0022fbc57 0xc0022fbc58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5k4mn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5k4mn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5k4mn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022fbcc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022fbcf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:58 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 14:30:07.751: INFO: Pod "nginx-deployment-55fb7cb77f-gb6pd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gb6pd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5337,SelfLink:/api/v1/namespaces/deployment-5337/pods/nginx-deployment-55fb7cb77f-gb6pd,UID:7290a7f9-6850-4395-9e4e-22021bb48afd,ResourceVersion:23455349,Generation:0,CreationTimestamp:2020-02-07 14:29:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 78112f44-6082-4a20-8a91-41147414bc7d 0xc0022fbda7 0xc0022fbda8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5k4mn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5k4mn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5k4mn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022fbe20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022fbe40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:53 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-07 14:29:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 14:30:07.751: INFO: Pod "nginx-deployment-55fb7cb77f-m29cw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-m29cw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5337,SelfLink:/api/v1/namespaces/deployment-5337/pods/nginx-deployment-55fb7cb77f-m29cw,UID:39d4ed93-b84d-438d-b73c-e073f043a942,ResourceVersion:23455383,Generation:0,CreationTimestamp:2020-02-07 14:29:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 78112f44-6082-4a20-8a91-41147414bc7d 0xc0022fbf17 0xc0022fbf18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5k4mn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5k4mn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5k4mn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022fbfa0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022fbfc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:58 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 14:30:07.751: INFO: Pod "nginx-deployment-55fb7cb77f-mjlnd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mjlnd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5337,SelfLink:/api/v1/namespaces/deployment-5337/pods/nginx-deployment-55fb7cb77f-mjlnd,UID:a3a31af6-8ab2-4b6d-93cb-74fa1ac2b5ff,ResourceVersion:23455335,Generation:0,CreationTimestamp:2020-02-07 14:29:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 78112f44-6082-4a20-8a91-41147414bc7d 0xc002ef4047 0xc002ef4048}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5k4mn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5k4mn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5k4mn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ef40c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ef40e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:53 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-07 14:29:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 14:30:07.751: INFO: Pod "nginx-deployment-55fb7cb77f-nj7xn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nj7xn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5337,SelfLink:/api/v1/namespaces/deployment-5337/pods/nginx-deployment-55fb7cb77f-nj7xn,UID:a766ab72-c4a2-45a4-be3d-00f97b3badf0,ResourceVersion:23455390,Generation:0,CreationTimestamp:2020-02-07 14:29:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 78112f44-6082-4a20-8a91-41147414bc7d 0xc002ef41b7 0xc002ef41b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5k4mn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5k4mn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5k4mn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ef4230} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ef4250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:58 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 14:30:07.752: INFO: Pod "nginx-deployment-55fb7cb77f-pkbw4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-pkbw4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5337,SelfLink:/api/v1/namespaces/deployment-5337/pods/nginx-deployment-55fb7cb77f-pkbw4,UID:71ca0603-ffb6-498b-ac2d-bd32eedd1ae4,ResourceVersion:23455393,Generation:0,CreationTimestamp:2020-02-07 14:29:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 78112f44-6082-4a20-8a91-41147414bc7d 0xc002ef42d7 0xc002ef42d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5k4mn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5k4mn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5k4mn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ef4370} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ef4390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:58 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 14:30:07.752: INFO: Pod "nginx-deployment-55fb7cb77f-qzl29" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-qzl29,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5337,SelfLink:/api/v1/namespaces/deployment-5337/pods/nginx-deployment-55fb7cb77f-qzl29,UID:26236c91-e992-4537-9e2a-1d998dc3ca88,ResourceVersion:23455327,Generation:0,CreationTimestamp:2020-02-07 14:29:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 78112f44-6082-4a20-8a91-41147414bc7d 0xc002ef4417 0xc002ef4418}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5k4mn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5k4mn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5k4mn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ef4480} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ef44a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:53 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-07 14:29:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 14:30:07.752: INFO: Pod "nginx-deployment-55fb7cb77f-wr4sp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wr4sp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5337,SelfLink:/api/v1/namespaces/deployment-5337/pods/nginx-deployment-55fb7cb77f-wr4sp,UID:e864c989-3407-4c45-917b-49e21ac14f94,ResourceVersion:23455388,Generation:0,CreationTimestamp:2020-02-07 14:29:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 78112f44-6082-4a20-8a91-41147414bc7d 0xc002ef4577 0xc002ef4578}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5k4mn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5k4mn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5k4mn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ef45e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ef4600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:59 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 14:30:07.752: INFO: Pod "nginx-deployment-55fb7cb77f-xdpdr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xdpdr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5337,SelfLink:/api/v1/namespaces/deployment-5337/pods/nginx-deployment-55fb7cb77f-xdpdr,UID:2c2fb762-7be4-4393-86c8-c138dbcb38a5,ResourceVersion:23455415,Generation:0,CreationTimestamp:2020-02-07 14:29:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 78112f44-6082-4a20-8a91-41147414bc7d 0xc002ef4687 0xc002ef4688}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5k4mn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5k4mn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5k4mn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ef46f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ef4710}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-07 14:29:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
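The dumps above and below come from two ReplicaSets of the same Deployment: nginx-deployment-55fb7cb77f, whose pod template references the image nginx:404 (an apparently nonexistent tag, so none of its pods ever becomes available), and nginx-deployment-7b8c6f4498, whose pods run docker.io/library/nginx:1.14-alpine and do come up. The framework's "is available" / "is not available" verdicts reflect the standard Kubernetes availability rule: a pod counts as available once it is Running and its Ready condition has been True for at least minReadySeconds. The following is a minimal, self-contained Go sketch of that rule, not the e2e framework's own code; the types are trimmed stand-ins for the real k8s.io/api structs, and the field values are transcribed from the dumps in this log.

package main

import (
	"fmt"
	"time"
)

// condition is a trimmed stand-in for corev1.PodCondition.
type condition struct {
	Type               string
	Status             string
	LastTransitionTime time.Time
}

// pod is a trimmed stand-in for corev1.Pod.
type pod struct {
	Name       string
	Phase      string
	Conditions []condition
}

// isAvailable mirrors the deployment controller's notion of availability:
// the pod must be Running and its Ready condition must have been True for
// at least minReadySeconds.
func isAvailable(p pod, minReadySeconds time.Duration, now time.Time) bool {
	if p.Phase != "Running" {
		return false
	}
	for _, c := range p.Conditions {
		if c.Type == "Ready" && c.Status == "True" {
			return now.Sub(c.LastTransitionTime) >= minReadySeconds
		}
	}
	return false
}

func main() {
	// Timestamps taken from the log: verdicts printed at 14:30:07,
	// pod dfthx Ready since 14:29:51.
	now := time.Date(2020, 2, 7, 14, 30, 7, 0, time.UTC)
	ready := time.Date(2020, 2, 7, 14, 29, 51, 0, time.UTC)
	pods := []pod{
		{Name: "nginx-deployment-55fb7cb77f-xdpdr", Phase: "Pending"},
		{Name: "nginx-deployment-7b8c6f4498-dfthx", Phase: "Running",
			Conditions: []condition{{Type: "Ready", Status: "True", LastTransitionTime: ready}}},
	}
	for _, p := range pods {
		fmt.Printf("%s available=%v\n", p.Name, isAvailable(p, 0, now))
	}
}

Run against the two transcribed pods, the sketch reproduces the log's verdicts: the Pending nginx:404 pod is not available, the Running and Ready 1.14-alpine pod is.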
Feb  7 14:30:07.752: INFO: Pod "nginx-deployment-7b8c6f4498-2tvvj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2tvvj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5337,SelfLink:/api/v1/namespaces/deployment-5337/pods/nginx-deployment-7b8c6f4498-2tvvj,UID:a439b018-47a7-49fc-9f08-c1af4c639ad1,ResourceVersion:23455423,Generation:0,CreationTimestamp:2020-02-07 14:29:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c6eae662-41cd-4055-9bce-5f74915aa344 0xc002ef47e7 0xc002ef47e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5k4mn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5k4mn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5k4mn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ef4860} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ef4880}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-07 14:29:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 14:30:07.753: INFO: Pod "nginx-deployment-7b8c6f4498-49nwp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-49nwp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5337,SelfLink:/api/v1/namespaces/deployment-5337/pods/nginx-deployment-7b8c6f4498-49nwp,UID:31dd82a9-32a4-4ba0-8002-0807f52d8f66,ResourceVersion:23455405,Generation:0,CreationTimestamp:2020-02-07 14:29:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c6eae662-41cd-4055-9bce-5f74915aa344 0xc002ef4947 0xc002ef4948}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5k4mn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5k4mn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5k4mn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ef49b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ef49d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:59 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 14:30:07.753: INFO: Pod "nginx-deployment-7b8c6f4498-6sgnq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6sgnq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5337,SelfLink:/api/v1/namespaces/deployment-5337/pods/nginx-deployment-7b8c6f4498-6sgnq,UID:8a417978-94a2-4fff-81ff-449fdf9163a4,ResourceVersion:23455407,Generation:0,CreationTimestamp:2020-02-07 14:29:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c6eae662-41cd-4055-9bce-5f74915aa344 0xc002ef4a57 0xc002ef4a58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5k4mn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5k4mn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5k4mn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ef4ad0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ef4af0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:59 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 14:30:07.753: INFO: Pod "nginx-deployment-7b8c6f4498-758kt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-758kt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5337,SelfLink:/api/v1/namespaces/deployment-5337/pods/nginx-deployment-7b8c6f4498-758kt,UID:a78a03df-d4b3-42b1-8495-6cfc14f81525,ResourceVersion:23455435,Generation:0,CreationTimestamp:2020-02-07 14:29:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c6eae662-41cd-4055-9bce-5f74915aa344 0xc002ef4b77 0xc002ef4b78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5k4mn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5k4mn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5k4mn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ef4bf0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ef4c10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:30:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:30:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:30:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-07 14:30:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 14:30:07.753: INFO: Pod "nginx-deployment-7b8c6f4498-982t7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-982t7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5337,SelfLink:/api/v1/namespaces/deployment-5337/pods/nginx-deployment-7b8c6f4498-982t7,UID:9696c39d-7dbb-4322-9d1e-9179f7e15a15,ResourceVersion:23455392,Generation:0,CreationTimestamp:2020-02-07 14:29:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c6eae662-41cd-4055-9bce-5f74915aa344 0xc002ef4cd7 0xc002ef4cd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5k4mn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5k4mn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5k4mn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ef4d50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ef4d70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:58 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 14:30:07.754: INFO: Pod "nginx-deployment-7b8c6f4498-9r95p" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9r95p,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5337,SelfLink:/api/v1/namespaces/deployment-5337/pods/nginx-deployment-7b8c6f4498-9r95p,UID:44908246-26f5-40bb-83a9-11b05205d70b,ResourceVersion:23455397,Generation:0,CreationTimestamp:2020-02-07 14:29:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c6eae662-41cd-4055-9bce-5f74915aa344 0xc002ef4df7 0xc002ef4df8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5k4mn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5k4mn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5k4mn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ef4e60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ef4e80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:58 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 14:30:07.754: INFO: Pod "nginx-deployment-7b8c6f4498-d69tn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-d69tn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5337,SelfLink:/api/v1/namespaces/deployment-5337/pods/nginx-deployment-7b8c6f4498-d69tn,UID:e163e7fd-4464-48e5-824f-58778b19bdc2,ResourceVersion:23455382,Generation:0,CreationTimestamp:2020-02-07 14:29:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c6eae662-41cd-4055-9bce-5f74915aa344 0xc002ef4f07 0xc002ef4f08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5k4mn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5k4mn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5k4mn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ef4f80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ef4fa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:58 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 14:30:07.754: INFO: Pod "nginx-deployment-7b8c6f4498-dfthx" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dfthx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5337,SelfLink:/api/v1/namespaces/deployment-5337/pods/nginx-deployment-7b8c6f4498-dfthx,UID:f7457a40-9c35-4382-9932-70c2e1ed22a0,ResourceVersion:23455275,Generation:0,CreationTimestamp:2020-02-07 14:29:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c6eae662-41cd-4055-9bce-5f74915aa344 0xc002ef5027 0xc002ef5028}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5k4mn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5k4mn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5k4mn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ef50a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ef50c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:22 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:51 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-07 14:29:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-07 14:29:45 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://5d0f9e885b5ec1f44d8f0296a60e1497acb18cab35928c400b9b777220164e3e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 14:30:07.754: INFO: Pod "nginx-deployment-7b8c6f4498-dh6t6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dh6t6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5337,SelfLink:/api/v1/namespaces/deployment-5337/pods/nginx-deployment-7b8c6f4498-dh6t6,UID:b9cd1ac2-05fa-4d0c-872c-41b176665ad1,ResourceVersion:23455279,Generation:0,CreationTimestamp:2020-02-07 14:29:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c6eae662-41cd-4055-9bce-5f74915aa344 0xc002ef5197 0xc002ef5198}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5k4mn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5k4mn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5k4mn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ef5210} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ef5230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:22 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:51 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:22 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-02-07 14:29:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-07 14:29:50 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://1525be2e195292dabca32753f445eeb9f2896a5e8e72ce905ff28d90ef65e64f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 14:30:07.754: INFO: Pod "nginx-deployment-7b8c6f4498-hkcc2" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hkcc2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5337,SelfLink:/api/v1/namespaces/deployment-5337/pods/nginx-deployment-7b8c6f4498-hkcc2,UID:437219b0-4b1e-4ee1-ac4e-c8b7d0c9a3d5,ResourceVersion:23455242,Generation:0,CreationTimestamp:2020-02-07 14:29:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c6eae662-41cd-4055-9bce-5f74915aa344 0xc002ef5307 0xc002ef5308}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5k4mn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5k4mn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5k4mn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ef5370} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ef5390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:22 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:44 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:44 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-02-07 14:29:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-07 14:29:40 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ac58478324d7b94e500a0093f92b4ef249abdf4c1fde7d781bec08e87fc6c66b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 14:30:07.755: INFO: Pod "nginx-deployment-7b8c6f4498-mgdx6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mgdx6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5337,SelfLink:/api/v1/namespaces/deployment-5337/pods/nginx-deployment-7b8c6f4498-mgdx6,UID:f82e8e5f-7016-42ec-8646-7897a98ae78a,ResourceVersion:23455254,Generation:0,CreationTimestamp:2020-02-07 14:29:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c6eae662-41cd-4055-9bce-5f74915aa344 0xc002ef5467 0xc002ef5468}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5k4mn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5k4mn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5k4mn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ef54e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ef5500}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:22 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:46 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:46 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2020-02-07 14:29:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-07 14:29:44 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://891da941b41c1fe8f78da905039917344d8f06d2022deb4e09d60e43b59cfdee}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 14:30:07.755: INFO: Pod "nginx-deployment-7b8c6f4498-nf8fv" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nf8fv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5337,SelfLink:/api/v1/namespaces/deployment-5337/pods/nginx-deployment-7b8c6f4498-nf8fv,UID:6f91e46b-981e-49e4-82dd-ea105352aa4d,ResourceVersion:23455282,Generation:0,CreationTimestamp:2020-02-07 14:29:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c6eae662-41cd-4055-9bce-5f74915aa344 0xc002ef55d7 0xc002ef55d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5k4mn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5k4mn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5k4mn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ef5650} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ef5670}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:22 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:51 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2020-02-07 14:29:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-07 14:29:50 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://81d3171fd49a4ef17b81bee9edd217ffda2968a51e198a362885030d2a581ac3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 14:30:07.755: INFO: Pod "nginx-deployment-7b8c6f4498-phfsk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-phfsk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5337,SelfLink:/api/v1/namespaces/deployment-5337/pods/nginx-deployment-7b8c6f4498-phfsk,UID:f95843d7-ecf1-4c46-9001-cab6c85203dd,ResourceVersion:23455421,Generation:0,CreationTimestamp:2020-02-07 14:29:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c6eae662-41cd-4055-9bce-5f74915aa344 0xc002ef5747 0xc002ef5748}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5k4mn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5k4mn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5k4mn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ef57b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ef57d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-07 14:29:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 14:30:07.755: INFO: Pod "nginx-deployment-7b8c6f4498-qcj26" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qcj26,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5337,SelfLink:/api/v1/namespaces/deployment-5337/pods/nginx-deployment-7b8c6f4498-qcj26,UID:282bf98a-d11c-43c0-b7b6-7b11b10d3c59,ResourceVersion:23455434,Generation:0,CreationTimestamp:2020-02-07 14:29:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c6eae662-41cd-4055-9bce-5f74915aa344 0xc002ef5897 0xc002ef5898}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5k4mn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5k4mn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5k4mn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ef5900} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ef5920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-07 14:29:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 14:30:07.755: INFO: Pod "nginx-deployment-7b8c6f4498-r9q29" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-r9q29,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5337,SelfLink:/api/v1/namespaces/deployment-5337/pods/nginx-deployment-7b8c6f4498-r9q29,UID:7b371c16-db78-4cd0-a73b-e687914b663e,ResourceVersion:23455260,Generation:0,CreationTimestamp:2020-02-07 14:29:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c6eae662-41cd-4055-9bce-5f74915aa344 0xc002ef59e7 0xc002ef59e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5k4mn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5k4mn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5k4mn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ef5a50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ef5a70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:22 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:46 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:46 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:22 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-02-07 14:29:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-07 14:29:45 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8b3f89bbebce6ca9c975003fadf0679cf296c6decfc046d961b4e9313a6c6531}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 14:30:07.755: INFO: Pod "nginx-deployment-7b8c6f4498-rsdp6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rsdp6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5337,SelfLink:/api/v1/namespaces/deployment-5337/pods/nginx-deployment-7b8c6f4498-rsdp6,UID:138dfcba-a8b0-4395-bba9-a1a754f5fe55,ResourceVersion:23455291,Generation:0,CreationTimestamp:2020-02-07 14:29:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c6eae662-41cd-4055-9bce-5f74915aa344 0xc002ef5b47 0xc002ef5b48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5k4mn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5k4mn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5k4mn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ef5bc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ef5be0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:22 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.6,StartTime:2020-02-07 14:29:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-07 14:29:50 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://5f774443ff35c8566e03b99f297ceb80d184878bf89c3b0bb3482c449b23a7ae}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 14:30:07.756: INFO: Pod "nginx-deployment-7b8c6f4498-tqjs2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tqjs2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5337,SelfLink:/api/v1/namespaces/deployment-5337/pods/nginx-deployment-7b8c6f4498-tqjs2,UID:72d56c84-e048-4796-8aa9-ef1abe5562c3,ResourceVersion:23455406,Generation:0,CreationTimestamp:2020-02-07 14:29:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c6eae662-41cd-4055-9bce-5f74915aa344 0xc002ef5cb7 0xc002ef5cb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5k4mn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5k4mn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5k4mn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ef5d30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ef5d50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:59 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 14:30:07.756: INFO: Pod "nginx-deployment-7b8c6f4498-v2zbs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-v2zbs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5337,SelfLink:/api/v1/namespaces/deployment-5337/pods/nginx-deployment-7b8c6f4498-v2zbs,UID:17124c8c-e288-45f0-8596-f72a3291647a,ResourceVersion:23455408,Generation:0,CreationTimestamp:2020-02-07 14:29:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c6eae662-41cd-4055-9bce-5f74915aa344 0xc002ef5dd7 0xc002ef5dd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5k4mn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5k4mn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5k4mn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ef5e40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ef5e60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:59 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 14:30:07.756: INFO: Pod "nginx-deployment-7b8c6f4498-wstgw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wstgw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5337,SelfLink:/api/v1/namespaces/deployment-5337/pods/nginx-deployment-7b8c6f4498-wstgw,UID:b07c8a11-b35c-48a7-a5b6-f39f1025033a,ResourceVersion:23455403,Generation:0,CreationTimestamp:2020-02-07 14:29:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c6eae662-41cd-4055-9bce-5f74915aa344 0xc002ef5ee7 0xc002ef5ee8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5k4mn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5k4mn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5k4mn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ef5f50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ef5f70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:59 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 14:30:07.757: INFO: Pod "nginx-deployment-7b8c6f4498-x69k7" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-x69k7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5337,SelfLink:/api/v1/namespaces/deployment-5337/pods/nginx-deployment-7b8c6f4498-x69k7,UID:2f299119-191c-4b3d-8847-59c25cc188f7,ResourceVersion:23455257,Generation:0,CreationTimestamp:2020-02-07 14:29:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c6eae662-41cd-4055-9bce-5f74915aa344 0xc002ef5ff7 0xc002ef5ff8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5k4mn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5k4mn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5k4mn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cf2140} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cf2180}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:22 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:46 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:46 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:29:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-02-07 14:29:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-07 14:29:44 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f74fe2e7006c65bf1d70266ee38973db1046bb73e7fbe0f89aeb62c40984f7e6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:30:07.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5337" for this suite.
Feb  7 14:31:21.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:31:22.475: INFO: namespace deployment-5337 deletion completed in 1m13.78929777s

• [SLOW TEST:120.685 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
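What the proportional-scaling test above exercises: the Deployment is scaled while a rolling update is still in flight, and the controller splits the new replica count across the old and new ReplicaSets in proportion to their current sizes (within maxSurge/maxUnavailable). A minimal client-go sketch of the scale step follows; it assumes the v1.15-era client-go signatures seen in this suite (no context argument on Get/Update), and the target count of 30 is illustrative rather than taken from the log.

    package main

    import (
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the same kubeconfig the suite uses.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        // Scale the Deployment mid-rollout; the controller then distributes
        // the new total across the old and new ReplicaSets proportionally.
        d, err := clientset.AppsV1().Deployments("deployment-5337").Get("nginx-deployment", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        replicas := int32(30) // illustrative target count
        d.Spec.Replicas = &replicas
        if _, err := clientset.AppsV1().Deployments("deployment-5337").Update(d); err != nil {
            log.Fatal(err)
        }
    }

Later sketches in this log reuse the same clientset construction and imports rather than repeating them.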
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:31:22.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-fb82ce6e-2a18-429e-b6d5-a4d0ab6f0aa7
STEP: Creating a pod to test consume secrets
Feb  7 14:31:22.664: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-461c82a3-f82e-4a9b-a476-018e0dbb8b5f" in namespace "projected-4413" to be "success or failure"
Feb  7 14:31:22.684: INFO: Pod "pod-projected-secrets-461c82a3-f82e-4a9b-a476-018e0dbb8b5f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.430711ms
Feb  7 14:31:24.691: INFO: Pod "pod-projected-secrets-461c82a3-f82e-4a9b-a476-018e0dbb8b5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026682791s
Feb  7 14:31:26.701: INFO: Pod "pod-projected-secrets-461c82a3-f82e-4a9b-a476-018e0dbb8b5f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036558804s
Feb  7 14:31:28.991: INFO: Pod "pod-projected-secrets-461c82a3-f82e-4a9b-a476-018e0dbb8b5f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.326484498s
Feb  7 14:31:31.000: INFO: Pod "pod-projected-secrets-461c82a3-f82e-4a9b-a476-018e0dbb8b5f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.335842719s
Feb  7 14:31:33.007: INFO: Pod "pod-projected-secrets-461c82a3-f82e-4a9b-a476-018e0dbb8b5f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.343284939s
Feb  7 14:31:35.013: INFO: Pod "pod-projected-secrets-461c82a3-f82e-4a9b-a476-018e0dbb8b5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.349009021s
STEP: Saw pod success
Feb  7 14:31:35.013: INFO: Pod "pod-projected-secrets-461c82a3-f82e-4a9b-a476-018e0dbb8b5f" satisfied condition "success or failure"
Feb  7 14:31:35.016: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-461c82a3-f82e-4a9b-a476-018e0dbb8b5f container projected-secret-volume-test: 
STEP: delete the pod
Feb  7 14:31:35.257: INFO: Waiting for pod pod-projected-secrets-461c82a3-f82e-4a9b-a476-018e0dbb8b5f to disappear
Feb  7 14:31:35.279: INFO: Pod pod-projected-secrets-461c82a3-f82e-4a9b-a476-018e0dbb8b5f no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:31:35.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4413" for this suite.
Feb  7 14:31:41.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:31:41.587: INFO: namespace projected-4413 deletion completed in 6.29910733s

• [SLOW TEST:19.111 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
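The projected-secret test above mounts a Secret through a projected volume and asserts the files carry the requested defaultMode. A minimal sketch of the pod it creates, assuming illustrative names and a 0400 mode (the log does not show the exact values):

    // Assumes imports: corev1 "k8s.io/api/core/v1", metav1 "k8s.io/apimachinery/pkg/apis/meta/v1".
    mode := int32(0400) // defaultMode under test; the container lists the files to verify perms
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "projected-secret-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        DefaultMode: &mode,
                        Sources: []corev1.VolumeProjection{{
                            Secret: &corev1.SecretProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "projected-secret-volume-test",
                Image:        "docker.io/library/busybox:1.29",
                Command:      []string{"sh", "-c", "ls -l /etc/projected-secret-volume"},
                VolumeMounts: []corev1.VolumeMount{{Name: "projected-secret-volume", MountPath: "/etc/projected-secret-volume"}},
            }},
        },
    }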
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:31:41.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-fc661e3f-6dd0-4d2f-b4c9-84c3704a5504
STEP: Creating a pod to test consume secrets
Feb  7 14:31:41.920: INFO: Waiting up to 5m0s for pod "pod-secrets-a6667bca-620a-44bc-ba2c-dd97e8e2077e" in namespace "secrets-3374" to be "success or failure"
Feb  7 14:31:41.931: INFO: Pod "pod-secrets-a6667bca-620a-44bc-ba2c-dd97e8e2077e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.447361ms
Feb  7 14:31:43.939: INFO: Pod "pod-secrets-a6667bca-620a-44bc-ba2c-dd97e8e2077e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019159798s
Feb  7 14:31:45.950: INFO: Pod "pod-secrets-a6667bca-620a-44bc-ba2c-dd97e8e2077e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029623001s
Feb  7 14:31:47.957: INFO: Pod "pod-secrets-a6667bca-620a-44bc-ba2c-dd97e8e2077e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036863855s
Feb  7 14:31:49.963: INFO: Pod "pod-secrets-a6667bca-620a-44bc-ba2c-dd97e8e2077e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.043121089s
Feb  7 14:31:51.971: INFO: Pod "pod-secrets-a6667bca-620a-44bc-ba2c-dd97e8e2077e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.050624386s
STEP: Saw pod success
Feb  7 14:31:51.971: INFO: Pod "pod-secrets-a6667bca-620a-44bc-ba2c-dd97e8e2077e" satisfied condition "success or failure"
Feb  7 14:31:51.982: INFO: Trying to get logs from node iruya-node pod pod-secrets-a6667bca-620a-44bc-ba2c-dd97e8e2077e container secret-volume-test: 
STEP: delete the pod
Feb  7 14:31:52.221: INFO: Waiting for pod pod-secrets-a6667bca-620a-44bc-ba2c-dd97e8e2077e to disappear
Feb  7 14:31:52.230: INFO: Pod pod-secrets-a6667bca-620a-44bc-ba2c-dd97e8e2077e no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:31:52.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3374" for this suite.
Feb  7 14:31:58.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:31:58.477: INFO: namespace secrets-3374 deletion completed in 6.239844286s

• [SLOW TEST:16.890 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
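The Secrets-with-mappings test above consumes a Secret as a volume but remaps individual keys to chosen file paths via Items. A sketch of the pod shape, with illustrative key and path names:

    // Assumes imports: corev1 "k8s.io/api/core/v1", metav1 "k8s.io/apimachinery/pkg/apis/meta/v1".
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "secret-volume",
                VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{
                        SecretName: "secret-test-map",
                        // Items remaps a key to a custom path inside the mount;
                        // unmapped keys are then omitted from the volume.
                        Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "secret-volume-test",
                Image:        "docker.io/library/busybox:1.29",
                Command:      []string{"sh", "-c", "cat /etc/secret-volume/new-path-data-1"},
                VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
            }},
        },
    }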
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:31:58.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  7 14:31:58.656: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ee17a5c9-55e1-4892-a72c-5bd9dc9c1f2a" in namespace "projected-7902" to be "success or failure"
Feb  7 14:31:58.666: INFO: Pod "downwardapi-volume-ee17a5c9-55e1-4892-a72c-5bd9dc9c1f2a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.354103ms
Feb  7 14:32:00.676: INFO: Pod "downwardapi-volume-ee17a5c9-55e1-4892-a72c-5bd9dc9c1f2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019840369s
Feb  7 14:32:02.690: INFO: Pod "downwardapi-volume-ee17a5c9-55e1-4892-a72c-5bd9dc9c1f2a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03385345s
Feb  7 14:32:04.708: INFO: Pod "downwardapi-volume-ee17a5c9-55e1-4892-a72c-5bd9dc9c1f2a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052242209s
Feb  7 14:32:06.714: INFO: Pod "downwardapi-volume-ee17a5c9-55e1-4892-a72c-5bd9dc9c1f2a": Phase="Running", Reason="", readiness=true. Elapsed: 8.058109756s
Feb  7 14:32:08.734: INFO: Pod "downwardapi-volume-ee17a5c9-55e1-4892-a72c-5bd9dc9c1f2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.078691717s
STEP: Saw pod success
Feb  7 14:32:08.735: INFO: Pod "downwardapi-volume-ee17a5c9-55e1-4892-a72c-5bd9dc9c1f2a" satisfied condition "success or failure"
Feb  7 14:32:08.739: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ee17a5c9-55e1-4892-a72c-5bd9dc9c1f2a container client-container: 
STEP: delete the pod
Feb  7 14:32:08.948: INFO: Waiting for pod downwardapi-volume-ee17a5c9-55e1-4892-a72c-5bd9dc9c1f2a to disappear
Feb  7 14:32:08.956: INFO: Pod downwardapi-volume-ee17a5c9-55e1-4892-a72c-5bd9dc9c1f2a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:32:08.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7902" for this suite.
Feb  7 14:32:14.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:32:15.079: INFO: namespace projected-7902 deletion completed in 6.119052476s

• [SLOW TEST:16.602 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
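The projected downwardAPI test above exposes pod metadata as files through a projected volume and checks DefaultMode on them. A sketch under the same assumptions as the earlier volume examples (container name taken from the log, mode and paths illustrative):

    // Assumes imports: corev1 "k8s.io/api/core/v1", metav1 "k8s.io/apimachinery/pkg/apis/meta/v1".
    mode := int32(0400)
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        DefaultMode: &mode,
                        Sources: []corev1.VolumeProjection{{
                            DownwardAPI: &corev1.DownwardAPIProjection{
                                Items: []corev1.DownwardAPIVolumeFile{{
                                    Path:     "podname",
                                    FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                                }},
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "client-container",
                Image:        "docker.io/library/busybox:1.29",
                Command:      []string{"sh", "-c", "ls -l /etc/podinfo/podname"},
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
        },
    }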
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:32:15.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-fcb2afd2-af16-4be4-9751-df05c41b87ca
STEP: Creating a pod to test consume secrets
Feb  7 14:32:15.221: INFO: Waiting up to 5m0s for pod "pod-secrets-28efcbf2-8df3-42b7-b056-5f711cc6c548" in namespace "secrets-2716" to be "success or failure"
Feb  7 14:32:15.235: INFO: Pod "pod-secrets-28efcbf2-8df3-42b7-b056-5f711cc6c548": Phase="Pending", Reason="", readiness=false. Elapsed: 14.304261ms
Feb  7 14:32:17.245: INFO: Pod "pod-secrets-28efcbf2-8df3-42b7-b056-5f711cc6c548": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023724204s
Feb  7 14:32:19.323: INFO: Pod "pod-secrets-28efcbf2-8df3-42b7-b056-5f711cc6c548": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102100907s
Feb  7 14:32:21.331: INFO: Pod "pod-secrets-28efcbf2-8df3-42b7-b056-5f711cc6c548": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109985468s
Feb  7 14:32:23.339: INFO: Pod "pod-secrets-28efcbf2-8df3-42b7-b056-5f711cc6c548": Phase="Pending", Reason="", readiness=false. Elapsed: 8.118051524s
Feb  7 14:32:25.346: INFO: Pod "pod-secrets-28efcbf2-8df3-42b7-b056-5f711cc6c548": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.124822473s
STEP: Saw pod success
Feb  7 14:32:25.346: INFO: Pod "pod-secrets-28efcbf2-8df3-42b7-b056-5f711cc6c548" satisfied condition "success or failure"
Feb  7 14:32:25.351: INFO: Trying to get logs from node iruya-node pod pod-secrets-28efcbf2-8df3-42b7-b056-5f711cc6c548 container secret-env-test: 
STEP: delete the pod
Feb  7 14:32:25.494: INFO: Waiting for pod pod-secrets-28efcbf2-8df3-42b7-b056-5f711cc6c548 to disappear
Feb  7 14:32:25.516: INFO: Pod pod-secrets-28efcbf2-8df3-42b7-b056-5f711cc6c548 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:32:25.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2716" for this suite.
Feb  7 14:32:31.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:32:31.743: INFO: namespace secrets-2716 deletion completed in 6.219773983s

• [SLOW TEST:16.663 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
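The env-var Secrets test above injects a single Secret key into a container's environment rather than mounting it. A sketch of the relevant pod spec, with illustrative secret and key names (the container name matches the log):

    // Assumes imports: corev1 "k8s.io/api/core/v1", metav1 "k8s.io/apimachinery/pkg/apis/meta/v1".
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-env"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "secret-env-test",
                Image:   "docker.io/library/busybox:1.29",
                Command: []string{"sh", "-c", "env"}, // test greps the output for SECRET_DATA
                Env: []corev1.EnvVar{{
                    Name: "SECRET_DATA",
                    ValueFrom: &corev1.EnvVarSource{
                        SecretKeyRef: &corev1.SecretKeySelector{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
                            Key:                  "data-1",
                        },
                    },
                }},
            }},
        },
    }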
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:32:31.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  7 14:32:31.862: INFO: Creating ReplicaSet my-hostname-basic-29952f7a-fc3f-4846-ba92-349bc1110ff0
Feb  7 14:32:31.877: INFO: Pod name my-hostname-basic-29952f7a-fc3f-4846-ba92-349bc1110ff0: Found 0 pods out of 1
Feb  7 14:32:36.890: INFO: Pod name my-hostname-basic-29952f7a-fc3f-4846-ba92-349bc1110ff0: Found 1 pods out of 1
Feb  7 14:32:36.890: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-29952f7a-fc3f-4846-ba92-349bc1110ff0" is running
Feb  7 14:32:40.903: INFO: Pod "my-hostname-basic-29952f7a-fc3f-4846-ba92-349bc1110ff0-9pwvj" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-07 14:32:31 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-07 14:32:31 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-29952f7a-fc3f-4846-ba92-349bc1110ff0]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-07 14:32:31 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-29952f7a-fc3f-4846-ba92-349bc1110ff0]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-07 14:32:31 +0000 UTC Reason: Message:}])
Feb  7 14:32:40.903: INFO: Trying to dial the pod
Feb  7 14:32:45.938: INFO: Controller my-hostname-basic-29952f7a-fc3f-4846-ba92-349bc1110ff0: Got expected result from replica 1 [my-hostname-basic-29952f7a-fc3f-4846-ba92-349bc1110ff0-9pwvj]: "my-hostname-basic-29952f7a-fc3f-4846-ba92-349bc1110ff0-9pwvj", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:32:45.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1890" for this suite.
Feb  7 14:32:51.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:32:52.080: INFO: namespace replicaset-1890 deletion completed in 6.137067906s

• [SLOW TEST:20.337 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
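The ReplicaSet test above creates one replica of a pod whose container answers HTTP requests with its own hostname, then dials each replica and expects its pod name back. Roughly the object involved; the serve-hostname image, tag, and port are assumptions about this suite's era, not read from the log:

    // Assumes imports: appsv1 "k8s.io/api/apps/v1", corev1 "k8s.io/api/core/v1",
    // metav1 "k8s.io/apimachinery/pkg/apis/meta/v1".
    one := int32(1)
    rs := &appsv1.ReplicaSet{
        ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
        Spec: appsv1.ReplicaSetSpec{
            Replicas: &one,
            Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "my-hostname-basic"}},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "my-hostname-basic"}},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "my-hostname-basic",
                        Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1", // assumed image/tag
                        Ports: []corev1.ContainerPort{{ContainerPort: 9376}},          // serve-hostname's HTTP port
                    }},
                },
            },
        },
    }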
SSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:32:52.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  7 14:32:52.144: INFO: Creating deployment "test-recreate-deployment"
Feb  7 14:32:52.173: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Feb  7 14:32:52.252: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Feb  7 14:32:54.264: INFO: Waiting for deployment "test-recreate-deployment" to complete
Feb  7 14:32:54.275: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716682772, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716682772, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716682772, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716682772, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 14:32:56.281: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716682772, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716682772, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716682772, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716682772, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 14:32:58.279: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716682772, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716682772, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716682772, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716682772, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 14:33:00.281: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716682772, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716682772, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716682772, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716682772, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 14:33:02.283: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb  7 14:33:02.295: INFO: Updating deployment test-recreate-deployment
Feb  7 14:33:02.295: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb  7 14:33:02.748: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-3769,SelfLink:/apis/apps/v1/namespaces/deployment-3769/deployments/test-recreate-deployment,UID:34ece094-a684-4664-87ad-defedc174696,ResourceVersion:23456050,Generation:2,CreationTimestamp:2020-02-07 14:32:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-07 14:33:02 +0000 UTC 2020-02-07 14:33:02 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-07 14:33:02 +0000 UTC 2020-02-07 14:32:52 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Feb  7 14:33:02.752: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-3769,SelfLink:/apis/apps/v1/namespaces/deployment-3769/replicasets/test-recreate-deployment-5c8c9cc69d,UID:757668d2-b63e-4631-8123-b2d0e3af2cd5,ResourceVersion:23456047,Generation:1,CreationTimestamp:2020-02-07 14:33:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 34ece094-a684-4664-87ad-defedc174696 0xc0027d2e87 0xc0027d2e88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  7 14:33:02.752: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb  7 14:33:02.752: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-3769,SelfLink:/apis/apps/v1/namespaces/deployment-3769/replicasets/test-recreate-deployment-6df85df6b9,UID:53a21d92-2fee-4799-97c2-75b140e2ba17,ResourceVersion:23456038,Generation:2,CreationTimestamp:2020-02-07 14:32:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 34ece094-a684-4664-87ad-defedc174696 0xc0027d3017 0xc0027d3018}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  7 14:33:02.756: INFO: Pod "test-recreate-deployment-5c8c9cc69d-l5vnr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-l5vnr,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-3769,SelfLink:/api/v1/namespaces/deployment-3769/pods/test-recreate-deployment-5c8c9cc69d-l5vnr,UID:3a12b471-5f6f-44eb-83ec-d553919778d8,ResourceVersion:23456046,Generation:0,CreationTimestamp:2020-02-07 14:33:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 757668d2-b63e-4631-8123-b2d0e3af2cd5 0xc0032361d7 0xc0032361d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2tw5q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tw5q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2tw5q true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003236250} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003236270}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 14:33:02 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:33:02.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3769" for this suite.
Feb  7 14:33:08.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:33:08.855: INFO: namespace deployment-3769 deletion completed in 6.094798059s

• [SLOW TEST:16.775 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
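The Recreate test above is about the rollout ordering guarantee: with Strategy.Type=Recreate, a template change first scales the old ReplicaSet to zero and only then creates the new one, so old and new pods never run together. The images and labels below come from the dumps above; the rest is a sketch:

    // Assumes imports: appsv1 "k8s.io/api/apps/v1", corev1 "k8s.io/api/core/v1",
    // metav1 "k8s.io/apimachinery/pkg/apis/meta/v1".
    one := int32(1)
    d := &appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
        Spec: appsv1.DeploymentSpec{
            Replicas: &one,
            Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
            Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod-3"}},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod-3"}},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{Name: "redis", Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0"}},
                },
            },
        },
    }
    // The rollout in the log swaps the template image to docker.io/library/nginx:1.14-alpine;
    // the watch then verifies no pod from revision 1 overlaps a pod from revision 2.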
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:33:08.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb  7 14:33:08.995: INFO: Waiting up to 5m0s for pod "downward-api-c513f3a2-a64c-4f93-8ad9-9c25a8e3ab9f" in namespace "downward-api-4479" to be "success or failure"
Feb  7 14:33:09.223: INFO: Pod "downward-api-c513f3a2-a64c-4f93-8ad9-9c25a8e3ab9f": Phase="Pending", Reason="", readiness=false. Elapsed: 227.956586ms
Feb  7 14:33:11.230: INFO: Pod "downward-api-c513f3a2-a64c-4f93-8ad9-9c25a8e3ab9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.23538915s
Feb  7 14:33:13.243: INFO: Pod "downward-api-c513f3a2-a64c-4f93-8ad9-9c25a8e3ab9f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.247987724s
Feb  7 14:33:15.250: INFO: Pod "downward-api-c513f3a2-a64c-4f93-8ad9-9c25a8e3ab9f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.254932609s
Feb  7 14:33:17.258: INFO: Pod "downward-api-c513f3a2-a64c-4f93-8ad9-9c25a8e3ab9f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.263163865s
Feb  7 14:33:19.293: INFO: Pod "downward-api-c513f3a2-a64c-4f93-8ad9-9c25a8e3ab9f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.297729063s
Feb  7 14:33:21.551: INFO: Pod "downward-api-c513f3a2-a64c-4f93-8ad9-9c25a8e3ab9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.556326384s
STEP: Saw pod success
Feb  7 14:33:21.551: INFO: Pod "downward-api-c513f3a2-a64c-4f93-8ad9-9c25a8e3ab9f" satisfied condition "success or failure"
Feb  7 14:33:21.557: INFO: Trying to get logs from node iruya-node pod downward-api-c513f3a2-a64c-4f93-8ad9-9c25a8e3ab9f container dapi-container: 
STEP: delete the pod
Feb  7 14:33:21.629: INFO: Waiting for pod downward-api-c513f3a2-a64c-4f93-8ad9-9c25a8e3ab9f to disappear
Feb  7 14:33:21.637: INFO: Pod downward-api-c513f3a2-a64c-4f93-8ad9-9c25a8e3ab9f no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:33:21.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4479" for this suite.
Feb  7 14:33:27.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:33:27.928: INFO: namespace downward-api-4479 deletion completed in 6.280910456s

• [SLOW TEST:19.072 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
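The Downward API env-var test above injects the pod's own name, namespace, and IP into its environment via fieldRef selectors. A sketch of the container spec (env var names illustrative; the container name matches the log):

    // Assumes imports: corev1 "k8s.io/api/core/v1".
    container := corev1.Container{
        Name:    "dapi-container",
        Image:   "docker.io/library/busybox:1.29",
        Command: []string{"sh", "-c", "env"},
        Env: []corev1.EnvVar{
            {Name: "POD_NAME", ValueFrom: &corev1.EnvVarSource{
                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"}}},
            {Name: "POD_NAMESPACE", ValueFrom: &corev1.EnvVarSource{
                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"}}},
            {Name: "POD_IP", ValueFrom: &corev1.EnvVarSource{
                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.podIP"}}},
        },
    }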
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:33:27.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb  7 14:33:36.209: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:33:36.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2886" for this suite.
Feb  7 14:33:42.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:33:42.432: INFO: namespace container-runtime-2886 deletion completed in 6.193775424s

• [SLOW TEST:14.504 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
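In the termination-message test above the pod succeeds and has written to its terminationMessagePath, so even with FallbackToLogsOnError set the message comes from the file (hence the "OK" assertion). A sketch of the container, with an illustrative write command:

    // Assumes imports: corev1 "k8s.io/api/core/v1".
    container := corev1.Container{
        Name:    "termination-message-container",
        Image:   "docker.io/library/busybox:1.29",
        Command: []string{"sh", "-c", "printf OK > /dev/termination-log"},
        // FallbackToLogsOnError only consults the log when the container
        // fails AND the message file is empty; here the file wins.
        TerminationMessagePath:   "/dev/termination-log",
        TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
    }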
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:33:42.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb  7 14:33:50.802: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:33:50.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5974" for this suite.
Feb  7 14:33:56.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:33:57.025: INFO: namespace container-runtime-5974 deletion completed in 6.183796058s

• [SLOW TEST:14.593 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
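The companion test above takes the other branch: the container fails and writes nothing to its terminationMessagePath, so FallbackToLogsOnError fills the message from the tail of the container log (the "DONE" assertion). A sketch of reading the result back, assuming the clientset from the first sketch and an illustrative pod name; the no-context Get signature matches this v1.15-era client-go:

    // Assumes imports: "fmt", "log", metav1 "k8s.io/apimachinery/pkg/apis/meta/v1",
    // plus the clientset built earlier.
    p, err := clientset.CoreV1().Pods("container-runtime-5974").Get("termination-message-container", metav1.GetOptions{})
    if err != nil {
        log.Fatal(err)
    }
    // For a terminated container, Message holds the termination message,
    // sourced from the file or (on error, with this policy) the log tail.
    if t := p.Status.ContainerStatuses[0].State.Terminated; t != nil {
        fmt.Printf("exit=%d message=%q\n", t.ExitCode, t.Message) // e.g. message="DONE"
    }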
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:33:57.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Feb  7 14:33:57.092: INFO: Waiting up to 5m0s for pod "var-expansion-095f53a9-c598-444e-9a31-ebfae7a17207" in namespace "var-expansion-2108" to be "success or failure"
Feb  7 14:33:57.120: INFO: Pod "var-expansion-095f53a9-c598-444e-9a31-ebfae7a17207": Phase="Pending", Reason="", readiness=false. Elapsed: 28.480526ms
Feb  7 14:33:59.131: INFO: Pod "var-expansion-095f53a9-c598-444e-9a31-ebfae7a17207": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038963362s
Feb  7 14:34:01.141: INFO: Pod "var-expansion-095f53a9-c598-444e-9a31-ebfae7a17207": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049249555s
Feb  7 14:34:03.148: INFO: Pod "var-expansion-095f53a9-c598-444e-9a31-ebfae7a17207": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056132843s
Feb  7 14:34:05.164: INFO: Pod "var-expansion-095f53a9-c598-444e-9a31-ebfae7a17207": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.071967283s
STEP: Saw pod success
Feb  7 14:34:05.164: INFO: Pod "var-expansion-095f53a9-c598-444e-9a31-ebfae7a17207" satisfied condition "success or failure"
Feb  7 14:34:05.170: INFO: Trying to get logs from node iruya-node pod var-expansion-095f53a9-c598-444e-9a31-ebfae7a17207 container dapi-container: 
STEP: delete the pod
Feb  7 14:34:05.240: INFO: Waiting for pod var-expansion-095f53a9-c598-444e-9a31-ebfae7a17207 to disappear
Feb  7 14:34:05.244: INFO: Pod var-expansion-095f53a9-c598-444e-9a31-ebfae7a17207 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:34:05.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2108" for this suite.
Feb  7 14:34:11.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:34:11.384: INFO: namespace var-expansion-2108 deletion completed in 6.133278004s

• [SLOW TEST:14.359 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
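
The pod this test creates is equivalent in spirit to the sketch below (pod name, variable, and value are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    env:
    - name: TEST_VAR
      value: "test-value"
    command: ["/bin/sh", "-c"]
    # $(TEST_VAR) in args is expanded by Kubernetes before the container starts.
    args: ["echo $(TEST_VAR)"]
EOF
kubectl logs var-expansion-demo   # expected output: test-value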
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:34:11.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:35:11.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1385" for this suite.
Feb  7 14:35:33.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:35:33.806: INFO: namespace container-probe-1385 deletion completed in 22.173807708s

• [SLOW TEST:82.421 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
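
The behaviour under test can be reproduced with a probe that always fails; names are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-fail-demo   # hypothetical name
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["/bin/sh", "-c", "sleep 3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails, so the pod never becomes Ready
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# READY stays 0/1 and RESTARTS stays 0: failing readiness probes gate
# traffic but, unlike liveness probes, never restart the container.
kubectl get pod readiness-fail-demo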
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:35:33.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb  7 14:35:33.970: INFO: Waiting up to 5m0s for pod "pod-716319ec-0b77-4310-8532-bed24fe01a75" in namespace "emptydir-4081" to be "success or failure"
Feb  7 14:35:34.004: INFO: Pod "pod-716319ec-0b77-4310-8532-bed24fe01a75": Phase="Pending", Reason="", readiness=false. Elapsed: 33.631988ms
Feb  7 14:35:36.007: INFO: Pod "pod-716319ec-0b77-4310-8532-bed24fe01a75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03701347s
Feb  7 14:35:38.016: INFO: Pod "pod-716319ec-0b77-4310-8532-bed24fe01a75": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045712911s
Feb  7 14:35:40.026: INFO: Pod "pod-716319ec-0b77-4310-8532-bed24fe01a75": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055723783s
Feb  7 14:35:42.032: INFO: Pod "pod-716319ec-0b77-4310-8532-bed24fe01a75": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062492529s
Feb  7 14:35:44.163: INFO: Pod "pod-716319ec-0b77-4310-8532-bed24fe01a75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.192979372s
STEP: Saw pod success
Feb  7 14:35:44.163: INFO: Pod "pod-716319ec-0b77-4310-8532-bed24fe01a75" satisfied condition "success or failure"
Feb  7 14:35:44.167: INFO: Trying to get logs from node iruya-node pod pod-716319ec-0b77-4310-8532-bed24fe01a75 container test-container: 
STEP: delete the pod
Feb  7 14:35:44.225: INFO: Waiting for pod pod-716319ec-0b77-4310-8532-bed24fe01a75 to disappear
Feb  7 14:35:44.342: INFO: Pod pod-716319ec-0b77-4310-8532-bed24fe01a75 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:35:44.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4081" for this suite.
Feb  7 14:35:50.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:35:50.577: INFO: namespace emptydir-4081 deletion completed in 6.229608356s

• [SLOW TEST:16.770 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
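
A hand-rolled version of this check might look as follows; the names and the exact assertions are illustrative, not the mounttest invocation the suite uses:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    # Create a file as root with mode 0666 and confirm the volume is tmpfs.
    command: ["/bin/sh", "-c",
      "touch /test/f && chmod 0666 /test/f && stat -c '%a' /test/f && mount | grep ' /test '"]
    volumeMounts:
    - name: scratch
      mountPath: /test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory   # backs the volume with tmpfs instead of node disk
EOF
kubectl logs emptydir-tmpfs-demo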
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:35:50.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-1e8a78ec-36b8-476d-9bc0-88947541f47e
STEP: Creating a pod to test consume configMaps
Feb  7 14:35:50.747: INFO: Waiting up to 5m0s for pod "pod-configmaps-e22fc9be-53a0-4d06-8cb1-42d0de089d42" in namespace "configmap-578" to be "success or failure"
Feb  7 14:35:50.769: INFO: Pod "pod-configmaps-e22fc9be-53a0-4d06-8cb1-42d0de089d42": Phase="Pending", Reason="", readiness=false. Elapsed: 21.861314ms
Feb  7 14:35:52.779: INFO: Pod "pod-configmaps-e22fc9be-53a0-4d06-8cb1-42d0de089d42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031090514s
Feb  7 14:35:54.786: INFO: Pod "pod-configmaps-e22fc9be-53a0-4d06-8cb1-42d0de089d42": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038089282s
Feb  7 14:35:56.793: INFO: Pod "pod-configmaps-e22fc9be-53a0-4d06-8cb1-42d0de089d42": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045670234s
Feb  7 14:35:58.807: INFO: Pod "pod-configmaps-e22fc9be-53a0-4d06-8cb1-42d0de089d42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059495294s
STEP: Saw pod success
Feb  7 14:35:58.807: INFO: Pod "pod-configmaps-e22fc9be-53a0-4d06-8cb1-42d0de089d42" satisfied condition "success or failure"
Feb  7 14:35:58.810: INFO: Trying to get logs from node iruya-node pod pod-configmaps-e22fc9be-53a0-4d06-8cb1-42d0de089d42 container configmap-volume-test: 
STEP: delete the pod
Feb  7 14:35:58.944: INFO: Waiting for pod pod-configmaps-e22fc9be-53a0-4d06-8cb1-42d0de089d42 to disappear
Feb  7 14:35:59.051: INFO: Pod pod-configmaps-e22fc9be-53a0-4d06-8cb1-42d0de089d42 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:35:59.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-578" for this suite.
Feb  7 14:36:05.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:36:05.246: INFO: namespace configmap-578 deletion completed in 6.185330515s

• [SLOW TEST:14.669 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
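
The defaultMode knob this test exercises can be tried directly; the configMap name, key, and mode are illustrative:

kubectl create configmap cm-mode-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-defaultmode-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    # -L dereferences the symlinks the atomic writer creates for each key.
    command: ["/bin/sh", "-c", "stat -L -c '%a' /etc/cm/data-1 && cat /etc/cm/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-mode-demo
      defaultMode: 0400   # every projected key is created with mode 0400
EOF
kubectl logs cm-defaultmode-demo   # expected: 400, then value-1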
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:36:05.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Feb  7 14:36:05.308: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix767852522/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:36:05.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8758" for this suite.
Feb  7 14:36:11.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:36:11.580: INFO: namespace kubectl-8758 deletion completed in 6.163184209s

• [SLOW TEST:6.334 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
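
This flag is easy to try by hand; the socket path is illustrative, and curl needs --unix-socket support (7.40 or newer):

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
# Query the API server through the socket instead of a TCP port:
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/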
------------------------------
S
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:36:11.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-6e72e22f-4d44-4aa1-9350-cf3ba87ed0e7
STEP: Creating secret with name secret-projected-all-test-volume-072642f3-6aa0-41f3-accf-3b0733e5a9e2
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb  7 14:36:11.767: INFO: Waiting up to 5m0s for pod "projected-volume-4da51f5b-67c3-4023-b1e8-e2a9a116f22f" in namespace "projected-2226" to be "success or failure"
Feb  7 14:36:11.886: INFO: Pod "projected-volume-4da51f5b-67c3-4023-b1e8-e2a9a116f22f": Phase="Pending", Reason="", readiness=false. Elapsed: 118.343631ms
Feb  7 14:36:13.977: INFO: Pod "projected-volume-4da51f5b-67c3-4023-b1e8-e2a9a116f22f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20925984s
Feb  7 14:36:15.988: INFO: Pod "projected-volume-4da51f5b-67c3-4023-b1e8-e2a9a116f22f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.22064777s
Feb  7 14:36:18.003: INFO: Pod "projected-volume-4da51f5b-67c3-4023-b1e8-e2a9a116f22f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.236002356s
Feb  7 14:36:20.025: INFO: Pod "projected-volume-4da51f5b-67c3-4023-b1e8-e2a9a116f22f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.257918287s
Feb  7 14:36:22.032: INFO: Pod "projected-volume-4da51f5b-67c3-4023-b1e8-e2a9a116f22f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.26514127s
STEP: Saw pod success
Feb  7 14:36:22.032: INFO: Pod "projected-volume-4da51f5b-67c3-4023-b1e8-e2a9a116f22f" satisfied condition "success or failure"
Feb  7 14:36:22.036: INFO: Trying to get logs from node iruya-node pod projected-volume-4da51f5b-67c3-4023-b1e8-e2a9a116f22f container projected-all-volume-test: 
STEP: delete the pod
Feb  7 14:36:22.085: INFO: Waiting for pod projected-volume-4da51f5b-67c3-4023-b1e8-e2a9a116f22f to disappear
Feb  7 14:36:22.093: INFO: Pod projected-volume-4da51f5b-67c3-4023-b1e8-e2a9a116f22f no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:36:22.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2226" for this suite.
Feb  7 14:36:28.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:36:28.344: INFO: namespace projected-2226 deletion completed in 6.246479426s

• [SLOW TEST:16.764 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
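
A sketch combining all three projection sources, roughly what this test assembles (all names are illustrative):

kubectl create configmap projected-cm-demo --from-literal=configmap-data=from-configmap
kubectl create secret generic projected-secret-demo --from-literal=secret-data=from-secret
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-all-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox:1.29
    command: ["/bin/sh", "-c", "cat /all/configmap-data /all/secret-data /all/podname"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all
  volumes:
  - name: all-in-one
    projected:            # one volume fed by configMap, secret, and downward API
      sources:
      - configMap:
          name: projected-cm-demo
      - secret:
          name: projected-secret-demo
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
kubectl logs projected-all-demo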
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:36:28.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Feb  7 14:36:28.407: INFO: Waiting up to 5m0s for pod "pod-066da2ba-303f-4e25-9a06-355aac1dd7b0" in namespace "emptydir-4328" to be "success or failure"
Feb  7 14:36:28.463: INFO: Pod "pod-066da2ba-303f-4e25-9a06-355aac1dd7b0": Phase="Pending", Reason="", readiness=false. Elapsed: 56.089787ms
Feb  7 14:36:30.477: INFO: Pod "pod-066da2ba-303f-4e25-9a06-355aac1dd7b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070211797s
Feb  7 14:36:32.487: INFO: Pod "pod-066da2ba-303f-4e25-9a06-355aac1dd7b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079485603s
Feb  7 14:36:34.534: INFO: Pod "pod-066da2ba-303f-4e25-9a06-355aac1dd7b0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127302936s
Feb  7 14:36:36.591: INFO: Pod "pod-066da2ba-303f-4e25-9a06-355aac1dd7b0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.183756528s
Feb  7 14:36:38.601: INFO: Pod "pod-066da2ba-303f-4e25-9a06-355aac1dd7b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.19416397s
STEP: Saw pod success
Feb  7 14:36:38.601: INFO: Pod "pod-066da2ba-303f-4e25-9a06-355aac1dd7b0" satisfied condition "success or failure"
Feb  7 14:36:38.608: INFO: Trying to get logs from node iruya-node pod pod-066da2ba-303f-4e25-9a06-355aac1dd7b0 container test-container: 
STEP: delete the pod
Feb  7 14:36:38.759: INFO: Waiting for pod pod-066da2ba-303f-4e25-9a06-355aac1dd7b0 to disappear
Feb  7 14:36:38.884: INFO: Pod pod-066da2ba-303f-4e25-9a06-355aac1dd7b0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:36:38.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4328" for this suite.
Feb  7 14:36:44.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:36:45.106: INFO: namespace emptydir-4328 deletion completed in 6.21353613s

• [SLOW TEST:16.761 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
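
The default-medium case differs from the tmpfs variant only in the volume definition; a minimal sketch (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-default-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    # An emptyDir on the default medium is created world-writable (0777).
    command: ["/bin/sh", "-c", "stat -c '%a' /test"]
    volumeMounts:
    - name: scratch
      mountPath: /test
  volumes:
  - name: scratch
    emptyDir: {}   # omitting 'medium' selects the node's default storage
EOF
kubectl logs emptydir-default-demo   # expected: 777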
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:36:45.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb  7 14:36:45.289: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8646,SelfLink:/api/v1/namespaces/watch-8646/configmaps/e2e-watch-test-label-changed,UID:e85dd2ca-d880-4fab-9ab7-176ebbd1ef23,ResourceVersion:23456600,Generation:0,CreationTimestamp:2020-02-07 14:36:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  7 14:36:45.289: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8646,SelfLink:/api/v1/namespaces/watch-8646/configmaps/e2e-watch-test-label-changed,UID:e85dd2ca-d880-4fab-9ab7-176ebbd1ef23,ResourceVersion:23456601,Generation:0,CreationTimestamp:2020-02-07 14:36:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb  7 14:36:45.289: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8646,SelfLink:/api/v1/namespaces/watch-8646/configmaps/e2e-watch-test-label-changed,UID:e85dd2ca-d880-4fab-9ab7-176ebbd1ef23,ResourceVersion:23456602,Generation:0,CreationTimestamp:2020-02-07 14:36:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb  7 14:36:55.358: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8646,SelfLink:/api/v1/namespaces/watch-8646/configmaps/e2e-watch-test-label-changed,UID:e85dd2ca-d880-4fab-9ab7-176ebbd1ef23,ResourceVersion:23456617,Generation:0,CreationTimestamp:2020-02-07 14:36:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  7 14:36:55.358: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8646,SelfLink:/api/v1/namespaces/watch-8646/configmaps/e2e-watch-test-label-changed,UID:e85dd2ca-d880-4fab-9ab7-176ebbd1ef23,ResourceVersion:23456618,Generation:0,CreationTimestamp:2020-02-07 14:36:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb  7 14:36:55.359: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8646,SelfLink:/api/v1/namespaces/watch-8646/configmaps/e2e-watch-test-label-changed,UID:e85dd2ca-d880-4fab-9ab7-176ebbd1ef23,ResourceVersion:23456619,Generation:0,CreationTimestamp:2020-02-07 14:36:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:36:55.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8646" for this suite.
Feb  7 14:37:01.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:37:01.567: INFO: namespace watch-8646 deletion completed in 6.156998469s

• [SLOW TEST:16.460 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
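
The raw watch events logged above (ADDED/MODIFIED/DELETED) can be observed directly through the API; the namespace and names below are illustrative:

kubectl proxy --port=8001 &
# Stream watch events for configmaps matching the label selector:
curl -N 'http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=true&labelSelector=watch-this-configmap%3Dlabel-changed-and-restored' &
kubectl create configmap e2e-watch-demo
# Labelling the object into the selector surfaces as an ADDED event:
kubectl label configmap e2e-watch-demo watch-this-configmap=label-changed-and-restored
# Re-labelling it out of the selector surfaces as a DELETED event, even
# though the object itself still exists:
kubectl label --overwrite configmap e2e-watch-demo watch-this-configmap=no-longer-matching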
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:37:01.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-e920a9d4-2dcf-4e49-87e4-14867cd921b7
STEP: Creating secret with name s-test-opt-upd-da01d470-b90b-4a71-bebf-69b3b57987fb
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-e920a9d4-2dcf-4e49-87e4-14867cd921b7
STEP: Updating secret s-test-opt-upd-da01d470-b90b-4a71-bebf-69b3b57987fb
STEP: Creating secret with name s-test-opt-create-c60dc4c9-9aac-46ab-82ba-f7dc0b7ed769
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:38:41.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8545" for this suite.
Feb  7 14:39:03.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:39:03.823: INFO: namespace secrets-8545 deletion completed in 22.186542371s

• [SLOW TEST:122.256 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
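
The optional-secret mechanics can be sketched as below; the secret and pod names are illustrative, and the kubelet's sync period determines how quickly the mounted copy refreshes:

kubectl create secret generic s-opt-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-optional-demo   # hypothetical name
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["/bin/sh", "-c", "while true; do cat /etc/sec/data-1 2>/dev/null; echo; sleep 5; done"]
    volumeMounts:
    - name: sec
      mountPath: /etc/sec
  volumes:
  - name: sec
    secret:
      secretName: s-opt-demo
      optional: true   # the pod starts (and keeps running) even if the secret is deleted
EOF
# Update the secret in place; the container eventually observes the new value:
kubectl create secret generic s-opt-demo --from-literal=data-1=value-2 \
  --dry-run -o yaml | kubectl apply -f -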
------------------------------
SSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:39:03.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-7475/configmap-test-20fa960a-07f6-4053-8e60-6864c870f591
STEP: Creating a pod to test consume configMaps
Feb  7 14:39:03.953: INFO: Waiting up to 5m0s for pod "pod-configmaps-75da945a-0116-4027-9c3d-0aceb37eebc4" in namespace "configmap-7475" to be "success or failure"
Feb  7 14:39:03.960: INFO: Pod "pod-configmaps-75da945a-0116-4027-9c3d-0aceb37eebc4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.235375ms
Feb  7 14:39:05.967: INFO: Pod "pod-configmaps-75da945a-0116-4027-9c3d-0aceb37eebc4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014360743s
Feb  7 14:39:07.980: INFO: Pod "pod-configmaps-75da945a-0116-4027-9c3d-0aceb37eebc4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027448993s
Feb  7 14:39:09.998: INFO: Pod "pod-configmaps-75da945a-0116-4027-9c3d-0aceb37eebc4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045547874s
Feb  7 14:39:12.004: INFO: Pod "pod-configmaps-75da945a-0116-4027-9c3d-0aceb37eebc4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05089636s
Feb  7 14:39:14.009: INFO: Pod "pod-configmaps-75da945a-0116-4027-9c3d-0aceb37eebc4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.055860715s
STEP: Saw pod success
Feb  7 14:39:14.009: INFO: Pod "pod-configmaps-75da945a-0116-4027-9c3d-0aceb37eebc4" satisfied condition "success or failure"
Feb  7 14:39:14.011: INFO: Trying to get logs from node iruya-node pod pod-configmaps-75da945a-0116-4027-9c3d-0aceb37eebc4 container env-test: 
STEP: delete the pod
Feb  7 14:39:14.057: INFO: Waiting for pod pod-configmaps-75da945a-0116-4027-9c3d-0aceb37eebc4 to disappear
Feb  7 14:39:14.064: INFO: Pod pod-configmaps-75da945a-0116-4027-9c3d-0aceb37eebc4 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:39:14.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7475" for this suite.
Feb  7 14:39:20.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:39:20.227: INFO: namespace configmap-7475 deletion completed in 6.16000451s

• [SLOW TEST:16.404 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
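
The env-var consumption path looks roughly like this (configMap, key, and variable names are illustrative):

kubectl create configmap configmap-env-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-env-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29
    command: ["/bin/sh", "-c", "echo $CONFIG_DATA"]
    env:
    - name: CONFIG_DATA
      valueFrom:
        configMapKeyRef:
          name: configmap-env-demo
          key: data-1
EOF
kubectl logs cm-env-demo   # expected output: value-1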
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:39:20.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  7 14:39:20.390: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb  7 14:39:20.404: INFO: Number of nodes with available pods: 0
Feb  7 14:39:20.404: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb  7 14:39:20.441: INFO: Number of nodes with available pods: 0
Feb  7 14:39:20.441: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:39:21.448: INFO: Number of nodes with available pods: 0
Feb  7 14:39:21.448: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:39:22.451: INFO: Number of nodes with available pods: 0
Feb  7 14:39:22.451: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:39:23.449: INFO: Number of nodes with available pods: 0
Feb  7 14:39:23.449: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:39:24.450: INFO: Number of nodes with available pods: 0
Feb  7 14:39:24.450: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:39:25.451: INFO: Number of nodes with available pods: 0
Feb  7 14:39:25.451: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:39:26.448: INFO: Number of nodes with available pods: 0
Feb  7 14:39:26.448: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:39:27.448: INFO: Number of nodes with available pods: 0
Feb  7 14:39:27.448: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:39:28.452: INFO: Number of nodes with available pods: 1
Feb  7 14:39:28.452: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb  7 14:39:28.590: INFO: Number of nodes with available pods: 1
Feb  7 14:39:28.591: INFO: Number of running nodes: 0, number of available pods: 1
Feb  7 14:39:29.601: INFO: Number of nodes with available pods: 0
Feb  7 14:39:29.601: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb  7 14:39:29.632: INFO: Number of nodes with available pods: 0
Feb  7 14:39:29.632: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:39:30.643: INFO: Number of nodes with available pods: 0
Feb  7 14:39:30.643: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:39:31.643: INFO: Number of nodes with available pods: 0
Feb  7 14:39:31.643: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:39:32.647: INFO: Number of nodes with available pods: 0
Feb  7 14:39:32.647: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:39:33.644: INFO: Number of nodes with available pods: 0
Feb  7 14:39:33.644: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:39:34.640: INFO: Number of nodes with available pods: 0
Feb  7 14:39:34.640: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:39:35.642: INFO: Number of nodes with available pods: 0
Feb  7 14:39:35.642: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:39:36.648: INFO: Number of nodes with available pods: 0
Feb  7 14:39:36.648: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:39:37.640: INFO: Number of nodes with available pods: 0
Feb  7 14:39:37.640: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:39:38.640: INFO: Number of nodes with available pods: 0
Feb  7 14:39:38.640: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:39:39.644: INFO: Number of nodes with available pods: 0
Feb  7 14:39:39.645: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:39:40.644: INFO: Number of nodes with available pods: 0
Feb  7 14:39:40.644: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:39:41.641: INFO: Number of nodes with available pods: 0
Feb  7 14:39:41.641: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:39:42.755: INFO: Number of nodes with available pods: 0
Feb  7 14:39:42.755: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:39:43.649: INFO: Number of nodes with available pods: 0
Feb  7 14:39:43.649: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:39:44.666: INFO: Number of nodes with available pods: 0
Feb  7 14:39:44.666: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:39:45.639: INFO: Number of nodes with available pods: 0
Feb  7 14:39:45.639: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:39:46.643: INFO: Number of nodes with available pods: 0
Feb  7 14:39:46.643: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:39:47.641: INFO: Number of nodes with available pods: 0
Feb  7 14:39:47.641: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:39:48.651: INFO: Number of nodes with available pods: 0
Feb  7 14:39:48.651: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:39:49.642: INFO: Number of nodes with available pods: 0
Feb  7 14:39:49.642: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:39:50.639: INFO: Number of nodes with available pods: 0
Feb  7 14:39:50.640: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:39:51.645: INFO: Number of nodes with available pods: 0
Feb  7 14:39:51.645: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:39:52.655: INFO: Number of nodes with available pods: 0
Feb  7 14:39:52.655: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:39:53.647: INFO: Number of nodes with available pods: 0
Feb  7 14:39:53.647: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:39:54.644: INFO: Number of nodes with available pods: 1
Feb  7 14:39:54.644: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-462, will wait for the garbage collector to delete the pods
Feb  7 14:39:54.717: INFO: Deleting DaemonSet.extensions daemon-set took: 11.231946ms
Feb  7 14:39:55.018: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.265142ms
Feb  7 14:40:06.642: INFO: Number of nodes with available pods: 0
Feb  7 14:40:06.642: INFO: Number of running nodes: 0, number of available pods: 0
Feb  7 14:40:06.646: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-462/daemonsets","resourceVersion":"23456997"},"items":null}

Feb  7 14:40:06.648: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-462/pods","resourceVersion":"23456997"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:40:06.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-462" for this suite.
Feb  7 14:40:12.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:40:12.833: INFO: namespace daemonsets-462 deletion completed in 6.133977483s

• [SLOW TEST:52.606 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
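
The launch/unschedule cycle driven by node labels above corresponds to a DaemonSet along these lines (names and label values are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set-demo   # hypothetical name
spec:
  selector:
    matchLabels:
      app: daemon-set-demo
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set-demo
    spec:
      nodeSelector:
        color: blue   # daemon pods run only on nodes carrying this label
      containers:
      - name: app
        image: busybox:1.29
        command: ["/bin/sh", "-c", "sleep 3600"]
EOF
# No node matches yet, so no pods run. Labelling a node schedules a daemon
# pod there; overwriting the label evicts it again:
kubectl label node iruya-node color=blue
kubectl label --overwrite node iruya-node color=green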
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:40:12.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0207 14:40:44.029721       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  7 14:40:44.029: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:40:44.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5522" for this suite.
Feb  7 14:40:52.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:40:52.242: INFO: namespace gc-5522 deletion completed in 8.209158835s

• [SLOW TEST:39.409 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
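
The Orphan propagation policy can be exercised from the CLI; the deployment name and image are illustrative (on 1.15-era kubectl the flag is --cascade=false, spelled --cascade=orphan in newer releases):

kubectl create deployment gc-demo --image=nginx
# Delete only the Deployment, leaving its ReplicaSet (and pods) behind:
kubectl delete deployment gc-demo --cascade=false
# The ReplicaSet survives, now without an owning Deployment:
kubectl get rs -l app=gc-demo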
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:40:52.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-489r
STEP: Creating a pod to test atomic-volume-subpath
Feb  7 14:40:52.490: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-489r" in namespace "subpath-1157" to be "success or failure"
Feb  7 14:40:52.530: INFO: Pod "pod-subpath-test-downwardapi-489r": Phase="Pending", Reason="", readiness=false. Elapsed: 39.562526ms
Feb  7 14:40:54.785: INFO: Pod "pod-subpath-test-downwardapi-489r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.294823485s
Feb  7 14:40:56.795: INFO: Pod "pod-subpath-test-downwardapi-489r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.305281157s
Feb  7 14:40:58.802: INFO: Pod "pod-subpath-test-downwardapi-489r": Phase="Pending", Reason="", readiness=false. Elapsed: 6.311730834s
Feb  7 14:41:00.808: INFO: Pod "pod-subpath-test-downwardapi-489r": Phase="Pending", Reason="", readiness=false. Elapsed: 8.317551104s
Feb  7 14:41:02.814: INFO: Pod "pod-subpath-test-downwardapi-489r": Phase="Running", Reason="", readiness=true. Elapsed: 10.323770255s
Feb  7 14:41:04.822: INFO: Pod "pod-subpath-test-downwardapi-489r": Phase="Running", Reason="", readiness=true. Elapsed: 12.332524935s
Feb  7 14:41:06.830: INFO: Pod "pod-subpath-test-downwardapi-489r": Phase="Running", Reason="", readiness=true. Elapsed: 14.339589242s
Feb  7 14:41:08.837: INFO: Pod "pod-subpath-test-downwardapi-489r": Phase="Running", Reason="", readiness=true. Elapsed: 16.346629824s
Feb  7 14:41:10.844: INFO: Pod "pod-subpath-test-downwardapi-489r": Phase="Running", Reason="", readiness=true. Elapsed: 18.353534953s
Feb  7 14:41:12.864: INFO: Pod "pod-subpath-test-downwardapi-489r": Phase="Running", Reason="", readiness=true. Elapsed: 20.37365007s
Feb  7 14:41:14.878: INFO: Pod "pod-subpath-test-downwardapi-489r": Phase="Running", Reason="", readiness=true. Elapsed: 22.388133223s
Feb  7 14:41:16.896: INFO: Pod "pod-subpath-test-downwardapi-489r": Phase="Running", Reason="", readiness=true. Elapsed: 24.405761659s
Feb  7 14:41:18.911: INFO: Pod "pod-subpath-test-downwardapi-489r": Phase="Running", Reason="", readiness=true. Elapsed: 26.421122158s
Feb  7 14:41:20.926: INFO: Pod "pod-subpath-test-downwardapi-489r": Phase="Running", Reason="", readiness=true. Elapsed: 28.435981199s
Feb  7 14:41:22.934: INFO: Pod "pod-subpath-test-downwardapi-489r": Phase="Running", Reason="", readiness=true. Elapsed: 30.443687513s
Feb  7 14:41:24.941: INFO: Pod "pod-subpath-test-downwardapi-489r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.451350413s
STEP: Saw pod success
Feb  7 14:41:24.941: INFO: Pod "pod-subpath-test-downwardapi-489r" satisfied condition "success or failure"
Feb  7 14:41:24.947: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-489r container test-container-subpath-downwardapi-489r: 
STEP: delete the pod
Feb  7 14:41:25.198: INFO: Waiting for pod pod-subpath-test-downwardapi-489r to disappear
Feb  7 14:41:25.204: INFO: Pod pod-subpath-test-downwardapi-489r no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-489r
Feb  7 14:41:25.204: INFO: Deleting pod "pod-subpath-test-downwardapi-489r" in namespace "subpath-1157"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:41:25.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1157" for this suite.
Feb  7 14:41:31.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:41:31.397: INFO: namespace subpath-1157 deletion completed in 6.167323289s

• [SLOW TEST:39.154 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
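
A compact sketch of mounting a single downward-API file via subPath, similar in shape to what this test sets up (names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-downward-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    # subPath mounts one file out of the atomically written volume.
    command: ["/bin/sh", "-c", "cat /etc/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podname
      subPath: podname
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
kubectl logs subpath-downward-demo   # prints the pod's own name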
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:41:31.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-0ec76221-9743-4870-a1f9-02ce313d5215
STEP: Creating a pod to test consume secrets
Feb  7 14:41:31.708: INFO: Waiting up to 5m0s for pod "pod-secrets-df401c1c-ab15-4007-a446-3b71eab794cc" in namespace "secrets-6102" to be "success or failure"
Feb  7 14:41:31.732: INFO: Pod "pod-secrets-df401c1c-ab15-4007-a446-3b71eab794cc": Phase="Pending", Reason="", readiness=false. Elapsed: 23.955965ms
Feb  7 14:41:33.742: INFO: Pod "pod-secrets-df401c1c-ab15-4007-a446-3b71eab794cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03406107s
Feb  7 14:41:35.749: INFO: Pod "pod-secrets-df401c1c-ab15-4007-a446-3b71eab794cc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041285278s
Feb  7 14:41:37.756: INFO: Pod "pod-secrets-df401c1c-ab15-4007-a446-3b71eab794cc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048363985s
Feb  7 14:41:39.764: INFO: Pod "pod-secrets-df401c1c-ab15-4007-a446-3b71eab794cc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05609938s
Feb  7 14:41:41.783: INFO: Pod "pod-secrets-df401c1c-ab15-4007-a446-3b71eab794cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.075479313s
STEP: Saw pod success
Feb  7 14:41:41.784: INFO: Pod "pod-secrets-df401c1c-ab15-4007-a446-3b71eab794cc" satisfied condition "success or failure"
Feb  7 14:41:41.799: INFO: Trying to get logs from node iruya-node pod pod-secrets-df401c1c-ab15-4007-a446-3b71eab794cc container secret-volume-test: 
STEP: delete the pod
Feb  7 14:41:42.103: INFO: Waiting for pod pod-secrets-df401c1c-ab15-4007-a446-3b71eab794cc to disappear
Feb  7 14:41:42.119: INFO: Pod pod-secrets-df401c1c-ab15-4007-a446-3b71eab794cc no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:41:42.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6102" for this suite.
Feb  7 14:41:48.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:41:48.490: INFO: namespace secrets-6102 deletion completed in 6.363979266s
STEP: Destroying namespace "secret-namespace-8033" for this suite.
Feb  7 14:41:54.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:41:54.722: INFO: namespace secret-namespace-8033 deletion completed in 6.231565075s

• [SLOW TEST:23.325 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
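
The namespace isolation being verified can be shown directly; all names below are illustrative:

kubectl create namespace secret-ns-demo
kubectl create secret generic shared-name --from-literal=data-1=from-other-ns -n secret-ns-demo
kubectl create secret generic shared-name --from-literal=data-1=from-pod-ns
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-ns-demo-pod   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/sec/data-1"]
    volumeMounts:
    - name: sec
      mountPath: /etc/sec
  volumes:
  - name: sec
    secret:
      secretName: shared-name   # always resolved in the pod's own namespace
EOF
kubectl logs secret-ns-demo-pod   # expected output: from-pod-ns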
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:41:54.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5568
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-5568
STEP: Creating statefulset with conflicting port in namespace statefulset-5568
STEP: Waiting until pod test-pod is running in namespace statefulset-5568
STEP: Waiting until stateful pod ss-0 has been recreated and deleted at least once in namespace statefulset-5568
Feb  7 14:42:08.973: INFO: Observed stateful pod in namespace: statefulset-5568, name: ss-0, uid: cba5b943-7e1a-4c11-9f31-3b0e616f0787, status phase: Failed. Waiting for statefulset controller to delete.
Feb  7 14:42:08.976: INFO: Observed stateful pod in namespace: statefulset-5568, name: ss-0, uid: cba5b943-7e1a-4c11-9f31-3b0e616f0787, status phase: Failed. Waiting for statefulset controller to delete.
Feb  7 14:42:09.036: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5568
STEP: Removing pod with conflicting port in namespace statefulset-5568
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-5568 and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  7 14:42:21.140: INFO: Deleting all statefulset in ns statefulset-5568
Feb  7 14:42:21.146: INFO: Scaling statefulset ss to 0
Feb  7 14:42:41.184: INFO: Waiting for statefulset status.replicas updated to 0
Feb  7 14:42:41.189: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:42:41.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5568" for this suite.
Feb  7 14:42:47.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:42:47.508: INFO: namespace statefulset-5568 deletion completed in 6.166746199s

• [SLOW TEST:52.784 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
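
The recreate behaviour can be seen with any minimal StatefulSet; the sketch below (names illustrative, and assuming a headless Service named "test" like the one the suite creates) uses a manual pod deletion in place of the port-conflict failure the test induces, but shows the same controller restoring ss-0's identity:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss   # hypothetical, mirrors the log above
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: main
        image: busybox:1.29
        command: ["/bin/sh", "-c", "sleep 3600"]
EOF
kubectl delete pod ss-0   # simulate the pod being lost (here: deleted by hand)
kubectl get pod ss-0 -w   # the controller recreates ss-0 under the same name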
------------------------------
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:42:47.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-9781e320-f493-40c7-ad8b-7430e4d2c395
STEP: Creating a pod to test consume configMaps
Feb  7 14:42:47.632: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3aeae4ec-2df0-41b3-8347-86ed01106f08" in namespace "projected-9564" to be "success or failure"
Feb  7 14:42:47.634: INFO: Pod "pod-projected-configmaps-3aeae4ec-2df0-41b3-8347-86ed01106f08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.496763ms
Feb  7 14:42:49.644: INFO: Pod "pod-projected-configmaps-3aeae4ec-2df0-41b3-8347-86ed01106f08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011781001s
Feb  7 14:42:51.663: INFO: Pod "pod-projected-configmaps-3aeae4ec-2df0-41b3-8347-86ed01106f08": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030848947s
Feb  7 14:42:53.672: INFO: Pod "pod-projected-configmaps-3aeae4ec-2df0-41b3-8347-86ed01106f08": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040445826s
Feb  7 14:42:55.680: INFO: Pod "pod-projected-configmaps-3aeae4ec-2df0-41b3-8347-86ed01106f08": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048200878s
Feb  7 14:42:57.688: INFO: Pod "pod-projected-configmaps-3aeae4ec-2df0-41b3-8347-86ed01106f08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.056148495s
STEP: Saw pod success
Feb  7 14:42:57.688: INFO: Pod "pod-projected-configmaps-3aeae4ec-2df0-41b3-8347-86ed01106f08" satisfied condition "success or failure"
Feb  7 14:42:57.692: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-3aeae4ec-2df0-41b3-8347-86ed01106f08 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  7 14:42:57.804: INFO: Waiting for pod pod-projected-configmaps-3aeae4ec-2df0-41b3-8347-86ed01106f08 to disappear
Feb  7 14:42:57.811: INFO: Pod pod-projected-configmaps-3aeae4ec-2df0-41b3-8347-86ed01106f08 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:42:57.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9564" for this suite.
Feb  7 14:43:03.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:43:03.979: INFO: namespace projected-9564 deletion completed in 6.16068675s

• [SLOW TEST:16.471 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:43:03.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
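The downward API pod is likewise not echoed; a sketch of a downwardAPI volume that sets a per-item file mode (the point of this spec), under assumed names and values:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative; the real name is generated above
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29        # assumed image
    command: ["sh", "-c", "ls -l /etc/podinfo"]  # print the mode for verification (assumed)
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400                               # the per-item mode under test (assumed value)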
Feb  7 14:43:04.127: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d6c027ce-079f-4bea-9e76-6e36f89c5e41" in namespace "projected-7157" to be "success or failure"
Feb  7 14:43:04.180: INFO: Pod "downwardapi-volume-d6c027ce-079f-4bea-9e76-6e36f89c5e41": Phase="Pending", Reason="", readiness=false. Elapsed: 53.336974ms
Feb  7 14:43:06.193: INFO: Pod "downwardapi-volume-d6c027ce-079f-4bea-9e76-6e36f89c5e41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066260268s
Feb  7 14:43:08.203: INFO: Pod "downwardapi-volume-d6c027ce-079f-4bea-9e76-6e36f89c5e41": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075878468s
Feb  7 14:43:10.212: INFO: Pod "downwardapi-volume-d6c027ce-079f-4bea-9e76-6e36f89c5e41": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085430747s
Feb  7 14:43:12.227: INFO: Pod "downwardapi-volume-d6c027ce-079f-4bea-9e76-6e36f89c5e41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.099829124s
STEP: Saw pod success
Feb  7 14:43:12.227: INFO: Pod "downwardapi-volume-d6c027ce-079f-4bea-9e76-6e36f89c5e41" satisfied condition "success or failure"
Feb  7 14:43:12.231: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d6c027ce-079f-4bea-9e76-6e36f89c5e41 container client-container: 
STEP: delete the pod
Feb  7 14:43:12.387: INFO: Waiting for pod downwardapi-volume-d6c027ce-079f-4bea-9e76-6e36f89c5e41 to disappear
Feb  7 14:43:12.398: INFO: Pod downwardapi-volume-d6c027ce-079f-4bea-9e76-6e36f89c5e41 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:43:12.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7157" for this suite.
Feb  7 14:43:18.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:43:18.565: INFO: namespace projected-7157 deletion completed in 6.158451749s

• [SLOW TEST:14.585 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when creating a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:43:18.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when creating a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
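The hook pod is not echoed in the log; a rough sketch of a pod carrying a preStop exec hook that calls back to the handler container created in BeforeEach (image, command, and HANDLER_POD_IP are placeholders/assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: pod-with-prestop-exec-hook
    image: docker.io/library/nginx:1.14-alpine   # assumed image
    lifecycle:
      preStop:
        exec:
          # HANDLER_POD_IP is a placeholder for the handler pod's IP, which this log never prints
          command: ["sh", "-c", "wget -qO- http://HANDLER_POD_IP:8080/echo?msg=prestop"]

Deleting the pod triggers the preStop exec before the container is killed; the long disappear-polling below is the pod draining through its termination grace period.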
STEP: delete the pod with lifecycle hook
Feb  7 14:43:34.761: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 14:43:34.813: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 14:43:36.813: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 14:43:36.823: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 14:43:38.814: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 14:43:38.826: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 14:43:40.813: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 14:43:40.820: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 14:43:42.813: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 14:43:42.824: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 14:43:44.813: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 14:43:44.825: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 14:43:46.813: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 14:43:46.825: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 14:43:48.813: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 14:43:48.825: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 14:43:50.814: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 14:43:50.827: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 14:43:52.814: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 14:43:52.836: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 14:43:54.813: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 14:43:54.829: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 14:43:56.813: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 14:43:56.831: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 14:43:58.813: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 14:43:58.819: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 14:44:00.813: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 14:44:00.829: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 14:44:02.813: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 14:44:02.825: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 14:44:04.813: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 14:44:04.823: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 14:44:06.813: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 14:44:06.820: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:44:06.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3096" for this suite.
Feb  7 14:44:28.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:44:28.981: INFO: namespace container-lifecycle-hook-3096 deletion completed in 22.117393342s

• [SLOW TEST:70.416 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when creating a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:44:28.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Feb  7 14:44:29.020: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb  7 14:44:29.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6967'
Feb  7 14:44:31.701: INFO: stderr: ""
Feb  7 14:44:31.702: INFO: stdout: "service/redis-slave created\n"
Feb  7 14:44:31.702: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb  7 14:44:31.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6967'
Feb  7 14:44:32.253: INFO: stderr: ""
Feb  7 14:44:32.253: INFO: stdout: "service/redis-master created\n"
Feb  7 14:44:32.253: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb  7 14:44:32.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6967'
Feb  7 14:44:32.806: INFO: stderr: ""
Feb  7 14:44:32.807: INFO: stdout: "service/frontend created\n"
Feb  7 14:44:32.807: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb  7 14:44:32.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6967'
Feb  7 14:44:33.319: INFO: stderr: ""
Feb  7 14:44:33.320: INFO: stdout: "deployment.apps/frontend created\n"
Feb  7 14:44:33.320: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb  7 14:44:33.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6967'
Feb  7 14:44:33.890: INFO: stderr: ""
Feb  7 14:44:33.890: INFO: stdout: "deployment.apps/redis-master created\n"
Feb  7 14:44:33.890: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb  7 14:44:33.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6967'
Feb  7 14:44:35.500: INFO: stderr: ""
Feb  7 14:44:35.500: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Feb  7 14:44:35.500: INFO: Waiting for all frontend pods to be Running.
Feb  7 14:45:00.552: INFO: Waiting for frontend to serve content.
Feb  7 14:45:00.666: INFO: Trying to add a new entry to the guestbook.
Feb  7 14:45:00.769: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Feb  7 14:45:00.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6967'
Feb  7 14:45:01.030: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  7 14:45:01.031: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb  7 14:45:01.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6967'
Feb  7 14:45:01.156: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  7 14:45:01.156: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  7 14:45:01.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6967'
Feb  7 14:45:01.385: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  7 14:45:01.385: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  7 14:45:01.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6967'
Feb  7 14:45:01.530: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  7 14:45:01.531: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  7 14:45:01.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6967'
Feb  7 14:45:01.653: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  7 14:45:01.653: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  7 14:45:01.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6967'
Feb  7 14:45:01.811: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  7 14:45:01.811: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:45:01.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6967" for this suite.
Feb  7 14:45:49.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:45:50.028: INFO: namespace kubectl-6967 deletion completed in 48.179077463s

• [SLOW TEST:81.047 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:45:50.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  7 14:45:50.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-7646'
Feb  7 14:45:50.628: INFO: stderr: ""
Feb  7 14:45:50.628: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Feb  7 14:46:00.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-7646 -o json'
Feb  7 14:46:00.827: INFO: stderr: ""
Feb  7 14:46:00.827: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-07T14:45:50Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-7646\",\n        \"resourceVersion\": \"23458114\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-7646/pods/e2e-test-nginx-pod\",\n        \"uid\": \"261af5ef-4258-491b-b9fe-938294632ef1\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-zk5cv\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-zk5cv\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-zk5cv\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-07T14:45:50Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-07T14:45:58Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-07T14:45:58Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-07T14:45:50Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://7e9b1ddedea27427d32d1a62af5844e53f77331e09e920d9a883567f2e46a1d9\",\n                
\"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-02-07T14:45:56Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-07T14:45:50Z\"\n    }\n}\n"
STEP: replace the image in the pod
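The manifest piped to kubectl replace is not echoed. It is, in essence, the pod JSON fetched above with only the container image swapped (image is the one mutable field exercised here); a sketch:

apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  namespace: kubectl-7646
  labels:
    run: e2e-test-nginx-pod
spec:
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/busybox:1.29   # the only change relative to the JSON above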
Feb  7 14:46:00.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7646'
Feb  7 14:46:01.327: INFO: stderr: ""
Feb  7 14:46:01.327: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Feb  7 14:46:01.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-7646'
Feb  7 14:46:09.500: INFO: stderr: ""
Feb  7 14:46:09.501: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:46:09.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7646" for this suite.
Feb  7 14:46:15.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:46:15.793: INFO: namespace kubectl-7646 deletion completed in 6.286611387s

• [SLOW TEST:25.765 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:46:15.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  7 14:46:15.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3468'
Feb  7 14:46:16.008: INFO: stderr: ""
Feb  7 14:46:16.008: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Feb  7 14:46:16.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-3468'
Feb  7 14:46:26.691: INFO: stderr: ""
Feb  7 14:46:26.692: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:46:26.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3468" for this suite.
Feb  7 14:46:32.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:46:32.865: INFO: namespace kubectl-3468 deletion completed in 6.160442842s

• [SLOW TEST:17.071 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:46:32.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
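No STEP lines appear here because the pod is created in BeforeEach; a plausible sketch of a busybox pod whose command always fails, matching the spec title (name, image, and command are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: bin-false-pod                       # illustrative name
spec:
  containers:
  - name: bin-false
    image: docker.io/library/busybox:1.29   # assumed image
    command: ["/bin/false"]                 # exits non-zero every time, so the container crash-loops

The spec then verifies that such a perpetually failing pod can still be deleted cleanly.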
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:46:33.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2769" for this suite.
Feb  7 14:46:39.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:46:39.331: INFO: namespace kubelet-test-2769 deletion completed in 6.214378849s

• [SLOW TEST:6.466 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:46:39.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-e8444650-f720-4dcc-af6f-728a530bbe57 in namespace container-probe-3443
Feb  7 14:46:47.484: INFO: Started pod busybox-e8444650-f720-4dcc-af6f-728a530bbe57 in namespace container-probe-3443
STEP: checking the pod's current state and verifying that restartCount is present
Feb  7 14:46:47.489: INFO: Initial restart count of pod busybox-e8444650-f720-4dcc-af6f-728a530bbe57 is 0
Feb  7 14:47:45.772: INFO: Restart count of pod container-probe-3443/busybox-e8444650-f720-4dcc-af6f-728a530bbe57 is now 1 (58.2828432s elapsed)
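The probe pod spec is not echoed; a sketch of the usual shape of this test, in which the container creates /tmp/health, removes it after a delay, and the exec probe then fails and triggers the restart observed above (image, timings, and thresholds are assumptions; the pod name is from the log):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-e8444650-f720-4dcc-af6f-728a530bbe57
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29              # assumed image
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600"]   # assumed timing
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]                # the probe named in the spec title
      initialDelaySeconds: 15                          # assumed
      failureThreshold: 1                              # assumed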
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:47:45.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3443" for this suite.
Feb  7 14:47:51.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:47:52.020: INFO: namespace container-probe-3443 deletion completed in 6.166817666s

• [SLOW TEST:72.688 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:47:52.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  7 14:47:52.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb  7 14:47:52.335: INFO: stderr: ""
Feb  7 14:47:52.335: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:47:52.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8116" for this suite.
Feb  7 14:47:58.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:47:58.501: INFO: namespace kubectl-8116 deletion completed in 6.159716062s

• [SLOW TEST:6.481 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:47:58.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-20d56f1e-2fbc-4c48-9f09-5cf3fd067576
STEP: Creating a pod to test consume configMaps
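As with the projected variant earlier, only the configMap name is known from this run; a compact sketch of the plain configMap volume mapping being consumed (key and path are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example               # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29    # assumed image
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map-20d56f1e-2fbc-4c48-9f09-5cf3fd067576
      items:
      - key: data-2                          # assumed key
        path: path/to/data-2                 # mapped to a custom path inside the mount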
Feb  7 14:47:58.697: INFO: Waiting up to 5m0s for pod "pod-configmaps-3617b2bb-68f6-4bf9-90a4-88c58fb9e304" in namespace "configmap-3499" to be "success or failure"
Feb  7 14:47:58.709: INFO: Pod "pod-configmaps-3617b2bb-68f6-4bf9-90a4-88c58fb9e304": Phase="Pending", Reason="", readiness=false. Elapsed: 12.428442ms
Feb  7 14:48:00.714: INFO: Pod "pod-configmaps-3617b2bb-68f6-4bf9-90a4-88c58fb9e304": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017662632s
Feb  7 14:48:02.723: INFO: Pod "pod-configmaps-3617b2bb-68f6-4bf9-90a4-88c58fb9e304": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026886996s
Feb  7 14:48:04.736: INFO: Pod "pod-configmaps-3617b2bb-68f6-4bf9-90a4-88c58fb9e304": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039677901s
Feb  7 14:48:06.750: INFO: Pod "pod-configmaps-3617b2bb-68f6-4bf9-90a4-88c58fb9e304": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053579767s
Feb  7 14:48:08.762: INFO: Pod "pod-configmaps-3617b2bb-68f6-4bf9-90a4-88c58fb9e304": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.064941296s
STEP: Saw pod success
Feb  7 14:48:08.762: INFO: Pod "pod-configmaps-3617b2bb-68f6-4bf9-90a4-88c58fb9e304" satisfied condition "success or failure"
Feb  7 14:48:08.767: INFO: Trying to get logs from node iruya-node pod pod-configmaps-3617b2bb-68f6-4bf9-90a4-88c58fb9e304 container configmap-volume-test: 
STEP: delete the pod
Feb  7 14:48:08.911: INFO: Waiting for pod pod-configmaps-3617b2bb-68f6-4bf9-90a4-88c58fb9e304 to disappear
Feb  7 14:48:08.920: INFO: Pod pod-configmaps-3617b2bb-68f6-4bf9-90a4-88c58fb9e304 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:48:08.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3499" for this suite.
Feb  7 14:48:14.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:48:15.050: INFO: namespace configmap-3499 deletion completed in 6.123909184s

• [SLOW TEST:16.548 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:48:15.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
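The env-var wiring is the interesting part here; a minimal sketch (the container name dapi-container matches the log, everything else is assumed):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example     # illustrative; the real name is generated above
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29   # assumed image
    command: ["sh", "-c", "env"]            # dump env so the test can look for HOST_IP (assumed)
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP          # the downward API field that exposes the host IP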
Feb  7 14:48:15.140: INFO: Waiting up to 5m0s for pod "downward-api-55af8e1e-45dc-4742-be23-f1795022d866" in namespace "downward-api-4121" to be "success or failure"
Feb  7 14:48:15.165: INFO: Pod "downward-api-55af8e1e-45dc-4742-be23-f1795022d866": Phase="Pending", Reason="", readiness=false. Elapsed: 25.080088ms
Feb  7 14:48:17.170: INFO: Pod "downward-api-55af8e1e-45dc-4742-be23-f1795022d866": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030239544s
Feb  7 14:48:19.181: INFO: Pod "downward-api-55af8e1e-45dc-4742-be23-f1795022d866": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040997432s
Feb  7 14:48:21.193: INFO: Pod "downward-api-55af8e1e-45dc-4742-be23-f1795022d866": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053179953s
Feb  7 14:48:23.200: INFO: Pod "downward-api-55af8e1e-45dc-4742-be23-f1795022d866": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060074495s
STEP: Saw pod success
Feb  7 14:48:23.200: INFO: Pod "downward-api-55af8e1e-45dc-4742-be23-f1795022d866" satisfied condition "success or failure"
Feb  7 14:48:23.203: INFO: Trying to get logs from node iruya-node pod downward-api-55af8e1e-45dc-4742-be23-f1795022d866 container dapi-container: 
STEP: delete the pod
Feb  7 14:48:23.345: INFO: Waiting for pod downward-api-55af8e1e-45dc-4742-be23-f1795022d866 to disappear
Feb  7 14:48:23.357: INFO: Pod downward-api-55af8e1e-45dc-4742-be23-f1795022d866 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:48:23.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4121" for this suite.
Feb  7 14:48:29.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:48:29.537: INFO: namespace downward-api-4121 deletion completed in 6.175218491s

• [SLOW TEST:14.487 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:48:29.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
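Overriding "docker cmd" maps to the pod-spec args field: args replaces the image's CMD while leaving its ENTRYPOINT intact. A minimal sketch (image and argument values are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29   # assumed image
    args: ["echo", "override", "arguments"] # replaces the image's default CMD (assumed values)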
Feb  7 14:48:29.615: INFO: Waiting up to 5m0s for pod "client-containers-b969ff2b-8e76-40c5-8eeb-e30ea8b896d8" in namespace "containers-1291" to be "success or failure"
Feb  7 14:48:29.620: INFO: Pod "client-containers-b969ff2b-8e76-40c5-8eeb-e30ea8b896d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.876671ms
Feb  7 14:48:31.626: INFO: Pod "client-containers-b969ff2b-8e76-40c5-8eeb-e30ea8b896d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010948113s
Feb  7 14:48:33.688: INFO: Pod "client-containers-b969ff2b-8e76-40c5-8eeb-e30ea8b896d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072440707s
Feb  7 14:48:35.695: INFO: Pod "client-containers-b969ff2b-8e76-40c5-8eeb-e30ea8b896d8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079267068s
Feb  7 14:48:37.704: INFO: Pod "client-containers-b969ff2b-8e76-40c5-8eeb-e30ea8b896d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.088319623s
STEP: Saw pod success
Feb  7 14:48:37.704: INFO: Pod "client-containers-b969ff2b-8e76-40c5-8eeb-e30ea8b896d8" satisfied condition "success or failure"
Feb  7 14:48:37.708: INFO: Trying to get logs from node iruya-node pod client-containers-b969ff2b-8e76-40c5-8eeb-e30ea8b896d8 container test-container: 
STEP: delete the pod
Feb  7 14:48:37.767: INFO: Waiting for pod client-containers-b969ff2b-8e76-40c5-8eeb-e30ea8b896d8 to disappear
Feb  7 14:48:37.772: INFO: Pod client-containers-b969ff2b-8e76-40c5-8eeb-e30ea8b896d8 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:48:37.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1291" for this suite.
Feb  7 14:48:43.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:48:43.977: INFO: namespace containers-1291 deletion completed in 6.199746515s

• [SLOW TEST:14.439 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:48:43.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-118.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-118.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
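The probe pod itself is not echoed; in rough terms it runs each command block above in its own container, writing results into a shared emptyDir that the test later reads back. A sketch under assumptions (image and container layout are not from this run):

apiVersion: v1
kind: Pod
metadata:
  name: dns-test-example   # the real name is generated, e.g. dns-test-f47785c3-... below
spec:
  containers:
  - name: querier
    image: gcr.io/kubernetes-e2e-test-images/dnsutils:1.1   # assumed; any image with dig would do
    command: ["sh", "-c", "...one of the probe loops printed above..."]
    volumeMounts:
    - name: results
      mountPath: /results
  volumes:
  - name: results
    emptyDir: {}           # holds the OK marker files the loops write into /results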
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  7 14:48:56.165: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-118/dns-test-f47785c3-fba8-4cfa-8bb9-67b6dd40178d: the server could not find the requested resource (get pods dns-test-f47785c3-fba8-4cfa-8bb9-67b6dd40178d)
Feb  7 14:48:56.176: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-118/dns-test-f47785c3-fba8-4cfa-8bb9-67b6dd40178d: the server could not find the requested resource (get pods dns-test-f47785c3-fba8-4cfa-8bb9-67b6dd40178d)
Feb  7 14:48:56.185: INFO: Unable to read wheezy_udp@PodARecord from pod dns-118/dns-test-f47785c3-fba8-4cfa-8bb9-67b6dd40178d: the server could not find the requested resource (get pods dns-test-f47785c3-fba8-4cfa-8bb9-67b6dd40178d)
Feb  7 14:48:56.195: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-118/dns-test-f47785c3-fba8-4cfa-8bb9-67b6dd40178d: the server could not find the requested resource (get pods dns-test-f47785c3-fba8-4cfa-8bb9-67b6dd40178d)
Feb  7 14:48:56.203: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-118/dns-test-f47785c3-fba8-4cfa-8bb9-67b6dd40178d: the server could not find the requested resource (get pods dns-test-f47785c3-fba8-4cfa-8bb9-67b6dd40178d)
Feb  7 14:48:56.212: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-118/dns-test-f47785c3-fba8-4cfa-8bb9-67b6dd40178d: the server could not find the requested resource (get pods dns-test-f47785c3-fba8-4cfa-8bb9-67b6dd40178d)
Feb  7 14:48:56.220: INFO: Unable to read jessie_udp@PodARecord from pod dns-118/dns-test-f47785c3-fba8-4cfa-8bb9-67b6dd40178d: the server could not find the requested resource (get pods dns-test-f47785c3-fba8-4cfa-8bb9-67b6dd40178d)
Feb  7 14:48:56.226: INFO: Unable to read jessie_tcp@PodARecord from pod dns-118/dns-test-f47785c3-fba8-4cfa-8bb9-67b6dd40178d: the server could not find the requested resource (get pods dns-test-f47785c3-fba8-4cfa-8bb9-67b6dd40178d)
Feb  7 14:48:56.226: INFO: Lookups using dns-118/dns-test-f47785c3-fba8-4cfa-8bb9-67b6dd40178d failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb  7 14:49:01.281: INFO: DNS probes using dns-118/dns-test-f47785c3-fba8-4cfa-8bb9-67b6dd40178d succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:49:01.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-118" for this suite.
Feb  7 14:49:09.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:49:09.535: INFO: namespace dns-118 deletion completed in 8.185271347s

• [SLOW TEST:25.558 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:49:09.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb  7 14:49:09.607: INFO: namespace kubectl-7388
Feb  7 14:49:09.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7388'
Feb  7 14:49:10.177: INFO: stderr: ""
Feb  7 14:49:10.178: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb  7 14:49:11.198: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 14:49:11.198: INFO: Found 0 / 1
Feb  7 14:49:12.188: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 14:49:12.188: INFO: Found 0 / 1
Feb  7 14:49:13.188: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 14:49:13.188: INFO: Found 0 / 1
Feb  7 14:49:14.190: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 14:49:14.190: INFO: Found 0 / 1
Feb  7 14:49:15.228: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 14:49:15.228: INFO: Found 0 / 1
Feb  7 14:49:16.238: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 14:49:16.238: INFO: Found 0 / 1
Feb  7 14:49:17.206: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 14:49:17.206: INFO: Found 1 / 1
Feb  7 14:49:17.206: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  7 14:49:17.216: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 14:49:17.217: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  7 14:49:17.217: INFO: wait on redis-master startup in kubectl-7388 
Feb  7 14:49:17.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ft2h7 redis-master --namespace=kubectl-7388'
Feb  7 14:49:17.409: INFO: stderr: ""
Feb  7 14:49:17.409: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 07 Feb 14:49:16.706 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 07 Feb 14:49:16.707 # Server started, Redis version 3.2.12\n1:M 07 Feb 14:49:16.707 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 07 Feb 14:49:16.707 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Feb  7 14:49:17.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7388'
Feb  7 14:49:17.544: INFO: stderr: ""
Feb  7 14:49:17.544: INFO: stdout: "service/rm2 exposed\n"
Feb  7 14:49:17.603: INFO: Service rm2 in namespace kubectl-7388 found.
STEP: exposing service
Feb  7 14:49:19.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7388'
Feb  7 14:49:19.816: INFO: stderr: ""
Feb  7 14:49:19.816: INFO: stdout: "service/rm3 exposed\n"
Feb  7 14:49:19.906: INFO: Service rm3 in namespace kubectl-7388 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:49:21.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7388" for this suite.
Feb  7 14:49:43.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:49:44.078: INFO: namespace kubectl-7388 deletion completed in 22.154057343s

• [SLOW TEST:34.542 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
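
Note: for anyone replaying this expose flow by hand, it reduces to an RC created from stdin plus two kubectl expose calls. A minimal sketch, assuming a scratch namespace "demo"; the manifest, image tag, and names below are illustrative stand-ins, not the suite's generated ones:

# Create a ReplicationController from stdin, as the suite does with 'create -f -'.
cat <<'EOF' | kubectl create -f - --namespace=demo
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis-master
        image: docker.io/library/redis:3.2
        ports:
        - containerPort: 6379
EOF

# Expose the RC as a service, then expose that service under a second name/port.
kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=demo
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=demo

# Both services share the app=redis endpoints on targetPort 6379.
kubectl get svc rm2 rm3 --namespace=demo
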
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:49:44.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-d9bs
STEP: Creating a pod to test atomic-volume-subpath
Feb  7 14:49:44.198: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-d9bs" in namespace "subpath-4394" to be "success or failure"
Feb  7 14:49:44.215: INFO: Pod "pod-subpath-test-projected-d9bs": Phase="Pending", Reason="", readiness=false. Elapsed: 16.434747ms
Feb  7 14:49:46.223: INFO: Pod "pod-subpath-test-projected-d9bs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025133009s
Feb  7 14:49:48.234: INFO: Pod "pod-subpath-test-projected-d9bs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035518851s
Feb  7 14:49:50.241: INFO: Pod "pod-subpath-test-projected-d9bs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043123312s
Feb  7 14:49:52.248: INFO: Pod "pod-subpath-test-projected-d9bs": Phase="Running", Reason="", readiness=true. Elapsed: 8.049556806s
Feb  7 14:49:54.255: INFO: Pod "pod-subpath-test-projected-d9bs": Phase="Running", Reason="", readiness=true. Elapsed: 10.056864405s
Feb  7 14:49:56.265: INFO: Pod "pod-subpath-test-projected-d9bs": Phase="Running", Reason="", readiness=true. Elapsed: 12.066969439s
Feb  7 14:49:58.275: INFO: Pod "pod-subpath-test-projected-d9bs": Phase="Running", Reason="", readiness=true. Elapsed: 14.076316715s
Feb  7 14:50:00.282: INFO: Pod "pod-subpath-test-projected-d9bs": Phase="Running", Reason="", readiness=true. Elapsed: 16.083552491s
Feb  7 14:50:02.288: INFO: Pod "pod-subpath-test-projected-d9bs": Phase="Running", Reason="", readiness=true. Elapsed: 18.0892852s
Feb  7 14:50:04.295: INFO: Pod "pod-subpath-test-projected-d9bs": Phase="Running", Reason="", readiness=true. Elapsed: 20.096958639s
Feb  7 14:50:06.305: INFO: Pod "pod-subpath-test-projected-d9bs": Phase="Running", Reason="", readiness=true. Elapsed: 22.106894621s
Feb  7 14:50:08.320: INFO: Pod "pod-subpath-test-projected-d9bs": Phase="Running", Reason="", readiness=true. Elapsed: 24.121939584s
Feb  7 14:50:10.331: INFO: Pod "pod-subpath-test-projected-d9bs": Phase="Running", Reason="", readiness=true. Elapsed: 26.132753956s
Feb  7 14:50:12.351: INFO: Pod "pod-subpath-test-projected-d9bs": Phase="Running", Reason="", readiness=true. Elapsed: 28.152480512s
Feb  7 14:50:14.363: INFO: Pod "pod-subpath-test-projected-d9bs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.164999264s
STEP: Saw pod success
Feb  7 14:50:14.363: INFO: Pod "pod-subpath-test-projected-d9bs" satisfied condition "success or failure"
Feb  7 14:50:14.368: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-d9bs container test-container-subpath-projected-d9bs: 
STEP: delete the pod
Feb  7 14:50:14.573: INFO: Waiting for pod pod-subpath-test-projected-d9bs to disappear
Feb  7 14:50:14.582: INFO: Pod pod-subpath-test-projected-d9bs no longer exists
STEP: Deleting pod pod-subpath-test-projected-d9bs
Feb  7 14:50:14.582: INFO: Deleting pod "pod-subpath-test-projected-d9bs" in namespace "subpath-4394"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:50:14.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4394" for this suite.
Feb  7 14:50:20.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:50:20.757: INFO: namespace subpath-4394 deletion completed in 6.165708914s

• [SLOW TEST:36.679 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
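
Note: "atomic writer" volumes (configmap, secret, downwardAPI, projected) are populated via an atomic symlink swap, and this test drives a pod that mounts one projected file through subPath. A rough by-hand equivalent; the namespace "demo", the configmap, and all names are illustrative assumptions:

kubectl create configmap subpath-data --from-literal=key=contents --namespace=demo

cat <<'EOF' | kubectl create -f - --namespace=demo
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-projected
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /mnt/key"]
    volumeMounts:
    - name: projected-vol
      mountPath: /mnt/key
      subPath: key          # mount a single file out of the projected volume
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: subpath-data
EOF

kubectl logs pod-subpath-projected --namespace=demo   # prints: contents
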
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:50:20.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb  7 14:50:20.930: INFO: Number of nodes with available pods: 0
Feb  7 14:50:20.930: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:50:22.343: INFO: Number of nodes with available pods: 0
Feb  7 14:50:22.343: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:50:22.944: INFO: Number of nodes with available pods: 0
Feb  7 14:50:22.944: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:50:23.947: INFO: Number of nodes with available pods: 0
Feb  7 14:50:23.947: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:50:24.957: INFO: Number of nodes with available pods: 0
Feb  7 14:50:24.957: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:50:27.505: INFO: Number of nodes with available pods: 0
Feb  7 14:50:27.505: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:50:27.939: INFO: Number of nodes with available pods: 0
Feb  7 14:50:27.939: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:50:28.954: INFO: Number of nodes with available pods: 0
Feb  7 14:50:28.954: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:50:29.951: INFO: Number of nodes with available pods: 1
Feb  7 14:50:29.951: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:50:30.942: INFO: Number of nodes with available pods: 1
Feb  7 14:50:30.942: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:50:31.941: INFO: Number of nodes with available pods: 2
Feb  7 14:50:31.941: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb  7 14:50:31.986: INFO: Number of nodes with available pods: 1
Feb  7 14:50:31.986: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:50:32.995: INFO: Number of nodes with available pods: 1
Feb  7 14:50:32.995: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:50:34.022: INFO: Number of nodes with available pods: 1
Feb  7 14:50:34.022: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:50:35.012: INFO: Number of nodes with available pods: 1
Feb  7 14:50:35.012: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:50:35.999: INFO: Number of nodes with available pods: 1
Feb  7 14:50:35.999: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:50:37.004: INFO: Number of nodes with available pods: 1
Feb  7 14:50:37.004: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:50:37.997: INFO: Number of nodes with available pods: 1
Feb  7 14:50:37.997: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:50:39.008: INFO: Number of nodes with available pods: 1
Feb  7 14:50:39.008: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:50:39.996: INFO: Number of nodes with available pods: 1
Feb  7 14:50:39.996: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:50:41.001: INFO: Number of nodes with available pods: 1
Feb  7 14:50:41.001: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:50:42.000: INFO: Number of nodes with available pods: 1
Feb  7 14:50:42.000: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:50:43.007: INFO: Number of nodes with available pods: 1
Feb  7 14:50:43.007: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:50:44.005: INFO: Number of nodes with available pods: 1
Feb  7 14:50:44.005: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:50:45.009: INFO: Number of nodes with available pods: 1
Feb  7 14:50:45.009: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:50:46.006: INFO: Number of nodes with available pods: 1
Feb  7 14:50:46.006: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:50:47.022: INFO: Number of nodes with available pods: 1
Feb  7 14:50:47.022: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:50:47.995: INFO: Number of nodes with available pods: 1
Feb  7 14:50:47.995: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:50:49.001: INFO: Number of nodes with available pods: 1
Feb  7 14:50:49.001: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:50:50.471: INFO: Number of nodes with available pods: 1
Feb  7 14:50:50.471: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:50:51.048: INFO: Number of nodes with available pods: 1
Feb  7 14:50:51.048: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:50:52.057: INFO: Number of nodes with available pods: 1
Feb  7 14:50:52.057: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:50:52.998: INFO: Number of nodes with available pods: 1
Feb  7 14:50:52.998: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:50:54.031: INFO: Number of nodes with available pods: 1
Feb  7 14:50:54.031: INFO: Node iruya-node is running more than one daemon pod
Feb  7 14:50:55.014: INFO: Number of nodes with available pods: 2
Feb  7 14:50:55.014: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3092, will wait for the garbage collector to delete the pods
Feb  7 14:50:55.085: INFO: Deleting DaemonSet.extensions daemon-set took: 14.390925ms
Feb  7 14:50:55.385: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.458876ms
Feb  7 14:51:07.901: INFO: Number of nodes with available pods: 0
Feb  7 14:51:07.901: INFO: Number of running nodes: 0, number of available pods: 0
Feb  7 14:51:07.908: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3092/daemonsets","resourceVersion":"23458864"},"items":null}

Feb  7 14:51:07.912: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3092/pods","resourceVersion":"23458864"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:51:07.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3092" for this suite.
Feb  7 14:51:14.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:51:14.089: INFO: namespace daemonsets-3092 deletion completed in 6.159608445s

• [SLOW TEST:53.330 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
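
Note: "run and stop simple daemon" is two assertions: one pod per schedulable node, and a deleted daemon pod gets revived by the controller. A hand-run sketch; the manifest, image, and namespace "demo" are illustrative:

cat <<'EOF' | kubectl create -f - --namespace=demo
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "sleep 3600"]
EOF

# Wait until every schedulable node runs one daemon pod.
kubectl rollout status ds/daemon-set --namespace=demo

# Kill one daemon pod; the controller schedules a replacement on the same node.
POD=$(kubectl get pods -l app=daemon-set --namespace=demo -o jsonpath='{.items[0].metadata.name}')
kubectl delete pod "$POD" --namespace=demo
kubectl get pods -l app=daemon-set --namespace=demo -w
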
------------------------------
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:51:14.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0207 14:51:15.232237       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  7 14:51:15.232: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:51:15.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6270" for this suite.
Feb  7 14:51:21.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:51:21.443: INFO: namespace gc-6270 deletion completed in 6.206205584s

• [SLOW TEST:7.354 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
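
Note: "not orphaning" is the default deletion mode, so the behaviour under test reduces to deleting a Deployment and watching its ReplicaSet and pods disappear with it. A sketch with illustrative names in a scratch namespace "demo" (on clients of this vintage the explicit flag is the boolean --cascade=true; newer kubectl spells it --cascade=background|foreground|orphan):

kubectl create deployment gc-demo --image=docker.io/library/nginx:1.17 --namespace=demo
kubectl get rs --namespace=demo        # the Deployment-owned ReplicaSet appears

# Default (non-orphaning) delete: dependents carry ownerReferences, so the
# garbage collector removes the ReplicaSet and its pods as well.
kubectl delete deployment gc-demo --namespace=demo
kubectl get rs,pods --namespace=demo   # converges to empty
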
------------------------------
SSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:51:21.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Feb  7 14:51:22.092: INFO: created pod pod-service-account-defaultsa
Feb  7 14:51:22.092: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb  7 14:51:22.103: INFO: created pod pod-service-account-mountsa
Feb  7 14:51:22.103: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb  7 14:51:22.136: INFO: created pod pod-service-account-nomountsa
Feb  7 14:51:22.136: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb  7 14:51:22.226: INFO: created pod pod-service-account-defaultsa-mountspec
Feb  7 14:51:22.227: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb  7 14:51:22.244: INFO: created pod pod-service-account-mountsa-mountspec
Feb  7 14:51:22.244: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb  7 14:51:22.373: INFO: created pod pod-service-account-nomountsa-mountspec
Feb  7 14:51:22.373: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb  7 14:51:22.405: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb  7 14:51:22.405: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb  7 14:51:22.577: INFO: created pod pod-service-account-mountsa-nomountspec
Feb  7 14:51:22.577: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb  7 14:51:23.537: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb  7 14:51:23.537: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:51:23.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9676" for this suite.
Feb  7 14:51:53.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:51:53.667: INFO: namespace svcaccounts-9676 deletion completed in 30.12222607s

• [SLOW TEST:32.224 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
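
Note: the nine pods above enumerate the automount matrix: the ServiceAccount's automountServiceAccountToken, the pod spec's automountServiceAccountToken, and the rule that the pod-level field wins when both are set. A minimal sketch of the opt-out case; names and the namespace "demo" are illustrative:

cat <<'EOF' | kubectl create -f - --namespace=demo
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-nomountsa
spec:
  serviceAccountName: nomount-sa
  # No pod-level override, so the SA's "false" applies: no token volume mount.
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
EOF

# Expect no serviceaccount token among the volume mounts.
kubectl get pod pod-nomountsa --namespace=demo -o jsonpath='{.spec.containers[0].volumeMounts}'
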
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:51:53.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  7 14:51:53.858: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0f1d17fd-df0f-4f3b-95db-5b16f11637e7" in namespace "projected-578" to be "success or failure"
Feb  7 14:51:53.881: INFO: Pod "downwardapi-volume-0f1d17fd-df0f-4f3b-95db-5b16f11637e7": Phase="Pending", Reason="", readiness=false. Elapsed: 22.817822ms
Feb  7 14:51:55.890: INFO: Pod "downwardapi-volume-0f1d17fd-df0f-4f3b-95db-5b16f11637e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031473955s
Feb  7 14:51:57.897: INFO: Pod "downwardapi-volume-0f1d17fd-df0f-4f3b-95db-5b16f11637e7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038391166s
Feb  7 14:51:59.911: INFO: Pod "downwardapi-volume-0f1d17fd-df0f-4f3b-95db-5b16f11637e7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052195587s
Feb  7 14:52:01.923: INFO: Pod "downwardapi-volume-0f1d17fd-df0f-4f3b-95db-5b16f11637e7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064774505s
Feb  7 14:52:03.993: INFO: Pod "downwardapi-volume-0f1d17fd-df0f-4f3b-95db-5b16f11637e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.134480307s
STEP: Saw pod success
Feb  7 14:52:03.993: INFO: Pod "downwardapi-volume-0f1d17fd-df0f-4f3b-95db-5b16f11637e7" satisfied condition "success or failure"
Feb  7 14:52:03.997: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-0f1d17fd-df0f-4f3b-95db-5b16f11637e7 container client-container: 
STEP: delete the pod
Feb  7 14:52:04.381: INFO: Waiting for pod downwardapi-volume-0f1d17fd-df0f-4f3b-95db-5b16f11637e7 to disappear
Feb  7 14:52:04.392: INFO: Pod downwardapi-volume-0f1d17fd-df0f-4f3b-95db-5b16f11637e7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:52:04.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-578" for this suite.
Feb  7 14:52:10.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:52:10.524: INFO: namespace projected-578 deletion completed in 6.125099789s

• [SLOW TEST:16.856 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
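
Note: "podname only" projects a single downward API field, metadata.name, into a file the container then cats. A minimal sketch; the namespace "demo" and all names are illustrative:

cat <<'EOF' | kubectl create -f - --namespace=demo
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF

kubectl logs downwardapi-podname --namespace=demo   # prints: downwardapi-podname
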
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:52:10.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb  7 14:52:10.710: INFO: Waiting up to 5m0s for pod "downward-api-8ffd47a3-063e-4742-a99b-528d36e1a844" in namespace "downward-api-188" to be "success or failure"
Feb  7 14:52:10.726: INFO: Pod "downward-api-8ffd47a3-063e-4742-a99b-528d36e1a844": Phase="Pending", Reason="", readiness=false. Elapsed: 15.354874ms
Feb  7 14:52:13.185: INFO: Pod "downward-api-8ffd47a3-063e-4742-a99b-528d36e1a844": Phase="Pending", Reason="", readiness=false. Elapsed: 2.474767444s
Feb  7 14:52:15.191: INFO: Pod "downward-api-8ffd47a3-063e-4742-a99b-528d36e1a844": Phase="Pending", Reason="", readiness=false. Elapsed: 4.48077368s
Feb  7 14:52:17.201: INFO: Pod "downward-api-8ffd47a3-063e-4742-a99b-528d36e1a844": Phase="Pending", Reason="", readiness=false. Elapsed: 6.49065318s
Feb  7 14:52:19.213: INFO: Pod "downward-api-8ffd47a3-063e-4742-a99b-528d36e1a844": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.502288034s
STEP: Saw pod success
Feb  7 14:52:19.213: INFO: Pod "downward-api-8ffd47a3-063e-4742-a99b-528d36e1a844" satisfied condition "success or failure"
Feb  7 14:52:19.218: INFO: Trying to get logs from node iruya-node pod downward-api-8ffd47a3-063e-4742-a99b-528d36e1a844 container dapi-container: 
STEP: delete the pod
Feb  7 14:52:19.329: INFO: Waiting for pod downward-api-8ffd47a3-063e-4742-a99b-528d36e1a844 to disappear
Feb  7 14:52:19.336: INFO: Pod downward-api-8ffd47a3-063e-4742-a99b-528d36e1a844 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:52:19.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-188" for this suite.
Feb  7 14:52:25.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:52:25.570: INFO: namespace downward-api-188 deletion completed in 6.230032947s

• [SLOW TEST:15.046 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
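
Note: the point of this test is the fallback: when a container declares no limits, resourceFieldRef env vars resolve to the node's allocatable capacity instead. A sketch; the namespace "demo" and names are illustrative:

cat <<'EOF' | kubectl create -f - --namespace=demo
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-defaults
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo cpu=$CPU_LIMIT mem=$MEMORY_LIMIT"]
    # No resources.limits set, so both values default to node allocatable.
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF

kubectl logs downward-api-defaults --namespace=demo
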
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:52:25.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  7 14:52:25.650: INFO: Waiting up to 5m0s for pod "downwardapi-volume-847ef552-d035-40b9-927f-9e650ca0b438" in namespace "downward-api-6715" to be "success or failure"
Feb  7 14:52:25.711: INFO: Pod "downwardapi-volume-847ef552-d035-40b9-927f-9e650ca0b438": Phase="Pending", Reason="", readiness=false. Elapsed: 61.416151ms
Feb  7 14:52:27.720: INFO: Pod "downwardapi-volume-847ef552-d035-40b9-927f-9e650ca0b438": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069653425s
Feb  7 14:52:29.741: INFO: Pod "downwardapi-volume-847ef552-d035-40b9-927f-9e650ca0b438": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090951647s
Feb  7 14:52:31.755: INFO: Pod "downwardapi-volume-847ef552-d035-40b9-927f-9e650ca0b438": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104730101s
Feb  7 14:52:33.767: INFO: Pod "downwardapi-volume-847ef552-d035-40b9-927f-9e650ca0b438": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.117577925s
STEP: Saw pod success
Feb  7 14:52:33.767: INFO: Pod "downwardapi-volume-847ef552-d035-40b9-927f-9e650ca0b438" satisfied condition "success or failure"
Feb  7 14:52:33.786: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-847ef552-d035-40b9-927f-9e650ca0b438 container client-container: 
STEP: delete the pod
Feb  7 14:52:33.913: INFO: Waiting for pod downwardapi-volume-847ef552-d035-40b9-927f-9e650ca0b438 to disappear
Feb  7 14:52:33.928: INFO: Pod downwardapi-volume-847ef552-d035-40b9-927f-9e650ca0b438 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:52:33.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6715" for this suite.
Feb  7 14:52:40.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:52:40.156: INFO: namespace downward-api-6715 deletion completed in 6.210296514s

• [SLOW TEST:14.585 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
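
Note: this is the same "podname only" check as the projected variant sketched earlier, but through a plain downwardAPI volume; only the volumes stanza differs. A compact sketch with illustrative names in namespace "demo":

cat <<'EOF' | kubectl create -f - --namespace=demo
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-podname
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:              # direct downwardAPI volume, no projected wrapper
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF

kubectl logs downwardapi-volume-podname --namespace=demo
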
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:52:40.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:52:45.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1764" for this suite.
Feb  7 14:52:51.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:52:51.973: INFO: namespace watch-1764 deletion completed in 6.210949972s

• [SLOW TEST:11.818 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
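
Note: the ordering guarantee under test can be approximated from the CLI: two independent watches over the same resource should observe events in the same order. A rough, timing-sensitive sketch; the namespace "demo" and file names are illustrative:

NS=demo
kubectl get configmaps --namespace=$NS --watch-only --no-headers \
  -o custom-columns=RV:.metadata.resourceVersion > watch-a.txt & PID_A=$!
kubectl get configmaps --namespace=$NS --watch-only --no-headers \
  -o custom-columns=RV:.metadata.resourceVersion > watch-b.txt & PID_B=$!

# Produce a burst of watch events.
for i in $(seq 1 20); do
  kubectl create configmap cm-$i --namespace=$NS --from-literal=k=$i
done

sleep 5
kill $PID_A $PID_B

# Both watchers should have recorded the same resourceVersion sequence.
diff watch-a.txt watch-b.txt && echo "same order"
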
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:52:51.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb  7 14:53:02.747: INFO: Successfully updated pod "labelsupdate8d3994b1-2c31-4b10-b59c-97710caf447f"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:53:04.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6029" for this suite.
Feb  7 14:53:42.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:53:42.950: INFO: namespace projected-6029 deletion completed in 38.118447663s

• [SLOW TEST:50.977 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
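
Note: here the downwardAPI file is expected to change in place after the pod's labels are mutated; the kubelet rewrites the projected file on its next sync. A sketch; the namespace "demo" and names are illustrative:

cat <<'EOF' | kubectl create -f - --namespace=demo
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate
  labels:
    key: value1
spec:
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF

kubectl exec labelsupdate --namespace=demo -- cat /etc/podinfo/labels  # key="value1"
kubectl label pod labelsupdate key=value2 --overwrite --namespace=demo
# After the kubelet's sync period the file reflects the new value.
kubectl exec labelsupdate --namespace=demo -- cat /etc/podinfo/labels  # key="value2"
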
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:53:42.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:53:51.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3962" for this suite.
Feb  7 14:54:39.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:54:39.380: INFO: namespace kubelet-test-3962 deletion completed in 48.163941468s

• [SLOW TEST:56.429 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
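
Note: the read-only check is driven entirely by the container securityContext. A minimal sketch; the namespace "demo" and names are illustrative:

cat <<'EOF' | kubectl create -f - --namespace=demo
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo test > /file || echo 'write refused, as expected'"]
    securityContext:
      readOnlyRootFilesystem: true
EOF

kubectl logs busybox-readonly --namespace=demo
# sh: can't create /file: Read-only file system
# write refused, as expected
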
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:54:39.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Feb  7 14:54:39.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1623'
Feb  7 14:54:41.576: INFO: stderr: ""
Feb  7 14:54:41.576: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  7 14:54:41.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1623'
Feb  7 14:54:41.724: INFO: stderr: ""
Feb  7 14:54:41.724: INFO: stdout: "update-demo-nautilus-dbsfm update-demo-nautilus-lgh8l "
Feb  7 14:54:41.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dbsfm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1623'
Feb  7 14:54:41.841: INFO: stderr: ""
Feb  7 14:54:41.841: INFO: stdout: ""
Feb  7 14:54:41.841: INFO: update-demo-nautilus-dbsfm is created but not running
Feb  7 14:54:46.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1623'
Feb  7 14:54:46.993: INFO: stderr: ""
Feb  7 14:54:46.993: INFO: stdout: "update-demo-nautilus-dbsfm update-demo-nautilus-lgh8l "
Feb  7 14:54:46.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dbsfm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1623'
Feb  7 14:54:47.126: INFO: stderr: ""
Feb  7 14:54:47.126: INFO: stdout: ""
Feb  7 14:54:47.126: INFO: update-demo-nautilus-dbsfm is created but not running
Feb  7 14:54:52.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1623'
Feb  7 14:54:52.300: INFO: stderr: ""
Feb  7 14:54:52.300: INFO: stdout: "update-demo-nautilus-dbsfm update-demo-nautilus-lgh8l "
Feb  7 14:54:52.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dbsfm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1623'
Feb  7 14:54:52.475: INFO: stderr: ""
Feb  7 14:54:52.475: INFO: stdout: ""
Feb  7 14:54:52.475: INFO: update-demo-nautilus-dbsfm is created but not running
Feb  7 14:54:57.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1623'
Feb  7 14:54:57.635: INFO: stderr: ""
Feb  7 14:54:57.635: INFO: stdout: "update-demo-nautilus-dbsfm update-demo-nautilus-lgh8l "
Feb  7 14:54:57.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dbsfm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1623'
Feb  7 14:54:57.818: INFO: stderr: ""
Feb  7 14:54:57.818: INFO: stdout: "true"
Feb  7 14:54:57.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dbsfm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1623'
Feb  7 14:54:57.903: INFO: stderr: ""
Feb  7 14:54:57.903: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  7 14:54:57.903: INFO: validating pod update-demo-nautilus-dbsfm
Feb  7 14:54:57.915: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  7 14:54:57.915: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  7 14:54:57.915: INFO: update-demo-nautilus-dbsfm is verified up and running
Feb  7 14:54:57.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lgh8l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1623'
Feb  7 14:54:58.026: INFO: stderr: ""
Feb  7 14:54:58.026: INFO: stdout: "true"
Feb  7 14:54:58.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lgh8l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1623'
Feb  7 14:54:58.133: INFO: stderr: ""
Feb  7 14:54:58.133: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  7 14:54:58.133: INFO: validating pod update-demo-nautilus-lgh8l
Feb  7 14:54:58.167: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  7 14:54:58.167: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  7 14:54:58.167: INFO: update-demo-nautilus-lgh8l is verified up and running
STEP: scaling down the replication controller
Feb  7 14:54:58.171: INFO: scanned /root for discovery docs: 
Feb  7 14:54:58.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1623'
Feb  7 14:54:59.347: INFO: stderr: ""
Feb  7 14:54:59.347: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  7 14:54:59.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1623'
Feb  7 14:54:59.530: INFO: stderr: ""
Feb  7 14:54:59.530: INFO: stdout: "update-demo-nautilus-dbsfm update-demo-nautilus-lgh8l "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb  7 14:55:04.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1623'
Feb  7 14:55:04.634: INFO: stderr: ""
Feb  7 14:55:04.634: INFO: stdout: "update-demo-nautilus-dbsfm update-demo-nautilus-lgh8l "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb  7 14:55:09.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1623'
Feb  7 14:55:09.823: INFO: stderr: ""
Feb  7 14:55:09.823: INFO: stdout: "update-demo-nautilus-lgh8l "
Feb  7 14:55:09.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lgh8l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1623'
Feb  7 14:55:09.938: INFO: stderr: ""
Feb  7 14:55:09.938: INFO: stdout: "true"
Feb  7 14:55:09.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lgh8l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1623'
Feb  7 14:55:10.055: INFO: stderr: ""
Feb  7 14:55:10.055: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  7 14:55:10.055: INFO: validating pod update-demo-nautilus-lgh8l
Feb  7 14:55:10.067: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  7 14:55:10.067: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  7 14:55:10.067: INFO: update-demo-nautilus-lgh8l is verified up and running
STEP: scaling up the replication controller
Feb  7 14:55:10.069: INFO: scanned /root for discovery docs: 
Feb  7 14:55:10.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-1623'
Feb  7 14:55:11.361: INFO: stderr: ""
Feb  7 14:55:11.361: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  7 14:55:11.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1623'
Feb  7 14:55:11.520: INFO: stderr: ""
Feb  7 14:55:11.520: INFO: stdout: "update-demo-nautilus-95czb update-demo-nautilus-lgh8l "
Feb  7 14:55:11.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-95czb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1623'
Feb  7 14:55:11.601: INFO: stderr: ""
Feb  7 14:55:11.601: INFO: stdout: ""
Feb  7 14:55:11.601: INFO: update-demo-nautilus-95czb is created but not running
Feb  7 14:55:16.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1623'
Feb  7 14:55:16.702: INFO: stderr: ""
Feb  7 14:55:16.702: INFO: stdout: "update-demo-nautilus-95czb update-demo-nautilus-lgh8l "
Feb  7 14:55:16.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-95czb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1623'
Feb  7 14:55:16.777: INFO: stderr: ""
Feb  7 14:55:16.777: INFO: stdout: ""
Feb  7 14:55:16.777: INFO: update-demo-nautilus-95czb is created but not running
Feb  7 14:55:21.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1623'
Feb  7 14:55:21.939: INFO: stderr: ""
Feb  7 14:55:21.939: INFO: stdout: "update-demo-nautilus-95czb update-demo-nautilus-lgh8l "
Feb  7 14:55:21.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-95czb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1623'
Feb  7 14:55:22.043: INFO: stderr: ""
Feb  7 14:55:22.043: INFO: stdout: "true"
Feb  7 14:55:22.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-95czb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1623'
Feb  7 14:55:22.195: INFO: stderr: ""
Feb  7 14:55:22.195: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  7 14:55:22.195: INFO: validating pod update-demo-nautilus-95czb
Feb  7 14:55:22.211: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  7 14:55:22.211: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  7 14:55:22.211: INFO: update-demo-nautilus-95czb is verified up and running
Feb  7 14:55:22.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lgh8l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1623'
Feb  7 14:55:22.333: INFO: stderr: ""
Feb  7 14:55:22.333: INFO: stdout: "true"
Feb  7 14:55:22.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lgh8l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1623'
Feb  7 14:55:22.419: INFO: stderr: ""
Feb  7 14:55:22.419: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  7 14:55:22.419: INFO: validating pod update-demo-nautilus-lgh8l
Feb  7 14:55:22.434: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  7 14:55:22.434: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  7 14:55:22.434: INFO: update-demo-nautilus-lgh8l is verified up and running
STEP: using delete to clean up resources
Feb  7 14:55:22.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1623'
Feb  7 14:55:22.528: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  7 14:55:22.528: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb  7 14:55:22.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1623'
Feb  7 14:55:22.974: INFO: stderr: "No resources found.\n"
Feb  7 14:55:22.974: INFO: stdout: ""
Feb  7 14:55:22.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1623 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  7 14:55:23.089: INFO: stderr: ""
Feb  7 14:55:23.089: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:55:23.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1623" for this suite.
Feb  7 14:55:46.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:55:46.191: INFO: namespace kubectl-1623 deletion completed in 23.08789979s

• [SLOW TEST:66.810 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
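
Note: stripped of the polling, the scale exercise is two commands; the suite's Go-template listing of pod names can be reused verbatim. This assumes an update-demo RC like the test's manifest already exists in an illustrative namespace "demo":

kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=demo
# Poll pod names the way the suite does, until a single name remains.
kubectl get pods -l name=update-demo --namespace=demo \
  -o template --template='{{range .items}}{{.metadata.name}} {{end}}'

kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=demo
kubectl get pods -l name=update-demo --namespace=demo \
  -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
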
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:55:46.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb  7 14:55:56.929: INFO: Successfully updated pod "pod-update-activedeadlineseconds-d8280022-bd13-4cdf-9455-ffd9eb1f2201"
Feb  7 14:55:56.929: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-d8280022-bd13-4cdf-9455-ffd9eb1f2201" in namespace "pods-2531" to be "terminated due to deadline exceeded"
Feb  7 14:55:57.255: INFO: Pod "pod-update-activedeadlineseconds-d8280022-bd13-4cdf-9455-ffd9eb1f2201": Phase="Running", Reason="", readiness=true. Elapsed: 325.669906ms
Feb  7 14:55:59.262: INFO: Pod "pod-update-activedeadlineseconds-d8280022-bd13-4cdf-9455-ffd9eb1f2201": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.333164702s
Feb  7 14:55:59.262: INFO: Pod "pod-update-activedeadlineseconds-d8280022-bd13-4cdf-9455-ffd9eb1f2201" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:55:59.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2531" for this suite.
Feb  7 14:56:05.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:56:05.501: INFO: namespace pods-2531 deletion completed in 6.228074069s

• [SLOW TEST:19.310 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
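
Note: activeDeadlineSeconds is one of the few pod-spec fields that may be updated on a live pod, which is exactly what the test exercises. A by-hand sketch; the namespace "demo" and names are illustrative:

kubectl run deadline-demo --image=docker.io/library/busybox:1.29 \
  --restart=Never --namespace=demo -- sh -c 'sleep 3600'
kubectl wait pod/deadline-demo --for=condition=Ready --namespace=demo

# Impose a short deadline after the fact.
kubectl patch pod deadline-demo --namespace=demo \
  -p '{"spec":{"activeDeadlineSeconds":5}}'

# Within a few seconds the kubelet kills the pod: Failed/DeadlineExceeded.
sleep 10
kubectl get pod deadline-demo --namespace=demo \
  -o jsonpath='{.status.phase}/{.status.reason}'
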
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:56:05.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  7 14:56:05.652: INFO: Waiting up to 5m0s for pod "downwardapi-volume-08eea66a-2f87-4da8-8273-72c1b5f30179" in namespace "downward-api-4708" to be "success or failure"
Feb  7 14:56:05.659: INFO: Pod "downwardapi-volume-08eea66a-2f87-4da8-8273-72c1b5f30179": Phase="Pending", Reason="", readiness=false. Elapsed: 6.180199ms
Feb  7 14:56:07.714: INFO: Pod "downwardapi-volume-08eea66a-2f87-4da8-8273-72c1b5f30179": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061336717s
Feb  7 14:56:09.722: INFO: Pod "downwardapi-volume-08eea66a-2f87-4da8-8273-72c1b5f30179": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06969784s
Feb  7 14:56:11.728: INFO: Pod "downwardapi-volume-08eea66a-2f87-4da8-8273-72c1b5f30179": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075542011s
Feb  7 14:56:13.747: INFO: Pod "downwardapi-volume-08eea66a-2f87-4da8-8273-72c1b5f30179": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.094413496s
STEP: Saw pod success
Feb  7 14:56:13.747: INFO: Pod "downwardapi-volume-08eea66a-2f87-4da8-8273-72c1b5f30179" satisfied condition "success or failure"
Feb  7 14:56:13.752: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-08eea66a-2f87-4da8-8273-72c1b5f30179 container client-container: 
STEP: delete the pod
Feb  7 14:56:13.984: INFO: Waiting for pod downwardapi-volume-08eea66a-2f87-4da8-8273-72c1b5f30179 to disappear
Feb  7 14:56:14.024: INFO: Pod downwardapi-volume-08eea66a-2f87-4da8-8273-72c1b5f30179 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:56:14.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4708" for this suite.
Feb  7 14:56:20.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:56:20.289: INFO: namespace downward-api-4708 deletion completed in 6.259577372s

• [SLOW TEST:14.788 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
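
The downward-API spec above relies on a documented fallback: when a container declares no memory limit, a resourceFieldRef for limits.memory resolves to the node's allocatable memory. A minimal sketch of such a pod (names are illustrative assumptions):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-default-limit   # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: docker.io/library/busybox:1.29
      command: ["cat", "/etc/podinfo/mem_limit"]
      # no resources.limits.memory on purpose -> falls back to node allocatable
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: mem_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.memory
            divisor: 1Mi
  EOF
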
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:56:20.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb  7 14:56:36.617: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  7 14:56:36.720: INFO: Pod pod-with-prestop-http-hook still exists
Feb  7 14:56:38.721: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  7 14:56:38.732: INFO: Pod pod-with-prestop-http-hook still exists
Feb  7 14:56:40.721: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  7 14:56:40.732: INFO: Pod pod-with-prestop-http-hook still exists
Feb  7 14:56:42.721: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  7 14:56:42.729: INFO: Pod pod-with-prestop-http-hook still exists
Feb  7 14:56:44.721: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  7 14:56:44.731: INFO: Pod pod-with-prestop-http-hook still exists
Feb  7 14:56:46.721: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  7 14:56:46.725: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:56:46.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-14" for this suite.
Feb  7 14:57:08.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:57:08.961: INFO: namespace container-lifecycle-hook-14 deletion completed in 22.212571836s

• [SLOW TEST:48.671 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
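
The lifecycle spec above registers an HTTP preStop hook, deletes the pod, and then verifies the hook fired against a separate handler pod before termination completed. A rough sketch of the hooked pod (the handler address and path are placeholders, not values from this run):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-prestop-http-hook
  spec:
    containers:
    - name: main
      image: docker.io/library/nginx:1.14-alpine
      lifecycle:
        preStop:
          httpGet:
            path: /echo?msg=prestop   # hypothetical handler endpoint
            port: 8080
            host: 10.32.0.5           # hypothetical IP of the handler pod
  EOF
  # Deleting the pod triggers the hook during graceful termination:
  kubectl delete pod pod-with-prestop-http-hook
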
SSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:57:08.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-6507/configmap-test-9b18159d-b7c6-49ff-a7ea-5815c79890b9
STEP: Creating a pod to test consume configMaps
Feb  7 14:57:09.094: INFO: Waiting up to 5m0s for pod "pod-configmaps-251334cf-3591-4083-80ab-cb8189b6c16d" in namespace "configmap-6507" to be "success or failure"
Feb  7 14:57:09.101: INFO: Pod "pod-configmaps-251334cf-3591-4083-80ab-cb8189b6c16d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.355974ms
Feb  7 14:57:11.108: INFO: Pod "pod-configmaps-251334cf-3591-4083-80ab-cb8189b6c16d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01414899s
Feb  7 14:57:13.117: INFO: Pod "pod-configmaps-251334cf-3591-4083-80ab-cb8189b6c16d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022822943s
Feb  7 14:57:15.126: INFO: Pod "pod-configmaps-251334cf-3591-4083-80ab-cb8189b6c16d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032208537s
Feb  7 14:57:17.136: INFO: Pod "pod-configmaps-251334cf-3591-4083-80ab-cb8189b6c16d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.042325875s
STEP: Saw pod success
Feb  7 14:57:17.136: INFO: Pod "pod-configmaps-251334cf-3591-4083-80ab-cb8189b6c16d" satisfied condition "success or failure"
Feb  7 14:57:17.139: INFO: Trying to get logs from node iruya-node pod pod-configmaps-251334cf-3591-4083-80ab-cb8189b6c16d container env-test: 
STEP: delete the pod
Feb  7 14:57:17.206: INFO: Waiting for pod pod-configmaps-251334cf-3591-4083-80ab-cb8189b6c16d to disappear
Feb  7 14:57:17.214: INFO: Pod pod-configmaps-251334cf-3591-4083-80ab-cb8189b6c16d no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 14:57:17.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6507" for this suite.
Feb  7 14:57:23.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 14:57:23.386: INFO: namespace configmap-6507 deletion completed in 6.16707591s

• [SLOW TEST:14.425 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
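
The ConfigMap spec above injects a key into the container environment and checks the pod's output. A minimal equivalent (names and key are illustrative assumptions):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: configmap-test           # hypothetical name
  data:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmaps
  spec:
    restartPolicy: Never
    containers:
    - name: env-test
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "env"]   # output should contain DATA_1=value-1
      env:
      - name: DATA_1
        valueFrom:
          configMapKeyRef:
            name: configmap-test
            key: data-1
  EOF
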
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 14:57:23.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4832
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb  7 14:57:23.547: INFO: Found 0 stateful pods, waiting for 3
Feb  7 14:57:33.561: INFO: Found 2 stateful pods, waiting for 3
Feb  7 14:57:43.555: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 14:57:43.555: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 14:57:43.555: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  7 14:57:53.556: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 14:57:53.556: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 14:57:53.556: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 14:57:53.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4832 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  7 14:57:54.225: INFO: stderr: "I0207 14:57:53.833020    3571 log.go:172] (0xc0009e8420) (0xc000a526e0) Create stream\nI0207 14:57:53.833208    3571 log.go:172] (0xc0009e8420) (0xc000a526e0) Stream added, broadcasting: 1\nI0207 14:57:53.837554    3571 log.go:172] (0xc0009e8420) Reply frame received for 1\nI0207 14:57:53.837649    3571 log.go:172] (0xc0009e8420) (0xc0006620a0) Create stream\nI0207 14:57:53.837670    3571 log.go:172] (0xc0009e8420) (0xc0006620a0) Stream added, broadcasting: 3\nI0207 14:57:53.838669    3571 log.go:172] (0xc0009e8420) Reply frame received for 3\nI0207 14:57:53.838690    3571 log.go:172] (0xc0009e8420) (0xc000a52780) Create stream\nI0207 14:57:53.838697    3571 log.go:172] (0xc0009e8420) (0xc000a52780) Stream added, broadcasting: 5\nI0207 14:57:53.840778    3571 log.go:172] (0xc0009e8420) Reply frame received for 5\nI0207 14:57:54.050500    3571 log.go:172] (0xc0009e8420) Data frame received for 5\nI0207 14:57:54.050577    3571 log.go:172] (0xc000a52780) (5) Data frame handling\nI0207 14:57:54.050595    3571 log.go:172] (0xc000a52780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0207 14:57:54.111161    3571 log.go:172] (0xc0009e8420) Data frame received for 3\nI0207 14:57:54.111214    3571 log.go:172] (0xc0006620a0) (3) Data frame handling\nI0207 14:57:54.111239    3571 log.go:172] (0xc0006620a0) (3) Data frame sent\nI0207 14:57:54.212575    3571 log.go:172] (0xc0009e8420) Data frame received for 1\nI0207 14:57:54.212729    3571 log.go:172] (0xc000a526e0) (1) Data frame handling\nI0207 14:57:54.212750    3571 log.go:172] (0xc000a526e0) (1) Data frame sent\nI0207 14:57:54.212837    3571 log.go:172] (0xc0009e8420) (0xc000a526e0) Stream removed, broadcasting: 1\nI0207 14:57:54.214049    3571 log.go:172] (0xc0009e8420) (0xc0006620a0) Stream removed, broadcasting: 3\nI0207 14:57:54.214367    3571 log.go:172] (0xc0009e8420) (0xc000a52780) Stream removed, broadcasting: 5\nI0207 14:57:54.214464    3571 log.go:172] (0xc0009e8420) Go away received\nI0207 14:57:54.214499    3571 log.go:172] (0xc0009e8420) (0xc000a526e0) Stream removed, broadcasting: 1\nI0207 14:57:54.214529    3571 log.go:172] (0xc0009e8420) (0xc0006620a0) Stream removed, broadcasting: 3\nI0207 14:57:54.214538    3571 log.go:172] (0xc0009e8420) (0xc000a52780) Stream removed, broadcasting: 5\n"
Feb  7 14:57:54.225: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  7 14:57:54.225: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb  7 14:58:04.278: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb  7 14:58:14.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4832 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 14:58:14.862: INFO: stderr: "I0207 14:58:14.608292    3591 log.go:172] (0xc0009c8420) (0xc000994780) Create stream\nI0207 14:58:14.608568    3591 log.go:172] (0xc0009c8420) (0xc000994780) Stream added, broadcasting: 1\nI0207 14:58:14.612430    3591 log.go:172] (0xc0009c8420) Reply frame received for 1\nI0207 14:58:14.612460    3591 log.go:172] (0xc0009c8420) (0xc000994000) Create stream\nI0207 14:58:14.612467    3591 log.go:172] (0xc0009c8420) (0xc000994000) Stream added, broadcasting: 3\nI0207 14:58:14.613300    3591 log.go:172] (0xc0009c8420) Reply frame received for 3\nI0207 14:58:14.613317    3591 log.go:172] (0xc0009c8420) (0xc000966000) Create stream\nI0207 14:58:14.613323    3591 log.go:172] (0xc0009c8420) (0xc000966000) Stream added, broadcasting: 5\nI0207 14:58:14.614195    3591 log.go:172] (0xc0009c8420) Reply frame received for 5\nI0207 14:58:14.683489    3591 log.go:172] (0xc0009c8420) Data frame received for 5\nI0207 14:58:14.683617    3591 log.go:172] (0xc000966000) (5) Data frame handling\nI0207 14:58:14.683640    3591 log.go:172] (0xc000966000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0207 14:58:14.683654    3591 log.go:172] (0xc0009c8420) Data frame received for 3\nI0207 14:58:14.683660    3591 log.go:172] (0xc000994000) (3) Data frame handling\nI0207 14:58:14.683667    3591 log.go:172] (0xc000994000) (3) Data frame sent\nI0207 14:58:14.857220    3591 log.go:172] (0xc0009c8420) (0xc000994000) Stream removed, broadcasting: 3\nI0207 14:58:14.857481    3591 log.go:172] (0xc0009c8420) Data frame received for 1\nI0207 14:58:14.857534    3591 log.go:172] (0xc000994780) (1) Data frame handling\nI0207 14:58:14.857572    3591 log.go:172] (0xc000994780) (1) Data frame sent\nI0207 14:58:14.857625    3591 log.go:172] (0xc0009c8420) (0xc000994780) Stream removed, broadcasting: 1\nI0207 14:58:14.858002    3591 log.go:172] (0xc0009c8420) (0xc000966000) Stream removed, broadcasting: 5\nI0207 14:58:14.858065    3591 log.go:172] (0xc0009c8420) (0xc000994780) Stream removed, broadcasting: 1\nI0207 14:58:14.858097    3591 log.go:172] (0xc0009c8420) (0xc000994000) Stream removed, broadcasting: 3\nI0207 14:58:14.858121    3591 log.go:172] (0xc0009c8420) (0xc000966000) Stream removed, broadcasting: 5\n"
Feb  7 14:58:14.862: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  7 14:58:14.862: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  7 14:58:14.902: INFO: Waiting for StatefulSet statefulset-4832/ss2 to complete update
Feb  7 14:58:14.902: INFO: Waiting for Pod statefulset-4832/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  7 14:58:14.902: INFO: Waiting for Pod statefulset-4832/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  7 14:58:14.902: INFO: Waiting for Pod statefulset-4832/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  7 14:58:24.917: INFO: Waiting for StatefulSet statefulset-4832/ss2 to complete update
Feb  7 14:58:24.917: INFO: Waiting for Pod statefulset-4832/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  7 14:58:24.917: INFO: Waiting for Pod statefulset-4832/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  7 14:58:24.917: INFO: Waiting for Pod statefulset-4832/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  7 14:58:34.915: INFO: Waiting for StatefulSet statefulset-4832/ss2 to complete update
Feb  7 14:58:34.915: INFO: Waiting for Pod statefulset-4832/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  7 14:58:34.915: INFO: Waiting for Pod statefulset-4832/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  7 14:58:45.270: INFO: Waiting for StatefulSet statefulset-4832/ss2 to complete update
Feb  7 14:58:45.271: INFO: Waiting for Pod statefulset-4832/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  7 14:58:54.955: INFO: Waiting for StatefulSet statefulset-4832/ss2 to complete update
STEP: Rolling back to a previous revision
Feb  7 14:59:04.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4832 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  7 14:59:05.435: INFO: stderr: "I0207 14:59:05.152012    3609 log.go:172] (0xc0009840b0) (0xc0009de640) Create stream\nI0207 14:59:05.152120    3609 log.go:172] (0xc0009840b0) (0xc0009de640) Stream added, broadcasting: 1\nI0207 14:59:05.154448    3609 log.go:172] (0xc0009840b0) Reply frame received for 1\nI0207 14:59:05.154478    3609 log.go:172] (0xc0009840b0) (0xc000926000) Create stream\nI0207 14:59:05.154486    3609 log.go:172] (0xc0009840b0) (0xc000926000) Stream added, broadcasting: 3\nI0207 14:59:05.155562    3609 log.go:172] (0xc0009840b0) Reply frame received for 3\nI0207 14:59:05.155586    3609 log.go:172] (0xc0009840b0) (0xc0005123c0) Create stream\nI0207 14:59:05.155601    3609 log.go:172] (0xc0009840b0) (0xc0005123c0) Stream added, broadcasting: 5\nI0207 14:59:05.156541    3609 log.go:172] (0xc0009840b0) Reply frame received for 5\nI0207 14:59:05.243952    3609 log.go:172] (0xc0009840b0) Data frame received for 5\nI0207 14:59:05.243988    3609 log.go:172] (0xc0005123c0) (5) Data frame handling\nI0207 14:59:05.244010    3609 log.go:172] (0xc0005123c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0207 14:59:05.278597    3609 log.go:172] (0xc0009840b0) Data frame received for 3\nI0207 14:59:05.278630    3609 log.go:172] (0xc000926000) (3) Data frame handling\nI0207 14:59:05.278644    3609 log.go:172] (0xc000926000) (3) Data frame sent\nI0207 14:59:05.429018    3609 log.go:172] (0xc0009840b0) (0xc000926000) Stream removed, broadcasting: 3\nI0207 14:59:05.429093    3609 log.go:172] (0xc0009840b0) Data frame received for 1\nI0207 14:59:05.429108    3609 log.go:172] (0xc0009de640) (1) Data frame handling\nI0207 14:59:05.429123    3609 log.go:172] (0xc0009de640) (1) Data frame sent\nI0207 14:59:05.429132    3609 log.go:172] (0xc0009840b0) (0xc0009de640) Stream removed, broadcasting: 1\nI0207 14:59:05.429145    3609 log.go:172] (0xc0009840b0) (0xc0005123c0) Stream removed, broadcasting: 5\nI0207 14:59:05.429203    3609 log.go:172] (0xc0009840b0) Go away received\nI0207 14:59:05.429557    3609 log.go:172] (0xc0009840b0) (0xc0009de640) Stream removed, broadcasting: 1\nI0207 14:59:05.429574    3609 log.go:172] (0xc0009840b0) (0xc000926000) Stream removed, broadcasting: 3\nI0207 14:59:05.429583    3609 log.go:172] (0xc0009840b0) (0xc0005123c0) Stream removed, broadcasting: 5\n"
Feb  7 14:59:05.435: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  7 14:59:05.435: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  7 14:59:05.556: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb  7 14:59:15.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4832 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 14:59:16.078: INFO: stderr: "I0207 14:59:15.862343    3629 log.go:172] (0xc000a6e420) (0xc000880640) Create stream\nI0207 14:59:15.862629    3629 log.go:172] (0xc000a6e420) (0xc000880640) Stream added, broadcasting: 1\nI0207 14:59:15.870717    3629 log.go:172] (0xc000a6e420) Reply frame received for 1\nI0207 14:59:15.870810    3629 log.go:172] (0xc000a6e420) (0xc000986000) Create stream\nI0207 14:59:15.870826    3629 log.go:172] (0xc000a6e420) (0xc000986000) Stream added, broadcasting: 3\nI0207 14:59:15.872529    3629 log.go:172] (0xc000a6e420) Reply frame received for 3\nI0207 14:59:15.872619    3629 log.go:172] (0xc000a6e420) (0xc000a76000) Create stream\nI0207 14:59:15.872654    3629 log.go:172] (0xc000a6e420) (0xc000a76000) Stream added, broadcasting: 5\nI0207 14:59:15.874691    3629 log.go:172] (0xc000a6e420) Reply frame received for 5\nI0207 14:59:15.980174    3629 log.go:172] (0xc000a6e420) Data frame received for 5\nI0207 14:59:15.980320    3629 log.go:172] (0xc000a76000) (5) Data frame handling\nI0207 14:59:15.980358    3629 log.go:172] (0xc000a76000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0207 14:59:15.980408    3629 log.go:172] (0xc000a6e420) Data frame received for 3\nI0207 14:59:15.980417    3629 log.go:172] (0xc000986000) (3) Data frame handling\nI0207 14:59:15.980437    3629 log.go:172] (0xc000986000) (3) Data frame sent\nI0207 14:59:16.070693    3629 log.go:172] (0xc000a6e420) Data frame received for 1\nI0207 14:59:16.070765    3629 log.go:172] (0xc000880640) (1) Data frame handling\nI0207 14:59:16.070784    3629 log.go:172] (0xc000880640) (1) Data frame sent\nI0207 14:59:16.071100    3629 log.go:172] (0xc000a6e420) (0xc000880640) Stream removed, broadcasting: 1\nI0207 14:59:16.071666    3629 log.go:172] (0xc000a6e420) (0xc000a76000) Stream removed, broadcasting: 5\nI0207 14:59:16.071740    3629 log.go:172] (0xc000a6e420) (0xc000986000) Stream removed, broadcasting: 3\nI0207 14:59:16.071771    3629 log.go:172] (0xc000a6e420) (0xc000880640) Stream removed, broadcasting: 1\nI0207 14:59:16.071784    3629 log.go:172] (0xc000a6e420) (0xc000986000) Stream removed, broadcasting: 3\nI0207 14:59:16.071791    3629 log.go:172] (0xc000a6e420) (0xc000a76000) Stream removed, broadcasting: 5\n"
Feb  7 14:59:16.078: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  7 14:59:16.078: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  7 14:59:26.113: INFO: Waiting for StatefulSet statefulset-4832/ss2 to complete update
Feb  7 14:59:26.113: INFO: Waiting for Pod statefulset-4832/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  7 14:59:26.113: INFO: Waiting for Pod statefulset-4832/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  7 14:59:36.129: INFO: Waiting for StatefulSet statefulset-4832/ss2 to complete update
Feb  7 14:59:36.129: INFO: Waiting for Pod statefulset-4832/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  7 14:59:36.129: INFO: Waiting for Pod statefulset-4832/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  7 14:59:46.123: INFO: Waiting for StatefulSet statefulset-4832/ss2 to complete update
Feb  7 14:59:46.123: INFO: Waiting for Pod statefulset-4832/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  7 14:59:56.129: INFO: Waiting for StatefulSet statefulset-4832/ss2 to complete update
Feb  7 14:59:56.130: INFO: Waiting for Pod statefulset-4832/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  7 15:00:06.124: INFO: Deleting all statefulset in ns statefulset-4832
Feb  7 15:00:06.128: INFO: Scaling statefulset ss2 to 0
Feb  7 15:00:36.163: INFO: Waiting for statefulset status.replicas updated to 0
Feb  7 15:00:36.169: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 15:00:36.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4832" for this suite.
Feb  7 15:00:44.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 15:00:44.400: INFO: namespace statefulset-4832 deletion completed in 8.155845953s

• [SLOW TEST:201.014 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
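
The StatefulSet spec above drives the update and rollback through the API; the same flow can be reproduced with stock kubectl commands (assuming, as the image names in this run suggest, a StatefulSet ss2 whose template container is named nginx -- the container name is an assumption, it is not printed in the log):

  # Rolling update: change the pod template image and wait for it to land:
  kubectl -n statefulset-4832 set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
  kubectl -n statefulset-4832 rollout status statefulset/ss2
  # Rollback: restore the previous template revision, applied in reverse ordinal order:
  kubectl -n statefulset-4832 rollout undo statefulset/ss2
  kubectl -n statefulset-4832 rollout status statefulset/ss2
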
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 15:00:44.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Feb  7 15:00:44.515: INFO: Waiting up to 5m0s for pod "var-expansion-2de57433-ac86-4040-890d-15cb5b1cac4f" in namespace "var-expansion-3594" to be "success or failure"
Feb  7 15:00:44.524: INFO: Pod "var-expansion-2de57433-ac86-4040-890d-15cb5b1cac4f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.541759ms
Feb  7 15:00:46.538: INFO: Pod "var-expansion-2de57433-ac86-4040-890d-15cb5b1cac4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022931521s
Feb  7 15:00:48.557: INFO: Pod "var-expansion-2de57433-ac86-4040-890d-15cb5b1cac4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041497541s
Feb  7 15:00:50.565: INFO: Pod "var-expansion-2de57433-ac86-4040-890d-15cb5b1cac4f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050114971s
Feb  7 15:00:52.643: INFO: Pod "var-expansion-2de57433-ac86-4040-890d-15cb5b1cac4f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.127219828s
Feb  7 15:00:54.650: INFO: Pod "var-expansion-2de57433-ac86-4040-890d-15cb5b1cac4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.134584491s
STEP: Saw pod success
Feb  7 15:00:54.650: INFO: Pod "var-expansion-2de57433-ac86-4040-890d-15cb5b1cac4f" satisfied condition "success or failure"
Feb  7 15:00:54.655: INFO: Trying to get logs from node iruya-node pod var-expansion-2de57433-ac86-4040-890d-15cb5b1cac4f container dapi-container: 
STEP: delete the pod
Feb  7 15:00:54.785: INFO: Waiting for pod var-expansion-2de57433-ac86-4040-890d-15cb5b1cac4f to disappear
Feb  7 15:00:54.847: INFO: Pod var-expansion-2de57433-ac86-4040-890d-15cb5b1cac4f no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 15:00:54.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3594" for this suite.
Feb  7 15:01:01.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 15:01:01.378: INFO: namespace var-expansion-3594 deletion completed in 6.177582538s

• [SLOW TEST:16.978 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
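
Env composition, as exercised above, means referencing earlier env vars with $(VAR) syntax inside a later var's value; the kubelet expands the references before starting the container. A minimal sketch (names and values are illustrative assumptions):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-demo       # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "env"]   # output should contain FOOBAR=foo-value;;bar-value
      env:
      - name: FOO
        value: foo-value
      - name: BAR
        value: bar-value
      - name: FOOBAR
        value: "$(FOO);;$(BAR)"      # composed from the two vars defined above
  EOF
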
SSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 15:01:01.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Feb  7 15:01:01.478: INFO: Waiting up to 5m0s for pod "client-containers-abe3ba2f-d98c-4e33-9d43-3c4547e48a9d" in namespace "containers-6909" to be "success or failure"
Feb  7 15:01:01.503: INFO: Pod "client-containers-abe3ba2f-d98c-4e33-9d43-3c4547e48a9d": Phase="Pending", Reason="", readiness=false. Elapsed: 24.694375ms
Feb  7 15:01:03.511: INFO: Pod "client-containers-abe3ba2f-d98c-4e33-9d43-3c4547e48a9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03286893s
Feb  7 15:01:05.521: INFO: Pod "client-containers-abe3ba2f-d98c-4e33-9d43-3c4547e48a9d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042833639s
Feb  7 15:01:07.530: INFO: Pod "client-containers-abe3ba2f-d98c-4e33-9d43-3c4547e48a9d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051397324s
Feb  7 15:01:09.539: INFO: Pod "client-containers-abe3ba2f-d98c-4e33-9d43-3c4547e48a9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060618689s
STEP: Saw pod success
Feb  7 15:01:09.539: INFO: Pod "client-containers-abe3ba2f-d98c-4e33-9d43-3c4547e48a9d" satisfied condition "success or failure"
Feb  7 15:01:09.574: INFO: Trying to get logs from node iruya-node pod client-containers-abe3ba2f-d98c-4e33-9d43-3c4547e48a9d container test-container: 
STEP: delete the pod
Feb  7 15:01:09.664: INFO: Waiting for pod client-containers-abe3ba2f-d98c-4e33-9d43-3c4547e48a9d to disappear
Feb  7 15:01:09.703: INFO: Pod client-containers-abe3ba2f-d98c-4e33-9d43-3c4547e48a9d no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 15:01:09.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6909" for this suite.
Feb  7 15:01:15.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 15:01:15.907: INFO: namespace containers-6909 deletion completed in 6.194501409s

• [SLOW TEST:14.529 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
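
"Override all" here means setting both command (the image's ENTRYPOINT) and args (its CMD) in the pod spec. A minimal sketch (names are illustrative assumptions):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: client-containers-demo   # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: docker.io/library/busybox:1.29
      command: ["/bin/echo"]             # overrides the image ENTRYPOINT
      args: ["override", "arguments"]    # overrides the image CMD
  EOF
  kubectl logs client-containers-demo    # expected output: override arguments
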
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 15:01:15.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-q7k2
STEP: Creating a pod to test atomic-volume-subpath
Feb  7 15:01:16.077: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-q7k2" in namespace "subpath-266" to be "success or failure"
Feb  7 15:01:16.083: INFO: Pod "pod-subpath-test-configmap-q7k2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.155775ms
Feb  7 15:01:18.090: INFO: Pod "pod-subpath-test-configmap-q7k2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013175528s
Feb  7 15:01:20.095: INFO: Pod "pod-subpath-test-configmap-q7k2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018256047s
Feb  7 15:01:22.104: INFO: Pod "pod-subpath-test-configmap-q7k2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026633977s
Feb  7 15:01:24.114: INFO: Pod "pod-subpath-test-configmap-q7k2": Phase="Running", Reason="", readiness=true. Elapsed: 8.037292014s
Feb  7 15:01:26.282: INFO: Pod "pod-subpath-test-configmap-q7k2": Phase="Running", Reason="", readiness=true. Elapsed: 10.204708674s
Feb  7 15:01:28.292: INFO: Pod "pod-subpath-test-configmap-q7k2": Phase="Running", Reason="", readiness=true. Elapsed: 12.214428491s
Feb  7 15:01:30.298: INFO: Pod "pod-subpath-test-configmap-q7k2": Phase="Running", Reason="", readiness=true. Elapsed: 14.221163327s
Feb  7 15:01:32.306: INFO: Pod "pod-subpath-test-configmap-q7k2": Phase="Running", Reason="", readiness=true. Elapsed: 16.228497623s
Feb  7 15:01:34.328: INFO: Pod "pod-subpath-test-configmap-q7k2": Phase="Running", Reason="", readiness=true. Elapsed: 18.250645389s
Feb  7 15:01:36.335: INFO: Pod "pod-subpath-test-configmap-q7k2": Phase="Running", Reason="", readiness=true. Elapsed: 20.258068625s
Feb  7 15:01:38.343: INFO: Pod "pod-subpath-test-configmap-q7k2": Phase="Running", Reason="", readiness=true. Elapsed: 22.265750519s
Feb  7 15:01:40.356: INFO: Pod "pod-subpath-test-configmap-q7k2": Phase="Running", Reason="", readiness=true. Elapsed: 24.279267378s
Feb  7 15:01:42.373: INFO: Pod "pod-subpath-test-configmap-q7k2": Phase="Running", Reason="", readiness=true. Elapsed: 26.295497242s
Feb  7 15:01:44.388: INFO: Pod "pod-subpath-test-configmap-q7k2": Phase="Running", Reason="", readiness=true. Elapsed: 28.3104161s
Feb  7 15:01:46.398: INFO: Pod "pod-subpath-test-configmap-q7k2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.3204949s
STEP: Saw pod success
Feb  7 15:01:46.398: INFO: Pod "pod-subpath-test-configmap-q7k2" satisfied condition "success or failure"
Feb  7 15:01:46.403: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-q7k2 container test-container-subpath-configmap-q7k2: 
STEP: delete the pod
Feb  7 15:01:46.483: INFO: Waiting for pod pod-subpath-test-configmap-q7k2 to disappear
Feb  7 15:01:46.489: INFO: Pod pod-subpath-test-configmap-q7k2 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-q7k2
Feb  7 15:01:46.489: INFO: Deleting pod "pod-subpath-test-configmap-q7k2" in namespace "subpath-266"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 15:01:46.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-266" for this suite.
Feb  7 15:01:52.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 15:01:52.691: INFO: namespace subpath-266 deletion completed in 6.165439849s

• [SLOW TEST:36.782 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
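
A subPath mount whose mountPath is an existing file replaces that single file with one key of the volume, instead of shadowing a whole directory. A rough sketch (the ConfigMap name, the key, and the choice of /etc/hosts as the pre-existing file are illustrative assumptions, not taken from this run):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: subpath-cm               # hypothetical name
  data:
    hosts: "127.0.0.1 example.local"
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-subpath-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: docker.io/library/busybox:1.29
      command: ["cat", "/etc/hosts"]
      volumeMounts:
      - name: cm
        mountPath: /etc/hosts      # an existing file inside the container
        subPath: hosts             # a single key mounted over that file
    volumes:
    - name: cm
      configMap:
        name: subpath-cm
  EOF
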
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 15:01:52.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-50bfc2b7-005d-460b-8c6b-b45f8af4f5d0
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 15:01:52.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1175" for this suite.
Feb  7 15:01:58.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 15:01:59.063: INFO: namespace secrets-1175 deletion completed in 6.207231457s

• [SLOW TEST:6.371 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
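
This negative test hinges on API validation: Secret data keys must be non-empty (and match the allowed character set), so the create call is rejected and no pod is ever involved. A sketch of the kind of manifest that gets refused (the name is an illustrative assumption):

  # Expected to fail server-side validation because of the empty data key:
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Secret
  metadata:
    name: secret-emptykey          # hypothetical name
  data:
    "": dmFsdWUtMQ==               # "" is not a valid key
  EOF
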
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 15:01:59.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb  7 15:01:59.194: INFO: Waiting up to 5m0s for pod "pod-97ee126f-8376-4f29-a2a7-63b7017ed6b9" in namespace "emptydir-811" to be "success or failure"
Feb  7 15:01:59.200: INFO: Pod "pod-97ee126f-8376-4f29-a2a7-63b7017ed6b9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.874433ms
Feb  7 15:02:01.207: INFO: Pod "pod-97ee126f-8376-4f29-a2a7-63b7017ed6b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012867193s
Feb  7 15:02:04.060: INFO: Pod "pod-97ee126f-8376-4f29-a2a7-63b7017ed6b9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.866278343s
Feb  7 15:02:06.068: INFO: Pod "pod-97ee126f-8376-4f29-a2a7-63b7017ed6b9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.874226944s
Feb  7 15:02:08.075: INFO: Pod "pod-97ee126f-8376-4f29-a2a7-63b7017ed6b9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.881770919s
Feb  7 15:02:10.085: INFO: Pod "pod-97ee126f-8376-4f29-a2a7-63b7017ed6b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.890889629s
STEP: Saw pod success
Feb  7 15:02:10.085: INFO: Pod "pod-97ee126f-8376-4f29-a2a7-63b7017ed6b9" satisfied condition "success or failure"
Feb  7 15:02:10.090: INFO: Trying to get logs from node iruya-node pod pod-97ee126f-8376-4f29-a2a7-63b7017ed6b9 container test-container: 
STEP: delete the pod
Feb  7 15:02:10.172: INFO: Waiting for pod pod-97ee126f-8376-4f29-a2a7-63b7017ed6b9 to disappear
Feb  7 15:02:10.180: INFO: Pod pod-97ee126f-8376-4f29-a2a7-63b7017ed6b9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 15:02:10.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-811" for this suite.
Feb  7 15:02:16.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 15:02:16.357: INFO: namespace emptydir-811 deletion completed in 6.164671976s

• [SLOW TEST:17.294 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
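
This EmptyDir variant checks three things at once: the volume is tmpfs-backed (medium: Memory), the writer is non-root, and a file created with mode 0644 reports that mode back. A minimal sketch (names, uid, and file path are illustrative assumptions):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-demo            # hypothetical name
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001              # non-root
    containers:
    - name: test-container
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0644 /test-volume/f && stat -c '%a' /test-volume/f"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir:
        medium: Memory             # tmpfs-backed emptyDir
  EOF
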
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 15:02:16.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-5bf5f7b0-ebba-47bf-a8b5-4b5f120432bc
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-5bf5f7b0-ebba-47bf-a8b5-4b5f120432bc
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 15:03:52.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-785" for this suite.
Feb  7 15:04:14.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 15:04:14.478: INFO: namespace projected-785 deletion completed in 22.237359686s

• [SLOW TEST:118.120 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
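
A projected volume behaves like a plain configMap volume for update propagation: once the ConfigMap object changes, the kubelet eventually rewrites the mounted files in place. A minimal sketch of the setup (names and key are illustrative assumptions):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: projected-cm-upd         # hypothetical name
  data:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-upd-demo
  spec:
    containers:
    - name: main
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "while true; do cat /etc/projected/data-1; sleep 5; done"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/projected
    volumes:
    - name: cfg
      projected:
        sources:
        - configMap:
            name: projected-cm-upd
  EOF
  # The change below should eventually show up in the container's output:
  kubectl patch configmap projected-cm-upd -p '{"data":{"data-1":"value-2"}}'
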
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 15:04:14.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  7 15:04:14.590: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7bed249e-574e-457d-9696-6948bdcbb80a" in namespace "downward-api-6113" to be "success or failure"
Feb  7 15:04:14.599: INFO: Pod "downwardapi-volume-7bed249e-574e-457d-9696-6948bdcbb80a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.715305ms
Feb  7 15:04:16.625: INFO: Pod "downwardapi-volume-7bed249e-574e-457d-9696-6948bdcbb80a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034948917s
Feb  7 15:04:18.636: INFO: Pod "downwardapi-volume-7bed249e-574e-457d-9696-6948bdcbb80a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045429252s
Feb  7 15:04:20.644: INFO: Pod "downwardapi-volume-7bed249e-574e-457d-9696-6948bdcbb80a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053161696s
Feb  7 15:04:22.664: INFO: Pod "downwardapi-volume-7bed249e-574e-457d-9696-6948bdcbb80a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.073736547s
STEP: Saw pod success
Feb  7 15:04:22.664: INFO: Pod "downwardapi-volume-7bed249e-574e-457d-9696-6948bdcbb80a" satisfied condition "success or failure"
Feb  7 15:04:22.671: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-7bed249e-574e-457d-9696-6948bdcbb80a container client-container: 
STEP: delete the pod
Feb  7 15:04:22.745: INFO: Waiting for pod downwardapi-volume-7bed249e-574e-457d-9696-6948bdcbb80a to disappear
Feb  7 15:04:22.830: INFO: Pod downwardapi-volume-7bed249e-574e-457d-9696-6948bdcbb80a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 15:04:22.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6113" for this suite.
Feb  7 15:04:28.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 15:04:29.005: INFO: namespace downward-api-6113 deletion completed in 6.163396119s

• [SLOW TEST:14.527 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
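
This is the counterpart of the earlier default-limit sketch: with resources.limits.memory set, the same downwardAPI resourceFieldRef reports the container's own limit rather than node allocatable. A minimal sketch (names and the 64Mi figure are illustrative assumptions):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-limit-demo      # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: docker.io/library/busybox:1.29
      command: ["cat", "/etc/podinfo/mem_limit"]   # expected output: 64
      resources:
        limits:
          memory: 64Mi
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: mem_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.memory
            divisor: 1Mi
  EOF
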
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 15:04:29.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 15:04:38.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5194" for this suite.
Feb  7 15:05:00.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 15:05:00.441: INFO: namespace replication-controller-5194 deletion completed in 22.165212574s

• [SLOW TEST:31.435 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
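
Adoption means the controller takes ownership of a pre-existing pod that matches its selector instead of creating a replacement. A rough sketch of the two steps (the label and names mirror the STEP lines above; the image is an illustrative assumption):

  # 1. A bare pod carrying the label the controller will select on:
  kubectl run pod-adoption --image=docker.io/library/nginx:1.14-alpine \
    --restart=Never --labels=name=pod-adoption
  # 2. An RC whose selector matches adopts the orphan instead of creating a new pod:
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: pod-adoption
  spec:
    replicas: 1
    selector:
      name: pod-adoption
    template:
      metadata:
        labels:
          name: pod-adoption
      spec:
        containers:
        - name: pod-adoption
          image: docker.io/library/nginx:1.14-alpine
  EOF
  # Adoption is visible as an ownerReference on the pod:
  kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].name}'
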
SSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 15:05:00.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-f84d7837-650e-4ed4-9591-17e98212b900 in namespace container-probe-1938
Feb  7 15:05:08.999: INFO: Started pod busybox-f84d7837-650e-4ed4-9591-17e98212b900 in namespace container-probe-1938
STEP: checking the pod's current state and verifying that restartCount is present
Feb  7 15:05:09.008: INFO: Initial restart count of pod busybox-f84d7837-650e-4ed4-9591-17e98212b900 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 15:09:11.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1938" for this suite.
Feb  7 15:09:17.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 15:09:17.265: INFO: namespace container-probe-1938 deletion completed in 6.158721902s

• [SLOW TEST:256.824 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
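
The probe spec above is the standard "healthy forever" pattern: the container creates /tmp/health once and never removes it, so the exec probe keeps succeeding and restartCount stays 0 for the whole observation window. A minimal sketch (pod name and timings are illustrative assumptions):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: busybox-liveness         # hypothetical name
  spec:
    containers:
    - name: busybox
      image: docker.io/library/busybox:1.29
      # The file is created once and never deleted, so the probe always passes:
      command: ["sh", "-c", "echo ok > /tmp/health; sleep 600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/health"]
        initialDelaySeconds: 5
        periodSeconds: 5
  EOF
  kubectl get pod busybox-liveness -o jsonpath='{.status.containerStatuses[0].restartCount}'
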
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 15:09:17.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-09a13c6f-ae9c-47c3-b59a-8a2b3617a081
STEP: Creating configMap with name cm-test-opt-upd-6ceafd1f-4059-4d36-804f-979cb6bfa16b
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-09a13c6f-ae9c-47c3-b59a-8a2b3617a081
STEP: Updating configmap cm-test-opt-upd-6ceafd1f-4059-4d36-804f-979cb6bfa16b
STEP: Creating configMap with name cm-test-opt-create-285446ea-a181-4710-9187-f5bfee3a838e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 15:09:33.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3550" for this suite.
Feb  7 15:09:57.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 15:09:58.004: INFO: namespace configmap-3550 deletion completed in 24.242861188s

• [SLOW TEST:40.739 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
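
"Optional" ConfigMap volumes (optional: true) let the pod start and keep running even while a referenced ConfigMap is deleted or does not exist yet, which is exactly the delete/update/create sequence in the STEP lines above. A minimal sketch (names are illustrative assumptions):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: cm-optional-demo         # hypothetical name
  spec:
    containers:
    - name: main
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
      - name: del
        mountPath: /etc/cm-del
      - name: create
        mountPath: /etc/cm-create
    volumes:
    - name: del
      configMap:
        name: cm-test-opt-del      # exists at pod start, deleted afterwards
        optional: true
    - name: create
      configMap:
        name: cm-test-opt-create   # created only after the pod starts
        optional: true
  EOF
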
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 15:09:58.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-1eb0b490-13ad-497d-9a3b-14930057a2e5
STEP: Creating a pod to test consume configMaps
Feb  7 15:09:58.147: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d00cf611-1cad-4c07-82db-6efd5edafd74" in namespace "projected-3245" to be "success or failure"
Feb  7 15:09:58.193: INFO: Pod "pod-projected-configmaps-d00cf611-1cad-4c07-82db-6efd5edafd74": Phase="Pending", Reason="", readiness=false. Elapsed: 45.903364ms
Feb  7 15:10:00.205: INFO: Pod "pod-projected-configmaps-d00cf611-1cad-4c07-82db-6efd5edafd74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057745539s
Feb  7 15:10:02.214: INFO: Pod "pod-projected-configmaps-d00cf611-1cad-4c07-82db-6efd5edafd74": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066955163s
Feb  7 15:10:04.221: INFO: Pod "pod-projected-configmaps-d00cf611-1cad-4c07-82db-6efd5edafd74": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073567163s
Feb  7 15:10:06.252: INFO: Pod "pod-projected-configmaps-d00cf611-1cad-4c07-82db-6efd5edafd74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.104785303s
STEP: Saw pod success
Feb  7 15:10:06.252: INFO: Pod "pod-projected-configmaps-d00cf611-1cad-4c07-82db-6efd5edafd74" satisfied condition "success or failure"
Feb  7 15:10:06.299: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-d00cf611-1cad-4c07-82db-6efd5edafd74 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  7 15:10:06.387: INFO: Waiting for pod pod-projected-configmaps-d00cf611-1cad-4c07-82db-6efd5edafd74 to disappear
Feb  7 15:10:06.391: INFO: Pod pod-projected-configmaps-d00cf611-1cad-4c07-82db-6efd5edafd74 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 15:10:06.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3245" for this suite.
Feb  7 15:10:12.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 15:10:12.592: INFO: namespace projected-3245 deletion completed in 6.196558414s

• [SLOW TEST:14.587 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
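Note on the step above: the non-root variant of this spec combines a projected ConfigMap volume with a pod-level RunAsUser. A rough sketch of that shape with client-go types — the UID, mount path, and data file name are illustrative assumptions, not the test's own values:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // nonRootProjectedPod mounts a ConfigMap through a projected volume
    // and runs the whole pod as a non-root UID, so the mounted files
    // must be readable without root privileges.
    func nonRootProjectedPod(cmName string) *corev1.Pod {
        uid := int64(1000) // any non-zero UID satisfies "non-root"
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
            Spec: corev1.PodSpec{
                SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
                RestartPolicy:   corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "projected-configmap-volume-test",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"cat", "/etc/projected/data-1"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "projected-volume",
                        MountPath: "/etc/projected",
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "projected-volume",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                ConfigMap: &corev1.ConfigMapProjection{
                                    LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
                                },
                            }},
                        },
                    },
                }},
            },
        }
    }
------------------------------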
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 15:10:12.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  7 15:10:12.822: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"b5d14a82-5e4c-4ac3-938c-6989114cc08f", Controller:(*bool)(0xc002692982), BlockOwnerDeletion:(*bool)(0xc002692983)}}
Feb  7 15:10:13.141: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"a5fa8dee-4266-450c-935a-6a06051f803e", Controller:(*bool)(0xc0022ea622), BlockOwnerDeletion:(*bool)(0xc0022ea623)}}
Feb  7 15:10:13.154: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"1c21f6bf-934d-484f-a002-db9110c027ec", Controller:(*bool)(0xc002c28fa2), BlockOwnerDeletion:(*bool)(0xc002c28fa3)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 15:10:20.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5493" for this suite.
Feb  7 15:10:26.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 15:10:26.914: INFO: namespace gc-5493 deletion completed in 6.277414484s

• [SLOW TEST:14.322 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
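Note on the step above: the three OwnerReferences logged by this spec form a deliberate cycle — pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2 — and the garbage collector is expected to delete all three rather than deadlock on BlockOwnerDeletion. A sketch of how one link of that cycle is expressed; the image is illustrative, not the test's actual pod spec:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
    )

    // ownedPod returns a pod whose sole owner is another pod. Creating
    // three of these in a ring (pod1->pod3, pod2->pod1, pod3->pod2, as
    // logged above) produces the dependency circle under test.
    func ownedPod(name, ownerName string, ownerUID types.UID) *corev1.Pod {
        truth := true
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name: name,
                OwnerReferences: []metav1.OwnerReference{{
                    APIVersion:         "v1",
                    Kind:               "Pod",
                    Name:               ownerName,
                    UID:                ownerUID,
                    Controller:         &truth,
                    BlockOwnerDeletion: &truth,
                }},
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "main",
                    Image: "docker.io/library/busybox:1.29", // illustrative
                }},
            },
        }
    }
------------------------------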
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 15:10:26.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8873.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8873.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8873.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8873.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8873.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8873.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8873.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8873.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8873.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8873.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8873.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8873.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8873.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 170.96.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.96.170_udp@PTR;check="$$(dig +tcp +noall +answer +search 170.96.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.96.170_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8873.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8873.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8873.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8873.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8873.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8873.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8873.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8873.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8873.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8873.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8873.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8873.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8873.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 170.96.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.96.170_udp@PTR;check="$$(dig +tcp +noall +answer +search 170.96.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.96.170_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  7 15:10:39.210: INFO: Unable to read wheezy_udp@dns-test-service.dns-8873.svc.cluster.local from pod dns-8873/dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0: the server could not find the requested resource (get pods dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0)
Feb  7 15:10:39.217: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8873.svc.cluster.local from pod dns-8873/dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0: the server could not find the requested resource (get pods dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0)
Feb  7 15:10:39.223: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8873.svc.cluster.local from pod dns-8873/dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0: the server could not find the requested resource (get pods dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0)
Feb  7 15:10:39.227: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8873.svc.cluster.local from pod dns-8873/dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0: the server could not find the requested resource (get pods dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0)
Feb  7 15:10:39.233: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-8873.svc.cluster.local from pod dns-8873/dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0: the server could not find the requested resource (get pods dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0)
Feb  7 15:10:39.238: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-8873.svc.cluster.local from pod dns-8873/dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0: the server could not find the requested resource (get pods dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0)
Feb  7 15:10:39.242: INFO: Unable to read wheezy_udp@PodARecord from pod dns-8873/dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0: the server could not find the requested resource (get pods dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0)
Feb  7 15:10:39.247: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-8873/dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0: the server could not find the requested resource (get pods dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0)
Feb  7 15:10:39.250: INFO: Unable to read 10.100.96.170_udp@PTR from pod dns-8873/dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0: the server could not find the requested resource (get pods dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0)
Feb  7 15:10:39.254: INFO: Unable to read 10.100.96.170_tcp@PTR from pod dns-8873/dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0: the server could not find the requested resource (get pods dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0)
Feb  7 15:10:39.258: INFO: Unable to read jessie_udp@dns-test-service.dns-8873.svc.cluster.local from pod dns-8873/dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0: the server could not find the requested resource (get pods dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0)
Feb  7 15:10:39.262: INFO: Unable to read jessie_tcp@dns-test-service.dns-8873.svc.cluster.local from pod dns-8873/dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0: the server could not find the requested resource (get pods dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0)
Feb  7 15:10:39.265: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8873.svc.cluster.local from pod dns-8873/dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0: the server could not find the requested resource (get pods dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0)
Feb  7 15:10:39.270: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8873.svc.cluster.local from pod dns-8873/dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0: the server could not find the requested resource (get pods dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0)
Feb  7 15:10:39.274: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-8873.svc.cluster.local from pod dns-8873/dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0: the server could not find the requested resource (get pods dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0)
Feb  7 15:10:39.278: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-8873.svc.cluster.local from pod dns-8873/dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0: the server could not find the requested resource (get pods dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0)
Feb  7 15:10:39.281: INFO: Unable to read jessie_udp@PodARecord from pod dns-8873/dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0: the server could not find the requested resource (get pods dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0)
Feb  7 15:10:39.285: INFO: Unable to read jessie_tcp@PodARecord from pod dns-8873/dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0: the server could not find the requested resource (get pods dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0)
Feb  7 15:10:39.289: INFO: Unable to read 10.100.96.170_udp@PTR from pod dns-8873/dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0: the server could not find the requested resource (get pods dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0)
Feb  7 15:10:39.300: INFO: Unable to read 10.100.96.170_tcp@PTR from pod dns-8873/dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0: the server could not find the requested resource (get pods dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0)
Feb  7 15:10:39.300: INFO: Lookups using dns-8873/dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0 failed for: [wheezy_udp@dns-test-service.dns-8873.svc.cluster.local wheezy_tcp@dns-test-service.dns-8873.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8873.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8873.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-8873.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-8873.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.100.96.170_udp@PTR 10.100.96.170_tcp@PTR jessie_udp@dns-test-service.dns-8873.svc.cluster.local jessie_tcp@dns-test-service.dns-8873.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8873.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8873.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-8873.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-8873.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.100.96.170_udp@PTR 10.100.96.170_tcp@PTR]

Feb  7 15:10:44.408: INFO: DNS probes using dns-8873/dns-test-c50c287a-122a-468f-99fe-cef522ac7ac0 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 15:10:44.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8873" for this suite.
Feb  7 15:10:52.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 15:10:52.791: INFO: namespace dns-8873 deletion completed in 8.14931592s

• [SLOW TEST:25.877 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
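Note on the step above: each dig invocation in the wheezy/jessie loops checks one record type over one transport (UDP or TCP) and writes an OK marker file that the prober later reads back; the initial "Unable to read" lines simply mean those marker files had not appeared yet. The same A and SRV lookups can be expressed with Go's resolver; a sketch that assumes it runs in-cluster against the same service name pattern:

    package sketch

    import (
        "fmt"
        "net"
    )

    // probeService resolves the A and SRV records the dig loops above
    // check. It must run inside the cluster so the svc.cluster.local
    // domain resolves; the service name is illustrative.
    func probeService(namespace string) error {
        fqdn := fmt.Sprintf("dns-test-service.%s.svc.cluster.local", namespace)

        // A record: the ready endpoints of a headless service, or the
        // cluster IP of a normal one.
        addrs, err := net.LookupHost(fqdn)
        if err != nil {
            return fmt.Errorf("A lookup for %s: %w", fqdn, err)
        }
        fmt.Println("A records:", addrs)

        // SRV record for the named port "http" over TCP, i.e.
        // _http._tcp.dns-test-service.<ns>.svc.cluster.local.
        _, srvs, err := net.LookupSRV("http", "tcp", fqdn)
        if err != nil {
            return fmt.Errorf("SRV lookup for %s: %w", fqdn, err)
        }
        for _, s := range srvs {
            fmt.Printf("SRV: %s:%d\n", s.Target, s.Port)
        }
        return nil
    }
------------------------------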
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 15:10:52.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-2708
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  7 15:10:52.937: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  7 15:11:27.147: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2708 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 15:11:27.147: INFO: >>> kubeConfig: /root/.kube/config
I0207 15:11:27.239930       8 log.go:172] (0xc001a2a6e0) (0xc00214b0e0) Create stream
I0207 15:11:27.240040       8 log.go:172] (0xc001a2a6e0) (0xc00214b0e0) Stream added, broadcasting: 1
I0207 15:11:27.248872       8 log.go:172] (0xc001a2a6e0) Reply frame received for 1
I0207 15:11:27.248903       8 log.go:172] (0xc001a2a6e0) (0xc000fe0000) Create stream
I0207 15:11:27.248910       8 log.go:172] (0xc001a2a6e0) (0xc000fe0000) Stream added, broadcasting: 3
I0207 15:11:27.251846       8 log.go:172] (0xc001a2a6e0) Reply frame received for 3
I0207 15:11:27.251894       8 log.go:172] (0xc001a2a6e0) (0xc00231e000) Create stream
I0207 15:11:27.251910       8 log.go:172] (0xc001a2a6e0) (0xc00231e000) Stream added, broadcasting: 5
I0207 15:11:27.258149       8 log.go:172] (0xc001a2a6e0) Reply frame received for 5
I0207 15:11:27.446287       8 log.go:172] (0xc001a2a6e0) Data frame received for 3
I0207 15:11:27.446317       8 log.go:172] (0xc000fe0000) (3) Data frame handling
I0207 15:11:27.446335       8 log.go:172] (0xc000fe0000) (3) Data frame sent
I0207 15:11:27.589477       8 log.go:172] (0xc001a2a6e0) (0xc000fe0000) Stream removed, broadcasting: 3
I0207 15:11:27.589580       8 log.go:172] (0xc001a2a6e0) (0xc00231e000) Stream removed, broadcasting: 5
I0207 15:11:27.589625       8 log.go:172] (0xc001a2a6e0) Data frame received for 1
I0207 15:11:27.589658       8 log.go:172] (0xc00214b0e0) (1) Data frame handling
I0207 15:11:27.589677       8 log.go:172] (0xc00214b0e0) (1) Data frame sent
I0207 15:11:27.589695       8 log.go:172] (0xc001a2a6e0) (0xc00214b0e0) Stream removed, broadcasting: 1
I0207 15:11:27.589799       8 log.go:172] (0xc001a2a6e0) (0xc00214b0e0) Stream removed, broadcasting: 1
I0207 15:11:27.589818       8 log.go:172] (0xc001a2a6e0) (0xc000fe0000) Stream removed, broadcasting: 3
I0207 15:11:27.589831       8 log.go:172] (0xc001a2a6e0) (0xc00231e000) Stream removed, broadcasting: 5
Feb  7 15:11:27.590: INFO: Found all expected endpoints: [netserver-0]
I0207 15:11:27.590861       8 log.go:172] (0xc001a2a6e0) Go away received
Feb  7 15:11:27.602: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2708 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 15:11:27.603: INFO: >>> kubeConfig: /root/.kube/config
I0207 15:11:27.652203       8 log.go:172] (0xc00208a6e0) (0xc00231e280) Create stream
I0207 15:11:27.652270       8 log.go:172] (0xc00208a6e0) (0xc00231e280) Stream added, broadcasting: 1
I0207 15:11:27.661933       8 log.go:172] (0xc00208a6e0) Reply frame received for 1
I0207 15:11:27.661992       8 log.go:172] (0xc00208a6e0) (0xc000fe0280) Create stream
I0207 15:11:27.662010       8 log.go:172] (0xc00208a6e0) (0xc000fe0280) Stream added, broadcasting: 3
I0207 15:11:27.663921       8 log.go:172] (0xc00208a6e0) Reply frame received for 3
I0207 15:11:27.663942       8 log.go:172] (0xc00208a6e0) (0xc002acc280) Create stream
I0207 15:11:27.663964       8 log.go:172] (0xc00208a6e0) (0xc002acc280) Stream added, broadcasting: 5
I0207 15:11:27.665344       8 log.go:172] (0xc00208a6e0) Reply frame received for 5
I0207 15:11:27.802681       8 log.go:172] (0xc00208a6e0) Data frame received for 3
I0207 15:11:27.802721       8 log.go:172] (0xc000fe0280) (3) Data frame handling
I0207 15:11:27.802833       8 log.go:172] (0xc000fe0280) (3) Data frame sent
I0207 15:11:27.938411       8 log.go:172] (0xc00208a6e0) (0xc000fe0280) Stream removed, broadcasting: 3
I0207 15:11:27.938526       8 log.go:172] (0xc00208a6e0) Data frame received for 1
I0207 15:11:27.938570       8 log.go:172] (0xc00208a6e0) (0xc002acc280) Stream removed, broadcasting: 5
I0207 15:11:27.938600       8 log.go:172] (0xc00231e280) (1) Data frame handling
I0207 15:11:27.938619       8 log.go:172] (0xc00231e280) (1) Data frame sent
I0207 15:11:27.938631       8 log.go:172] (0xc00208a6e0) (0xc00231e280) Stream removed, broadcasting: 1
I0207 15:11:27.938648       8 log.go:172] (0xc00208a6e0) Go away received
I0207 15:11:27.938838       8 log.go:172] (0xc00208a6e0) (0xc00231e280) Stream removed, broadcasting: 1
I0207 15:11:27.938883       8 log.go:172] (0xc00208a6e0) (0xc000fe0280) Stream removed, broadcasting: 3
I0207 15:11:27.938905       8 log.go:172] (0xc00208a6e0) (0xc002acc280) Stream removed, broadcasting: 5
Feb  7 15:11:27.938: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 15:11:27.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2708" for this suite.
Feb  7 15:11:49.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 15:11:50.068: INFO: namespace pod-network-test-2708 deletion completed in 22.119901081s

• [SLOW TEST:57.276 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
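Note on the step above: the ExecWithOptions entries run curl from a host-network test pod against each netserver pod's /hostName endpoint on port 8080, and the spec passes once every expected endpoint has reported its hostname. A minimal Go sketch of that single probe; the caller supplies the pod IP, and the timeout mirrors the curl flags in the log:

    package sketch

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // hostNameOf performs the same check as the logged curl command:
    // GET http://<podIP>:8080/hostName and return the hostname the
    // netserver pod reports.
    func hostNameOf(podIP string) (string, error) {
        client := &http.Client{Timeout: 15 * time.Second}
        resp, err := client.Get(fmt.Sprintf("http://%s:8080/hostName", podIP))
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return "", err
        }
        return string(body), nil
    }
------------------------------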
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 15:11:50.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6596.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6596.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6596.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6596.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6596.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6596.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  7 15:12:02.246: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6596/dns-test-551755c1-486c-48c8-a8cb-847f8acbd2bc: the server could not find the requested resource (get pods dns-test-551755c1-486c-48c8-a8cb-847f8acbd2bc)
Feb  7 15:12:02.250: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6596/dns-test-551755c1-486c-48c8-a8cb-847f8acbd2bc: the server could not find the requested resource (get pods dns-test-551755c1-486c-48c8-a8cb-847f8acbd2bc)
Feb  7 15:12:02.258: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-6596.svc.cluster.local from pod dns-6596/dns-test-551755c1-486c-48c8-a8cb-847f8acbd2bc: the server could not find the requested resource (get pods dns-test-551755c1-486c-48c8-a8cb-847f8acbd2bc)
Feb  7 15:12:02.265: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-6596/dns-test-551755c1-486c-48c8-a8cb-847f8acbd2bc: the server could not find the requested resource (get pods dns-test-551755c1-486c-48c8-a8cb-847f8acbd2bc)
Feb  7 15:12:02.280: INFO: Unable to read jessie_udp@PodARecord from pod dns-6596/dns-test-551755c1-486c-48c8-a8cb-847f8acbd2bc: the server could not find the requested resource (get pods dns-test-551755c1-486c-48c8-a8cb-847f8acbd2bc)
Feb  7 15:12:02.293: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6596/dns-test-551755c1-486c-48c8-a8cb-847f8acbd2bc: the server could not find the requested resource (get pods dns-test-551755c1-486c-48c8-a8cb-847f8acbd2bc)
Feb  7 15:12:02.293: INFO: Lookups using dns-6596/dns-test-551755c1-486c-48c8-a8cb-847f8acbd2bc failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-6596.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb  7 15:12:07.844: INFO: DNS probes using dns-6596/dns-test-551755c1-486c-48c8-a8cb-847f8acbd2bc succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 15:12:07.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6596" for this suite.
Feb  7 15:12:14.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 15:12:14.128: INFO: namespace dns-6596 deletion completed in 6.140907089s

• [SLOW TEST:24.060 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
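Note on the step above: the getent loops succeed only if /etc/hosts inside the probe pod carries entries for the pod's own headless-service hostname. A small Go sketch of an equivalent file check run inside the pod — this re-implements getent's hosts-file scan for illustration and is not how the test itself is written:

    package sketch

    import (
        "bufio"
        "os"
        "strings"
    )

    // hasHostsEntry reports whether /etc/hosts contains an entry for
    // the given hostname, e.g. "dns-querier-1" or its fully qualified
    // service form.
    func hasHostsEntry(name string) (bool, error) {
        f, err := os.Open("/etc/hosts")
        if err != nil {
            return false, err
        }
        defer f.Close()

        scanner := bufio.NewScanner(f)
        for scanner.Scan() {
            line := scanner.Text()
            if i := strings.Index(line, "#"); i >= 0 {
                line = line[:i] // strip comments
            }
            fields := strings.Fields(line)
            if len(fields) < 2 {
                continue // blank line or IP with no hostnames
            }
            for _, h := range fields[1:] { // fields[0] is the IP
                if h == name {
                    return true, nil
                }
            }
        }
        return false, scanner.Err()
    }
------------------------------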
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 15:12:14.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-30140828-3a7c-45cf-80d1-65f1c853c211
STEP: Creating a pod to test consume secrets
Feb  7 15:12:14.264: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2eb76919-a0b2-4016-8b0c-afcb8ea6dcb0" in namespace "projected-1465" to be "success or failure"
Feb  7 15:12:14.270: INFO: Pod "pod-projected-secrets-2eb76919-a0b2-4016-8b0c-afcb8ea6dcb0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.194769ms
Feb  7 15:12:16.292: INFO: Pod "pod-projected-secrets-2eb76919-a0b2-4016-8b0c-afcb8ea6dcb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028067099s
Feb  7 15:12:18.299: INFO: Pod "pod-projected-secrets-2eb76919-a0b2-4016-8b0c-afcb8ea6dcb0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034723294s
Feb  7 15:12:20.309: INFO: Pod "pod-projected-secrets-2eb76919-a0b2-4016-8b0c-afcb8ea6dcb0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045136253s
Feb  7 15:12:22.350: INFO: Pod "pod-projected-secrets-2eb76919-a0b2-4016-8b0c-afcb8ea6dcb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.085900781s
STEP: Saw pod success
Feb  7 15:12:22.350: INFO: Pod "pod-projected-secrets-2eb76919-a0b2-4016-8b0c-afcb8ea6dcb0" satisfied condition "success or failure"
Feb  7 15:12:22.354: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-2eb76919-a0b2-4016-8b0c-afcb8ea6dcb0 container secret-volume-test: 
STEP: delete the pod
Feb  7 15:12:22.434: INFO: Waiting for pod pod-projected-secrets-2eb76919-a0b2-4016-8b0c-afcb8ea6dcb0 to disappear
Feb  7 15:12:22.448: INFO: Pod pod-projected-secrets-2eb76919-a0b2-4016-8b0c-afcb8ea6dcb0 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 15:12:22.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1465" for this suite.
Feb  7 15:12:28.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 15:12:28.657: INFO: namespace projected-1465 deletion completed in 6.177848805s

• [SLOW TEST:14.528 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
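Note on the step above: this spec mounts one secret into a single pod twice, through two separate volumes, and reads both copies. A sketch of that pod shape using projected secret volumes and client-go types; names, paths, and image are illustrative:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // multiVolumeSecretPod mounts the same secret at two paths via two
    // projected volumes, so the container can verify both copies.
    func multiVolumeSecretPod(secretName string) *corev1.Pod {
        projected := func() corev1.VolumeSource {
            return corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
                        },
                    }},
                },
            }
        }
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "secret-volume-test",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"sh", "-c", "cat /etc/secret-1/data-1 /etc/secret-2/data-1"},
                    VolumeMounts: []corev1.VolumeMount{
                        {Name: "secret-volume-1", MountPath: "/etc/secret-1"},
                        {Name: "secret-volume-2", MountPath: "/etc/secret-2"},
                    },
                }},
                Volumes: []corev1.Volume{
                    {Name: "secret-volume-1", VolumeSource: projected()},
                    {Name: "secret-volume-2", VolumeSource: projected()},
                },
            },
        }
    }
------------------------------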
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Feb  7 15:12:28.658: INFO: Running AfterSuite actions on all nodes
Feb  7 15:12:28.658: INFO: Running AfterSuite actions on node 1
Feb  7 15:12:28.658: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 8174.463 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS