I0819 01:59:04.363111 7 e2e.go:243] Starting e2e run "a1cc7ef3-1d45-4f2b-84a5-1babf7a15c67" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1597802330 - Will randomize all specs
Will run 215 of 4413 specs

Aug 19 01:59:05.805: INFO: >>> kubeConfig: /root/.kube/config
Aug 19 01:59:05.852: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 19 01:59:06.054: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 19 01:59:06.212: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 19 01:59:06.212: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 19 01:59:06.212: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 19 01:59:06.261: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 19 01:59:06.261: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 19 01:59:06.261: INFO: e2e test version: v1.15.12
Aug 19 01:59:06.266: INFO: kube-apiserver version: v1.15.12
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:59:06.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
Aug 19 01:59:06.370: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 01:59:06.374: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Aug 19 01:59:07.445: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:59:07.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-220" for this suite.
Aug 19 01:59:15.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:59:15.841: INFO: namespace replication-controller-220 deletion completed in 8.226937596s

• [SLOW TEST:9.567 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
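The spec above caps its namespace at two pods with a ResourceQuota named "condition-test", creates a ReplicationController of the same name that asks for more replicas than the quota allows, and expects a ReplicaFailure condition to appear on the RC and then clear once it is scaled back down. A minimal kubectl sketch of the same sequence on any conformant cluster; the quota-demo namespace and rc.yaml manifest are illustrative stand-ins, not artifacts of this run:

# Cap the namespace at two pods.
kubectl create namespace quota-demo
kubectl create quota condition-test --hard=pods=2 --namespace=quota-demo
# Any RC named condition-test requesting three replicas will exceed the quota;
# rc.yaml here is a hypothetical manifest of that shape.
kubectl create -f rc.yaml --namespace=quota-demo
# While over quota, the RC surfaces a ReplicaFailure condition in its status:
kubectl get rc condition-test --namespace=quota-demo -o jsonpath='{.status.conditions}'
# Scaling down to fit the quota should clear the condition:
kubectl scale rc condition-test --replicas=2 --namespace=quota-demo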
[k8s.io] [sig-node] Events
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:59:15.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Aug 19 01:59:19.996: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-ba58b4a9-323f-46f3-9e11-e4ac44aeabc9,GenerateName:,Namespace:events-4154,SelfLink:/api/v1/namespaces/events-4154/pods/send-events-ba58b4a9-323f-46f3-9e11-e4ac44aeabc9,UID:29077d3b-f8af-4daf-a89a-16c8c4ef715c,ResourceVersion:953601,Generation:0,CreationTimestamp:2020-08-19 01:59:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 939815709,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4kls4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4kls4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-4kls4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x7b21d30} {node.kubernetes.io/unreachable Exists NoExecute 0x7b21d50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:59:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:59:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:59:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:59:15 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.159,StartTime:2020-08-19 01:59:15 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-08-19 01:59:18 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://f3e5157ab68aaa615f814e3cd9936f3c38a2d2a516e965e48dfe90f88cf5e690}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
STEP: checking for scheduler event about the pod
Aug 19 01:59:22.020: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Aug 19 01:59:24.040: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:59:24.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-4154" for this suite.
Aug 19 02:00:04.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:00:04.452: INFO: namespace events-4154 deletion completed in 40.321510758s

• [SLOW TEST:48.608 seconds]
[k8s.io] [sig-node] Events
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
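The spec above waits for two distinct event sources on the same pod: the scheduler's Scheduled event and the kubelet's lifecycle events (Pulled/Created/Started). A hedged way to inspect the same signal by hand, reusing the pod and namespace names printed in this run (they will differ on any other cluster):

# List every event attached to the test pod; expect entries sourced from
# default-scheduler and from the kubelet on the node that ran the pod.
kubectl get events --namespace=events-4154 \
  --field-selector involvedObject.kind=Pod,involvedObject.name=send-events-ba58b4a9-323f-46f3-9e11-e4ac44aeabc9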
[sig-storage] Downward API volume
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:00:04.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 19 02:00:04.689: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b2fb31a0-ef3b-4337-ba79-7a08a3b7d98e" in namespace "downward-api-4600" to be "success or failure"
Aug 19 02:00:04.718: INFO: Pod "downwardapi-volume-b2fb31a0-ef3b-4337-ba79-7a08a3b7d98e": Phase="Pending", Reason="", readiness=false. Elapsed: 29.344876ms
Aug 19 02:00:07.025: INFO: Pod "downwardapi-volume-b2fb31a0-ef3b-4337-ba79-7a08a3b7d98e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.335973276s
Aug 19 02:00:09.061: INFO: Pod "downwardapi-volume-b2fb31a0-ef3b-4337-ba79-7a08a3b7d98e": Phase="Running", Reason="", readiness=true. Elapsed: 4.372005065s
Aug 19 02:00:11.146: INFO: Pod "downwardapi-volume-b2fb31a0-ef3b-4337-ba79-7a08a3b7d98e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.456875357s
STEP: Saw pod success
Aug 19 02:00:11.146: INFO: Pod "downwardapi-volume-b2fb31a0-ef3b-4337-ba79-7a08a3b7d98e" satisfied condition "success or failure"
Aug 19 02:00:11.153: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-b2fb31a0-ef3b-4337-ba79-7a08a3b7d98e container client-container:
STEP: delete the pod
Aug 19 02:00:11.223: INFO: Waiting for pod downwardapi-volume-b2fb31a0-ef3b-4337-ba79-7a08a3b7d98e to disappear
Aug 19 02:00:11.790: INFO: Pod downwardapi-volume-b2fb31a0-ef3b-4337-ba79-7a08a3b7d98e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:00:11.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4600" for this suite.
Aug 19 02:00:18.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:00:18.529: INFO: namespace downward-api-4600 deletion completed in 6.718520542s

• [SLOW TEST:14.076 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
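This spec projects pod metadata into a downward API volume and verifies that the per-item file mode is applied. A minimal manifest sketch of the same shape, assuming a generic busybox image rather than the e2e helper image the framework uses; all names here are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # Print the projected file's mode; the test expects the per-item mode.
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400   # the per-item mode under test
EOF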
[sig-network] Networking Granular Checks: Pods
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:00:18.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-4211
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 19 02:00:18.740: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 19 02:00:47.054: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.162:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4211 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 02:00:47.054: INFO: >>> kubeConfig: /root/.kube/config
I0819 02:00:47.201860 7 log.go:172] (0x88ba310) (0x88ba380) Create stream
I0819 02:00:47.202559 7 log.go:172] (0x88ba310) (0x88ba380) Stream added, broadcasting: 1
I0819 02:00:47.225385 7 log.go:172] (0x88ba310) Reply frame received for 1
I0819 02:00:47.226565 7 log.go:172] (0x88ba310) (0x8cb4000) Create stream
I0819 02:00:47.226723 7 log.go:172] (0x88ba310) (0x8cb4000) Stream added, broadcasting: 3
I0819 02:00:47.228944 7 log.go:172] (0x88ba310) Reply frame received for 3
I0819 02:00:47.229194 7 log.go:172] (0x88ba310) (0x8cb4070) Create stream
I0819 02:00:47.229259 7 log.go:172] (0x88ba310) (0x8cb4070) Stream added, broadcasting: 5
I0819 02:00:47.230261 7 log.go:172] (0x88ba310) Reply frame received for 5
I0819 02:00:47.293314 7 log.go:172] (0x88ba310) Data frame received for 5
I0819 02:00:47.293584 7 log.go:172] (0x88ba310) Data frame received for 3
I0819 02:00:47.293708 7 log.go:172] (0x8cb4000) (3) Data frame handling
I0819 02:00:47.293798 7 log.go:172] (0x8cb4070) (5) Data frame handling
I0819 02:00:47.294077 7 log.go:172] (0x88ba310) Data frame received for 1
I0819 02:00:47.294191 7 log.go:172] (0x88ba380) (1) Data frame handling
I0819 02:00:47.295042 7 log.go:172] (0x88ba380) (1) Data frame sent
I0819 02:00:47.295201 7 log.go:172] (0x8cb4000) (3) Data frame sent
I0819 02:00:47.295858 7 log.go:172] (0x88ba310) Data frame received for 3
I0819 02:00:47.295966 7 log.go:172] (0x8cb4000) (3) Data frame handling
I0819 02:00:47.296655 7 log.go:172] (0x88ba310) (0x88ba380) Stream removed, broadcasting: 1
I0819 02:00:47.298162 7 log.go:172] (0x88ba310) Go away received
I0819 02:00:47.301079 7 log.go:172] (0x88ba310) (0x88ba380) Stream removed, broadcasting: 1
I0819 02:00:47.301309 7 log.go:172] (0x88ba310) (0x8cb4000) Stream removed, broadcasting: 3
I0819 02:00:47.301491 7 log.go:172] (0x88ba310) (0x8cb4070) Stream removed, broadcasting: 5
Aug 19 02:00:47.302: INFO: Found all expected endpoints: [netserver-0]
Aug 19 02:00:47.307: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.12:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4211 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 02:00:47.307: INFO: >>> kubeConfig: /root/.kube/config
I0819 02:00:47.401650 7 log.go:172] (0x8cb8380) (0x8cb83f0) Create stream
I0819 02:00:47.401779 7 log.go:172] (0x8cb8380) (0x8cb83f0) Stream added, broadcasting: 1
I0819 02:00:47.409182 7 log.go:172] (0x8cb8380) Reply frame received for 1
I0819 02:00:47.409452 7 log.go:172] (0x8cb8380) (0x7fe70a0) Create stream
I0819 02:00:47.409597 7 log.go:172] (0x8cb8380) (0x7fe70a0) Stream added, broadcasting: 3
I0819 02:00:47.411126 7 log.go:172] (0x8cb8380) Reply frame received for 3
I0819 02:00:47.411264 7 log.go:172] (0x8cb8380) (0x7fe7110) Create stream
I0819 02:00:47.411369 7 log.go:172] (0x8cb8380) (0x7fe7110) Stream added, broadcasting: 5
I0819 02:00:47.412474 7 log.go:172] (0x8cb8380) Reply frame received for 5
I0819 02:00:47.480610 7 log.go:172] (0x8cb8380) Data frame received for 3
I0819 02:00:47.480888 7 log.go:172] (0x7fe70a0) (3) Data frame handling
I0819 02:00:47.481033 7 log.go:172] (0x8cb8380) Data frame received for 5
I0819 02:00:47.481219 7 log.go:172] (0x7fe7110) (5) Data frame handling
I0819 02:00:47.481400 7 log.go:172] (0x7fe70a0) (3) Data frame sent
I0819 02:00:47.481616 7 log.go:172] (0x8cb8380) Data frame received for 3
I0819 02:00:47.481738 7 log.go:172] (0x7fe70a0) (3) Data frame handling
I0819 02:00:47.482734 7 log.go:172] (0x8cb8380) Data frame received for 1
I0819 02:00:47.482905 7 log.go:172] (0x8cb83f0) (1) Data frame handling
I0819 02:00:47.483074 7 log.go:172] (0x8cb83f0) (1) Data frame sent
I0819 02:00:47.483257 7 log.go:172] (0x8cb8380) (0x8cb83f0) Stream removed, broadcasting: 1
I0819 02:00:47.483402 7 log.go:172] (0x8cb8380) Go away received
I0819 02:00:47.483899 7 log.go:172] (0x8cb8380) (0x8cb83f0) Stream removed, broadcasting: 1
I0819 02:00:47.484028 7 log.go:172] (0x8cb8380) (0x7fe70a0) Stream removed, broadcasting: 3
I0819 02:00:47.484104 7 log.go:172] (0x8cb8380) (0x7fe7110) Stream removed, broadcasting: 5
Aug 19 02:00:47.484: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:00:47.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4211" for this suite.
Aug 19 02:01:11.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:01:11.663: INFO: namespace pod-network-test-4211 deletion completed in 24.161590045s

• [SLOW TEST:53.132 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
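The two ExecWithOptions blocks above are the framework curling each netserver pod's /hostName endpoint from the host-network helper pod; the stream-multiplexing lines are the SPDY exec transport. The same probe can be run by hand with the very command the framework used (the pod IPs are the ones printed in this run and will differ on another cluster):

# From inside the host-network helper pod, hit one netserver endpoint directly:
kubectl exec --namespace=pod-network-test-4211 host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.162:8080/hostName"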
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update
  should support rolling-update to same image [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:01:11.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 19 02:01:12.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-4866'
Aug 19 02:01:17.482: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 19 02:01:17.483: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: rolling-update to same image controller
Aug 19 02:01:17.530: INFO: scanned /root for discovery docs:
Aug 19 02:01:17.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-4866'
Aug 19 02:01:36.599: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 19 02:01:36.599: INFO: stdout: "Created e2e-test-nginx-rc-e063260f04c56849d7c932150e8982c0\nScaling up e2e-test-nginx-rc-e063260f04c56849d7c932150e8982c0 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-e063260f04c56849d7c932150e8982c0 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-e063260f04c56849d7c932150e8982c0 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Aug 19 02:01:36.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-4866'
Aug 19 02:01:37.765: INFO: stderr: ""
Aug 19 02:01:37.765: INFO: stdout: "e2e-test-nginx-rc-e063260f04c56849d7c932150e8982c0-86njh "
Aug 19 02:01:37.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-e063260f04c56849d7c932150e8982c0-86njh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4866'
Aug 19 02:01:38.909: INFO: stderr: ""
Aug 19 02:01:38.910: INFO: stdout: "true"
Aug 19 02:01:38.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-e063260f04c56849d7c932150e8982c0-86njh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4866'
Aug 19 02:01:40.036: INFO: stderr: ""
Aug 19 02:01:40.036: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Aug 19 02:01:40.036: INFO: e2e-test-nginx-rc-e063260f04c56849d7c932150e8982c0-86njh is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Aug 19 02:01:40.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-4866'
Aug 19 02:01:41.188: INFO: stderr: ""
Aug 19 02:01:41.189: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:01:41.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4866" for this suite.
Aug 19 02:01:49.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:01:50.477: INFO: namespace kubectl-4866 deletion completed in 9.27715935s

• [SLOW TEST:38.814 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:01:50.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-40d55e07-01ff-4bed-8bce-a5be22f58543
STEP: Creating a pod to test consume configMaps
Aug 19 02:01:50.830: INFO: Waiting up to 5m0s for pod "pod-configmaps-49ac1d78-c0e8-45ca-aec1-678641736562" in namespace "configmap-1051" to be "success or failure"
Aug 19 02:01:50.840: INFO: Pod "pod-configmaps-49ac1d78-c0e8-45ca-aec1-678641736562": Phase="Pending", Reason="", readiness=false. Elapsed: 9.235136ms
Aug 19 02:01:52.899: INFO: Pod "pod-configmaps-49ac1d78-c0e8-45ca-aec1-678641736562": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068344032s
Aug 19 02:01:55.126: INFO: Pod "pod-configmaps-49ac1d78-c0e8-45ca-aec1-678641736562": Phase="Pending", Reason="", readiness=false. Elapsed: 4.295324628s
Aug 19 02:01:57.131: INFO: Pod "pod-configmaps-49ac1d78-c0e8-45ca-aec1-678641736562": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.300958409s
STEP: Saw pod success
Aug 19 02:01:57.132: INFO: Pod "pod-configmaps-49ac1d78-c0e8-45ca-aec1-678641736562" satisfied condition "success or failure"
Aug 19 02:01:57.137: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-49ac1d78-c0e8-45ca-aec1-678641736562 container configmap-volume-test:
STEP: delete the pod
Aug 19 02:01:57.170: INFO: Waiting for pod pod-configmaps-49ac1d78-c0e8-45ca-aec1-678641736562 to disappear
Aug 19 02:01:57.204: INFO: Pod pod-configmaps-49ac1d78-c0e8-45ca-aec1-678641736562 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:01:57.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1051" for this suite.
Aug 19 02:02:05.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:02:05.362: INFO: namespace configmap-1051 deletion completed in 8.147539884s

• [SLOW TEST:14.882 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
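This spec mounts one ConfigMap into two separate volumes of the same pod and reads the data back from both paths. A minimal sketch of the same layout; the ConfigMap key, pod name, and busybox image are illustrative, not what the framework used:

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-two-volumes-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    # Read the same key through both mounts; both should print value-1.
    command: ["sh", "-c", "cat /etc/cm-one/data-1 /etc/cm-two/data-1"]
    volumeMounts:
    - { name: cm-one, mountPath: /etc/cm-one }
    - { name: cm-two, mountPath: /etc/cm-two }
  volumes:
  - name: cm-one
    configMap: { name: demo-config }
  - name: cm-two
    configMap: { name: demo-config }
EOF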
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition
  creating/deleting custom resource definition objects works [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:02:05.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 02:02:05.434: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:02:06.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7536" for this suite.
Aug 19 02:02:12.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:02:12.795: INFO: namespace custom-resource-definition-7536 deletion completed in 6.235840205s

• [SLOW TEST:7.431 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
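This spec registers a throwaway CustomResourceDefinition against the apiextensions API and removes it again; the log is short because the whole test runs through the API, not through pods. An equivalent hand-driven check; the group, kind, and names below are illustrative (the e2e fixture generates its own), and apiextensions.k8s.io/v1beta1 is used here to match a v1.15 server:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: demos.mygroup.example.com   # must be <plural>.<group>
spec:
  group: mygroup.example.com
  version: v1beta1
  scope: Namespaced
  names:
    plural: demos
    singular: demo
    kind: Demo
EOF
# Deleting the CRD deletes the API surface it registered:
kubectl delete crd demos.mygroup.example.com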
[sig-node] Downward API
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:02:12.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug 19 02:02:12.892: INFO: Waiting up to 5m0s for pod "downward-api-1fcc3d9e-9fa2-45cb-83e6-9026639d6418" in namespace "downward-api-8685" to be "success or failure"
Aug 19 02:02:12.899: INFO: Pod "downward-api-1fcc3d9e-9fa2-45cb-83e6-9026639d6418": Phase="Pending", Reason="", readiness=false. Elapsed: 6.379677ms
Aug 19 02:02:14.924: INFO: Pod "downward-api-1fcc3d9e-9fa2-45cb-83e6-9026639d6418": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031729451s
Aug 19 02:02:16.931: INFO: Pod "downward-api-1fcc3d9e-9fa2-45cb-83e6-9026639d6418": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038609041s
Aug 19 02:02:18.941: INFO: Pod "downward-api-1fcc3d9e-9fa2-45cb-83e6-9026639d6418": Phase="Running", Reason="", readiness=true. Elapsed: 6.048989766s
Aug 19 02:02:21.463: INFO: Pod "downward-api-1fcc3d9e-9fa2-45cb-83e6-9026639d6418": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.57031489s
STEP: Saw pod success
Aug 19 02:02:21.463: INFO: Pod "downward-api-1fcc3d9e-9fa2-45cb-83e6-9026639d6418" satisfied condition "success or failure"
Aug 19 02:02:21.475: INFO: Trying to get logs from node iruya-worker pod downward-api-1fcc3d9e-9fa2-45cb-83e6-9026639d6418 container dapi-container:
STEP: delete the pod
Aug 19 02:02:23.284: INFO: Waiting for pod downward-api-1fcc3d9e-9fa2-45cb-83e6-9026639d6418 to disappear
Aug 19 02:02:23.649: INFO: Pod downward-api-1fcc3d9e-9fa2-45cb-83e6-9026639d6418 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:02:23.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8685" for this suite.
Aug 19 02:02:31.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:02:32.182: INFO: namespace downward-api-8685 deletion completed in 8.517079873s

• [SLOW TEST:19.382 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
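This spec injects the node's IP into the container environment via the downward API and checks the pod's output. The field under test is status.hostIP; a minimal sketch with illustrative names and a generic busybox image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-hostip-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    # Print the injected value; it should match the node's IP in pod status.
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
EOF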
[sig-storage] EmptyDir volumes
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:02:32.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 19 02:02:33.011: INFO: Waiting up to 5m0s for pod "pod-03c900d1-dbfb-4342-bd13-79f5ce7d72d0" in namespace "emptydir-8650" to be "success or failure"
Aug 19 02:02:33.026: INFO: Pod "pod-03c900d1-dbfb-4342-bd13-79f5ce7d72d0": Phase="Pending", Reason="", readiness=false. Elapsed: 14.789168ms
Aug 19 02:02:35.032: INFO: Pod "pod-03c900d1-dbfb-4342-bd13-79f5ce7d72d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020855689s
Aug 19 02:02:37.038: INFO: Pod "pod-03c900d1-dbfb-4342-bd13-79f5ce7d72d0": Phase="Running", Reason="", readiness=true. Elapsed: 4.027347838s
Aug 19 02:02:39.047: INFO: Pod "pod-03c900d1-dbfb-4342-bd13-79f5ce7d72d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.035688729s
STEP: Saw pod success
Aug 19 02:02:39.047: INFO: Pod "pod-03c900d1-dbfb-4342-bd13-79f5ce7d72d0" satisfied condition "success or failure"
Aug 19 02:02:39.053: INFO: Trying to get logs from node iruya-worker2 pod pod-03c900d1-dbfb-4342-bd13-79f5ce7d72d0 container test-container:
STEP: delete the pod
Aug 19 02:02:39.076: INFO: Waiting for pod pod-03c900d1-dbfb-4342-bd13-79f5ce7d72d0 to disappear
Aug 19 02:02:39.094: INFO: Pod pod-03c900d1-dbfb-4342-bd13-79f5ce7d72d0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:02:39.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8650" for this suite.
Aug 19 02:02:45.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:02:45.236: INFO: namespace emptydir-8650 deletion completed in 6.133240175s

• [SLOW TEST:13.053 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
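The (root,0777,tmpfs) variant means: running as root, checking 0777 file permissions, on a memory-backed emptyDir. A rough hand-rolled analogue of the same configuration; the pod name, busybox image, and the exact checks are illustrative, not the framework's mounttest helper:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Confirm the volume is tmpfs, then create a 0777 file and read its mode back.
    command: ["sh", "-c", "mount | grep 'on /test-volume' && touch /test-volume/f && chmod 0777 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs-backed emptyDir, per the test name
EOF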
deployment "test-rollover-deployment" to 2 Aug 19 02:02:56.966: INFO: Make sure deployment "test-rollover-deployment" is complete Aug 19 02:02:56.975: INFO: all replica sets need to contain the pod-template-hash label Aug 19 02:02:56.976: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733399373, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733399373, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733399375, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733399372, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 02:02:59.176: INFO: all replica sets need to contain the pod-template-hash label Aug 19 02:02:59.177: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733399373, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733399373, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733399375, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733399372, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 02:03:00.991: INFO: all replica sets need to contain the pod-template-hash label Aug 19 02:03:00.992: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733399373, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733399373, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733399380, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733399372, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 02:03:02.987: INFO: all replica sets need to contain the pod-template-hash label Aug 19 02:03:02.987: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733399373, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733399373, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733399380, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733399372, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 02:03:04.987: INFO: all replica sets need to contain the pod-template-hash label Aug 19 02:03:04.987: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733399373, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733399373, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733399380, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733399372, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 02:03:07.001: INFO: all replica sets need to contain the pod-template-hash label Aug 19 02:03:07.002: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733399373, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733399373, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733399380, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733399372, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 02:03:08.986: INFO: all replica sets need to contain the pod-template-hash label Aug 19 02:03:08.986: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733399373, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733399373, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733399380, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733399372, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 02:03:11.092: INFO: Aug 19 02:03:11.093: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733399373, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733399373, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733399380, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733399372, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 02:03:12.989: INFO: Aug 19 02:03:12.989: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Aug 19 02:03:13.011: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-5375,SelfLink:/apis/apps/v1/namespaces/deployment-5375/deployments/test-rollover-deployment,UID:431e3cfd-4cf5-4a2e-8069-382228f8ebd2,ResourceVersion:954855,Generation:2,CreationTimestamp:2020-08-19 02:02:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-19 02:02:53 +0000 UTC 2020-08-19 02:02:53 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-19 02:03:11 +0000 UTC 2020-08-19 02:02:52 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Aug 19 02:03:13.018: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-5375,SelfLink:/apis/apps/v1/namespaces/deployment-5375/replicasets/test-rollover-deployment-854595fc44,UID:b18cc57c-5d81-4a02-b49c-0d0751db9f5a,ResourceVersion:954842,Generation:2,CreationTimestamp:2020-08-19 02:02:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 431e3cfd-4cf5-4a2e-8069-382228f8ebd2 0x8a5c1d7 0x8a5c1d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Aug 19 02:03:13.018: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Aug 19 02:03:13.019: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-5375,SelfLink:/apis/apps/v1/namespaces/deployment-5375/replicasets/test-rollover-controller,UID:daded851-7b05-4a07-974d-5d0915e05ab8,ResourceVersion:954853,Generation:2,CreationTimestamp:2020-08-19 02:02:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 431e3cfd-4cf5-4a2e-8069-382228f8ebd2 0x8a5c107 0x8a5c108}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 19 02:03:13.020: INFO: 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-5375,SelfLink:/apis/apps/v1/namespaces/deployment-5375/replicasets/test-rollover-deployment-9b8b997cf,UID:264ef6b4-b799-408c-a241-0b39fae8f13e,ResourceVersion:954758,Generation:2,CreationTimestamp:2020-08-19 02:02:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 431e3cfd-4cf5-4a2e-8069-382228f8ebd2 0x8a5c2a0 0x8a5c2a1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 19 02:03:13.027: INFO: Pod "test-rollover-deployment-854595fc44-2lqrd" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-2lqrd,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-5375,SelfLink:/api/v1/namespaces/deployment-5375/pods/test-rollover-deployment-854595fc44-2lqrd,UID:74ab9dfb-537d-42f3-a8d5-77c9915b8e65,ResourceVersion:954794,Generation:0,CreationTimestamp:2020-08-19 02:02:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 b18cc57c-5d81-4a02-b49c-0d0751db9f5a 
0x8af84a7 0x8af84a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lx9cp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lx9cp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-lx9cp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x8af8520} {node.kubernetes.io/unreachable Exists NoExecute 0x8af8540}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 02:02:55 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 02:03:00 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 02:03:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 02:02:55 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.171,StartTime:2020-08-19 02:02:55 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-19 02:02:59 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://f1a42caf6999899a805164936dfa4bf5786e12f664be46ea9a09b3bca4fe68fa}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 19 02:03:13.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5375" for this suite. 
Aug 19 02:03:27.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:03:27.159: INFO: namespace deployment-5375 deletion completed in 14.125014143s

• [SLOW TEST:41.922 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:03:27.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-ae5e9ffc-555c-4e6c-b69a-bed5fe49af56 in namespace container-probe-2595
Aug 19 02:03:33.859: INFO: Started pod test-webserver-ae5e9ffc-555c-4e6c-b69a-bed5fe49af56 in namespace container-probe-2595
STEP: checking the pod's current state and verifying that restartCount is present
Aug 19 02:03:33.864: INFO: Initial restart count of pod test-webserver-ae5e9ffc-555c-4e6c-b69a-bed5fe49af56 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:07:34.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2595" for this suite.
Aug 19 02:07:41.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:07:41.179: INFO: namespace container-probe-2595 deletion completed in 6.148245487s

• [SLOW TEST:254.019 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:07:41.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Aug 19 02:07:46.363: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:07:47.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-4730" for this suite.
Aug 19 02:08:11.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:08:11.538: INFO: namespace replicaset-4730 deletion completed in 24.126223117s

• [SLOW TEST:30.357 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:08:11.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Aug 19 02:08:11.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5123'
Aug 19 02:08:13.425: INFO: stderr: ""
Aug 19 02:08:13.425: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Aug 19 02:08:14.431: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 02:08:14.432: INFO: Found 0 / 1
Aug 19 02:08:15.433: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 02:08:15.433: INFO: Found 0 / 1
Aug 19 02:08:16.430: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 02:08:16.430: INFO: Found 0 / 1
Aug 19 02:08:17.431: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 02:08:17.431: INFO: Found 0 / 1
Aug 19 02:08:18.432: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 02:08:18.432: INFO: Found 0 / 1
Aug 19 02:08:19.474: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 02:08:19.474: INFO: Found 1 / 1
Aug 19 02:08:19.475: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Aug 19 02:08:19.535: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 02:08:19.535: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Aug 19 02:08:19.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-w6rnf --namespace=kubectl-5123 -p {"metadata":{"annotations":{"x":"y"}}}'
Aug 19 02:08:20.977: INFO: stderr: ""
Aug 19 02:08:20.977: INFO: stdout: "pod/redis-master-w6rnf patched\n"
STEP: checking annotations
Aug 19 02:08:21.002: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 02:08:21.002: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:08:21.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5123" for this suite.
Aug 19 02:08:43.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:08:43.478: INFO: namespace kubectl-5123 deletion completed in 22.466506018s

• [SLOW TEST:31.939 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:08:43.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:08:51.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9176" for this suite.
Aug 19 02:08:59.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:08:59.487: INFO: namespace namespaces-9176 deletion completed in 8.347784366s
STEP: Destroying namespace "nsdeletetest-7290" for this suite.
Aug 19 02:08:59.489: INFO: Namespace nsdeletetest-7290 was already deleted
STEP: Destroying namespace "nsdeletetest-2783" for this suite.
Aug 19 02:09:05.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:09:05.840: INFO: namespace nsdeletetest-2783 deletion completed in 6.349982695s

• [SLOW TEST:22.360 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:09:05.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:09:11.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-266" for this suite.
Aug 19 02:09:17.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:09:17.898: INFO: namespace watch-266 deletion completed in 6.237375914s

• [SLOW TEST:12.057 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:09:17.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-sz4t
STEP: Creating a pod to test atomic-volume-subpath
Aug 19 02:09:18.032: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-sz4t" in namespace "subpath-9286" to be "success or failure"
Aug 19 02:09:18.057: INFO: Pod "pod-subpath-test-configmap-sz4t": Phase="Pending", Reason="", readiness=false. Elapsed: 25.677447ms
Aug 19 02:09:20.063: INFO: Pod "pod-subpath-test-configmap-sz4t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031227722s
Aug 19 02:09:22.069: INFO: Pod "pod-subpath-test-configmap-sz4t": Phase="Running", Reason="", readiness=true. Elapsed: 4.037463201s
Aug 19 02:09:24.076: INFO: Pod "pod-subpath-test-configmap-sz4t": Phase="Running", Reason="", readiness=true. Elapsed: 6.044136554s
Aug 19 02:09:26.087: INFO: Pod "pod-subpath-test-configmap-sz4t": Phase="Running", Reason="", readiness=true. Elapsed: 8.055391222s
Aug 19 02:09:28.094: INFO: Pod "pod-subpath-test-configmap-sz4t": Phase="Running", Reason="", readiness=true. Elapsed: 10.06194197s
Aug 19 02:09:30.103: INFO: Pod "pod-subpath-test-configmap-sz4t": Phase="Running", Reason="", readiness=true. Elapsed: 12.070961544s
Aug 19 02:09:32.108: INFO: Pod "pod-subpath-test-configmap-sz4t": Phase="Running", Reason="", readiness=true. Elapsed: 14.0761574s
Aug 19 02:09:34.145: INFO: Pod "pod-subpath-test-configmap-sz4t": Phase="Running", Reason="", readiness=true. Elapsed: 16.112765709s
Aug 19 02:09:36.229: INFO: Pod "pod-subpath-test-configmap-sz4t": Phase="Running", Reason="", readiness=true. Elapsed: 18.197128057s
Aug 19 02:09:38.234: INFO: Pod "pod-subpath-test-configmap-sz4t": Phase="Running", Reason="", readiness=true. Elapsed: 20.202313668s
Aug 19 02:09:40.240: INFO: Pod "pod-subpath-test-configmap-sz4t": Phase="Running", Reason="", readiness=true. Elapsed: 22.2077767s
Aug 19 02:09:42.245: INFO: Pod "pod-subpath-test-configmap-sz4t": Phase="Running", Reason="", readiness=true. Elapsed: 24.212969294s
Aug 19 02:09:44.250: INFO: Pod "pod-subpath-test-configmap-sz4t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.218594569s
STEP: Saw pod success
Aug 19 02:09:44.250: INFO: Pod "pod-subpath-test-configmap-sz4t" satisfied condition "success or failure"
Aug 19 02:09:44.253: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-sz4t container test-container-subpath-configmap-sz4t: 
STEP: delete the pod
Aug 19 02:09:44.354: INFO: Waiting for pod pod-subpath-test-configmap-sz4t to disappear
Aug 19 02:09:44.388: INFO: Pod pod-subpath-test-configmap-sz4t no longer exists
STEP: Deleting pod pod-subpath-test-configmap-sz4t
Aug 19 02:09:44.388: INFO: Deleting pod "pod-subpath-test-configmap-sz4t" in namespace "subpath-9286"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:09:44.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9286" for this suite.
Aug 19 02:09:50.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:09:50.545: INFO: namespace subpath-9286 deletion completed in 6.143679277s

• [SLOW TEST:32.646 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:09:50.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 02:09:50.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:09:56.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5502" for this suite.
Aug 19 02:10:49.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:10:50.285: INFO: namespace pods-5502 deletion completed in 53.347181955s

• [SLOW TEST:59.739 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:10:50.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 02:10:51.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:10:59.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2823" for this suite.
Aug 19 02:11:45.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:11:45.959: INFO: namespace pods-2823 deletion completed in 46.213618068s

• [SLOW TEST:55.672 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:11:45.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 19 02:11:47.062: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d315fcd0-654f-4272-a026-d4f2cc1aa115" in namespace "downward-api-3515" to be "success or failure"
Aug 19 02:11:47.386: INFO: Pod "downwardapi-volume-d315fcd0-654f-4272-a026-d4f2cc1aa115": Phase="Pending", Reason="", readiness=false. Elapsed: 323.943671ms
Aug 19 02:11:49.393: INFO: Pod "downwardapi-volume-d315fcd0-654f-4272-a026-d4f2cc1aa115": Phase="Pending", Reason="", readiness=false. Elapsed: 2.330875667s
Aug 19 02:11:51.628: INFO: Pod "downwardapi-volume-d315fcd0-654f-4272-a026-d4f2cc1aa115": Phase="Pending", Reason="", readiness=false. Elapsed: 4.565222317s
Aug 19 02:11:53.770: INFO: Pod "downwardapi-volume-d315fcd0-654f-4272-a026-d4f2cc1aa115": Phase="Pending", Reason="", readiness=false. Elapsed: 6.707661117s
Aug 19 02:11:55.777: INFO: Pod "downwardapi-volume-d315fcd0-654f-4272-a026-d4f2cc1aa115": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.714989239s
STEP: Saw pod success
Aug 19 02:11:55.778: INFO: Pod "downwardapi-volume-d315fcd0-654f-4272-a026-d4f2cc1aa115" satisfied condition "success or failure"
Aug 19 02:11:55.784: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-d315fcd0-654f-4272-a026-d4f2cc1aa115 container client-container: 
STEP: delete the pod
Aug 19 02:11:56.052: INFO: Waiting for pod downwardapi-volume-d315fcd0-654f-4272-a026-d4f2cc1aa115 to disappear
Aug 19 02:11:56.249: INFO: Pod downwardapi-volume-d315fcd0-654f-4272-a026-d4f2cc1aa115 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:11:56.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3515" for this suite.
Aug 19 02:12:04.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:12:04.430: INFO: namespace downward-api-3515 deletion completed in 8.172067735s

• [SLOW TEST:18.468 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:12:04.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-75/configmap-test-7644acc6-085e-4fee-9d83-e3cd2c019a26
STEP: Creating a pod to test consume configMaps
Aug 19 02:12:05.106: INFO: Waiting up to 5m0s for pod "pod-configmaps-fcb6daab-c8f0-4a39-8f89-df7555edbe2d" in namespace "configmap-75" to be "success or failure"
Aug 19 02:12:05.256: INFO: Pod "pod-configmaps-fcb6daab-c8f0-4a39-8f89-df7555edbe2d": Phase="Pending", Reason="", readiness=false. Elapsed: 149.597812ms
Aug 19 02:12:07.345: INFO: Pod "pod-configmaps-fcb6daab-c8f0-4a39-8f89-df7555edbe2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.238588001s
Aug 19 02:12:09.599: INFO: Pod "pod-configmaps-fcb6daab-c8f0-4a39-8f89-df7555edbe2d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.492925484s
Aug 19 02:12:11.638: INFO: Pod "pod-configmaps-fcb6daab-c8f0-4a39-8f89-df7555edbe2d": Phase="Running", Reason="", readiness=true. Elapsed: 6.531586885s
Aug 19 02:12:13.645: INFO: Pod "pod-configmaps-fcb6daab-c8f0-4a39-8f89-df7555edbe2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.538617809s
STEP: Saw pod success
Aug 19 02:12:13.645: INFO: Pod "pod-configmaps-fcb6daab-c8f0-4a39-8f89-df7555edbe2d" satisfied condition "success or failure"
Aug 19 02:12:13.650: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-fcb6daab-c8f0-4a39-8f89-df7555edbe2d container env-test: 
STEP: delete the pod
Aug 19 02:12:13.729: INFO: Waiting for pod pod-configmaps-fcb6daab-c8f0-4a39-8f89-df7555edbe2d to disappear
Aug 19 02:12:13.733: INFO: Pod pod-configmaps-fcb6daab-c8f0-4a39-8f89-df7555edbe2d no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:12:13.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-75" for this suite.
Aug 19 02:12:21.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:12:22.075: INFO: namespace configmap-75 deletion completed in 8.334154184s

• [SLOW TEST:17.644 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:12:22.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 02:12:22.523: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
alternatives.log
containers/
(the same two-entry listing is returned for each of the remaining proxied requests, (1) through (19))
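(For reference — the listing above is the kubelet's /logs/ directory served through the apiserver's node-proxy subresource. A minimal way to issue the same request by hand, reusing the node name and kubeconfig from this run, is:

  kubectl --kubeconfig=/root/.kube/config get --raw /api/v1/nodes/iruya-worker/proxy/logs/

The URL path is exactly the one logged above; `kubectl get --raw` simply performs an authenticated GET against it.)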
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0819 02:13:00.356251       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 19 02:13:00.357: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:13:00.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1611" for this suite.
Aug 19 02:13:08.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:13:08.583: INFO: namespace gc-1611 deletion completed in 8.220104561s

• [SLOW TEST:39.082 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
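(The deletion exercised by this test can be reproduced from the CLI; the deployment name below is illustrative, not the one created by the suite:

  # delete the Deployment but orphan its ReplicaSet instead of cascading
  kubectl delete deployment my-deploy --cascade=orphan   # kubectl v1.20+
  kubectl delete deployment my-deploy --cascade=false    # older clients, e.g. the v1.15 vintage above
  kubectl get rs                                         # the ReplicaSet should still be listed

Both flags set deleteOptions.propagationPolicy to Orphan on the DELETE request, which is exactly what the garbage collector is expected to honor here.)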
SSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:13:08.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-1410
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 19 02:13:08.701: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 19 02:13:42.873: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.181 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1410 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 02:13:42.873: INFO: >>> kubeConfig: /root/.kube/config
I0819 02:13:42.978112       7 log.go:172] (0x6ded960) (0x6deda40) Create stream
I0819 02:13:42.978520       7 log.go:172] (0x6ded960) (0x6deda40) Stream added, broadcasting: 1
I0819 02:13:42.983791       7 log.go:172] (0x6ded960) Reply frame received for 1
I0819 02:13:42.984030       7 log.go:172] (0x6ded960) (0x6dedb20) Create stream
I0819 02:13:42.984137       7 log.go:172] (0x6ded960) (0x6dedb20) Stream added, broadcasting: 3
I0819 02:13:42.985870       7 log.go:172] (0x6ded960) Reply frame received for 3
I0819 02:13:42.986101       7 log.go:172] (0x6ded960) (0x6dedc00) Create stream
I0819 02:13:42.986206       7 log.go:172] (0x6ded960) (0x6dedc00) Stream added, broadcasting: 5
I0819 02:13:42.988038       7 log.go:172] (0x6ded960) Reply frame received for 5
I0819 02:13:44.055027       7 log.go:172] (0x6ded960) Data frame received for 5
I0819 02:13:44.055308       7 log.go:172] (0x6dedc00) (5) Data frame handling
I0819 02:13:44.055525       7 log.go:172] (0x6ded960) Data frame received for 3
I0819 02:13:44.055715       7 log.go:172] (0x6dedb20) (3) Data frame handling
I0819 02:13:44.055890       7 log.go:172] (0x6dedb20) (3) Data frame sent
I0819 02:13:44.056006       7 log.go:172] (0x6ded960) Data frame received for 3
I0819 02:13:44.056150       7 log.go:172] (0x6dedb20) (3) Data frame handling
I0819 02:13:44.057037       7 log.go:172] (0x6ded960) Data frame received for 1
I0819 02:13:44.057200       7 log.go:172] (0x6deda40) (1) Data frame handling
I0819 02:13:44.057351       7 log.go:172] (0x6deda40) (1) Data frame sent
I0819 02:13:44.057506       7 log.go:172] (0x6ded960) (0x6deda40) Stream removed, broadcasting: 1
I0819 02:13:44.057666       7 log.go:172] (0x6ded960) Go away received
I0819 02:13:44.058220       7 log.go:172] (0x6ded960) (0x6deda40) Stream removed, broadcasting: 1
I0819 02:13:44.058358       7 log.go:172] (0x6ded960) (0x6dedb20) Stream removed, broadcasting: 3
I0819 02:13:44.058451       7 log.go:172] (0x6ded960) (0x6dedc00) Stream removed, broadcasting: 5
Aug 19 02:13:44.058: INFO: Found all expected endpoints: [netserver-0]
Aug 19 02:13:44.064: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.30 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1410 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 02:13:44.064: INFO: >>> kubeConfig: /root/.kube/config
I0819 02:13:44.164164       7 log.go:172] (0x6820540) (0x68208c0) Create stream
I0819 02:13:44.164444       7 log.go:172] (0x6820540) (0x68208c0) Stream added, broadcasting: 1
I0819 02:13:44.168120       7 log.go:172] (0x6820540) Reply frame received for 1
I0819 02:13:44.168384       7 log.go:172] (0x6820540) (0x6820c40) Create stream
I0819 02:13:44.168521       7 log.go:172] (0x6820540) (0x6820c40) Stream added, broadcasting: 3
I0819 02:13:44.170783       7 log.go:172] (0x6820540) Reply frame received for 3
I0819 02:13:44.170957       7 log.go:172] (0x6820540) (0x6f309a0) Create stream
I0819 02:13:44.171063       7 log.go:172] (0x6820540) (0x6f309a0) Stream added, broadcasting: 5
I0819 02:13:44.172476       7 log.go:172] (0x6820540) Reply frame received for 5
I0819 02:13:45.230608       7 log.go:172] (0x6820540) Data frame received for 3
I0819 02:13:45.230911       7 log.go:172] (0x6820c40) (3) Data frame handling
I0819 02:13:45.231124       7 log.go:172] (0x6820540) Data frame received for 5
I0819 02:13:45.231319       7 log.go:172] (0x6f309a0) (5) Data frame handling
I0819 02:13:45.231488       7 log.go:172] (0x6820c40) (3) Data frame sent
I0819 02:13:45.231593       7 log.go:172] (0x6820540) Data frame received for 3
I0819 02:13:45.231676       7 log.go:172] (0x6820c40) (3) Data frame handling
I0819 02:13:45.232958       7 log.go:172] (0x6820540) Data frame received for 1
I0819 02:13:45.233097       7 log.go:172] (0x68208c0) (1) Data frame handling
I0819 02:13:45.233225       7 log.go:172] (0x68208c0) (1) Data frame sent
I0819 02:13:45.233400       7 log.go:172] (0x6820540) (0x68208c0) Stream removed, broadcasting: 1
I0819 02:13:45.233576       7 log.go:172] (0x6820540) Go away received
I0819 02:13:45.233955       7 log.go:172] (0x6820540) (0x68208c0) Stream removed, broadcasting: 1
I0819 02:13:45.234145       7 log.go:172] (0x6820540) (0x6820c40) Stream removed, broadcasting: 3
I0819 02:13:45.234262       7 log.go:172] (0x6820540) (0x6f309a0) Stream removed, broadcasting: 5
Aug 19 02:13:45.234: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:13:45.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1410" for this suite.
Aug 19 02:14:09.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:14:09.424: INFO: namespace pod-network-test-1410 deletion completed in 24.176715822s

• [SLOW TEST:60.840 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
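(The traffic driving the streams above is a plain UDP probe run inside the hostexec container; reusing the pod, namespace, and endpoint IP from this run, it is equivalent to:

  kubectl exec host-test-container-pod -n pod-network-test-1410 -c hostexec -- \
    /bin/sh -c "echo hostName | nc -w 1 -u 10.244.1.181 8081 | grep -v '^\s*$'"

The netserver pod echoes its hostname back over UDP, so any non-empty reply marks that endpoint as reachable; the pod IP is of course ephemeral to this run.)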
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:14:09.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 19 02:14:09.501: INFO: Waiting up to 5m0s for pod "downwardapi-volume-afef057c-ee41-4c86-9ef2-43ba19be398d" in namespace "downward-api-5419" to be "success or failure"
Aug 19 02:14:09.509: INFO: Pod "downwardapi-volume-afef057c-ee41-4c86-9ef2-43ba19be398d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.54385ms
Aug 19 02:14:11.929: INFO: Pod "downwardapi-volume-afef057c-ee41-4c86-9ef2-43ba19be398d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.428106691s
Aug 19 02:14:13.937: INFO: Pod "downwardapi-volume-afef057c-ee41-4c86-9ef2-43ba19be398d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.435933899s
Aug 19 02:14:15.982: INFO: Pod "downwardapi-volume-afef057c-ee41-4c86-9ef2-43ba19be398d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.480772547s
STEP: Saw pod success
Aug 19 02:14:15.982: INFO: Pod "downwardapi-volume-afef057c-ee41-4c86-9ef2-43ba19be398d" satisfied condition "success or failure"
Aug 19 02:14:15.987: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-afef057c-ee41-4c86-9ef2-43ba19be398d container client-container: 
STEP: delete the pod
Aug 19 02:14:16.242: INFO: Waiting for pod downwardapi-volume-afef057c-ee41-4c86-9ef2-43ba19be398d to disappear
Aug 19 02:14:16.461: INFO: Pod downwardapi-volume-afef057c-ee41-4c86-9ef2-43ba19be398d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:14:16.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5419" for this suite.
Aug 19 02:14:24.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:14:25.141: INFO: namespace downward-api-5419 deletion completed in 8.668017007s

• [SLOW TEST:15.713 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
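(What this test asserts: when a container declares no cpu limit, the downward API resolves limits.cpu to the node's allocatable CPU instead of failing. A minimal pod that surfaces the same value through a downward API volume — the name and image are illustrative, not from the suite:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-cpu-demo            # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: cpu_limit
          resourceFieldRef:              # no limit set on the container,
            containerName: client-container
            resource: limits.cpu         # so this falls back to node-allocatable CPU
  EOF

With no limit declared, the file contains the node-allocatable CPU count, mirroring the default the test checks for.)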
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:14:25.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 02:14:25.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5784'
Aug 19 02:14:36.470: INFO: stderr: ""
Aug 19 02:14:36.470: INFO: stdout: "replicationcontroller/redis-master created\n"
Aug 19 02:14:36.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5784'
Aug 19 02:14:38.268: INFO: stderr: ""
Aug 19 02:14:38.268: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Aug 19 02:14:39.293: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 02:14:39.293: INFO: Found 0 / 1
Aug 19 02:14:40.363: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 02:14:40.363: INFO: Found 0 / 1
Aug 19 02:14:41.303: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 02:14:41.303: INFO: Found 0 / 1
Aug 19 02:14:42.390: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 02:14:42.390: INFO: Found 1 / 1
Aug 19 02:14:42.390: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 19 02:14:42.397: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 02:14:42.397: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 19 02:14:42.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-xn862 --namespace=kubectl-5784'
Aug 19 02:14:44.180: INFO: stderr: ""
Aug 19 02:14:44.180: INFO: stdout: "Name:           redis-master-xn862\nNamespace:      kubectl-5784\nPriority:       0\nNode:           iruya-worker/172.18.0.9\nStart Time:     Wed, 19 Aug 2020 02:14:36 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    <none>\nStatus:         Running\nIP:             10.244.1.183\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   containerd://ac4717e40286b0b54f000a9a76293e3e6c441dc9efcb2dcb4e96ae448c710cce\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Wed, 19 Aug 2020 02:14:40 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-b6nfd (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-b6nfd:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-b6nfd\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                   Message\n  ----    ------     ----  ----                   -------\n  Normal  Scheduled  8s    default-scheduler      Successfully assigned kubectl-5784/redis-master-xn862 to iruya-worker\n  Normal  Pulled     7s    kubelet, iruya-worker  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    4s    kubelet, iruya-worker  Created container redis-master\n  Normal  Started    4s    kubelet, iruya-worker  Started container redis-master\n"
Aug 19 02:14:44.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-5784'
Aug 19 02:14:45.440: INFO: stderr: ""
Aug 19 02:14:45.440: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-5784\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  9s    replication-controller  Created pod: redis-master-xn862\n"
Aug 19 02:14:45.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-5784'
Aug 19 02:14:46.603: INFO: stderr: ""
Aug 19 02:14:46.603: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-5784\nLabels:            app=redis\n                   role=master\nAnnotations:       <none>\nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.98.208.55\nPort:              <unset>  6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.244.1.183:6379\nSession Affinity:  None\nEvents:            <none>\n"
Aug 19 02:14:46.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane'
Aug 19 02:14:47.833: INFO: stderr: ""
Aug 19 02:14:47.833: INFO: stdout: "Name:               iruya-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 15 Aug 2020 09:34:51 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Wed, 19 Aug 2020 02:14:12 +0000   Sat, 15 Aug 2020 09:34:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Wed, 19 Aug 2020 02:14:12 +0000   Sat, 15 Aug 2020 09:34:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Wed, 19 Aug 2020 02:14:12 +0000   Sat, 15 Aug 2020 09:34:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Wed, 19 Aug 2020 02:14:12 +0000   Sat, 15 Aug 2020 09:35:31 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.7\n  Hostname:    iruya-control-plane\nCapacity:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759872Ki\n pods:               110\nAllocatable:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759872Ki\n pods:               110\nSystem Info:\n Machine ID:                 3ed9130db08840259d2231bd97220883\n System UUID:                e52cc602-b019-45cd-b06f-235cc5705532\n Boot ID:                    11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version:             4.15.0-109-generic\n OS Image:                   Ubuntu 20.04 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  containerd://1.4.0-beta.1-85-g334f567e\n Kubelet Version:            v1.15.12\n Kube-Proxy Version:         v1.15.12\nPodCIDR:                     10.244.0.0/24\nNon-terminated Pods:         (9 in total)\n  Namespace                  Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                           ------------  ----------  ---------------  -------------  ---\n  kube-system                coredns-5d4dd4b4db-6krdd                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     3d16h\n  kube-system                coredns-5d4dd4b4db-htp88                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     3d16h\n  kube-system                etcd-iruya-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3d16h\n  kube-system                kindnet-gvnsh                                  100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      3d16h\n  kube-system                kube-apiserver-iruya-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         3d16h\n  kube-system                kube-controller-manager-iruya-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         3d16h\n  kube-system                kube-proxy-ndl9h                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3d16h\n  kube-system                kube-scheduler-iruya-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         3d16h\n  local-path-storage         local-path-provisioner-668779bd7-g227z         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3d16h\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:              <none>\n"
Aug 19 02:14:47.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-5784'
Aug 19 02:14:49.526: INFO: stderr: ""
Aug 19 02:14:49.526: INFO: stdout: "Name:         kubectl-5784\nLabels:       e2e-framework=kubectl\n              e2e-run=a1cc7ef3-1d45-4f2b-84a5-1babf7a15c67\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:14:49.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5784" for this suite.
Aug 19 02:15:17.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:15:18.085: INFO: namespace kubectl-5784 deletion completed in 28.486340426s

• [SLOW TEST:52.942 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
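For readers replaying this flow by hand, a minimal sketch; the namespace, manifest files, and pod/node names are hypothetical stand-ins for the generated ones in the log:

# assumes a reachable cluster and kubectl on PATH
kubectl create namespace describe-demo
kubectl create -f redis-master-rc.yaml --namespace=describe-demo     # an RC named redis-master
kubectl create -f redis-master-svc.yaml --namespace=describe-demo    # a matching ClusterIP service
kubectl get pods -l app=redis --namespace=describe-demo              # wait for 1/1 Running
kubectl describe pod <redis-master-pod> --namespace=describe-demo
kubectl describe rc redis-master --namespace=describe-demo
kubectl describe service redis-master --namespace=describe-demo
kubectl describe node <any-node>
kubectl describe namespace describe-demo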
SSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:15:18.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Aug 19 02:15:19.565: INFO: created pod pod-service-account-defaultsa
Aug 19 02:15:19.566: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Aug 19 02:15:19.625: INFO: created pod pod-service-account-mountsa
Aug 19 02:15:19.625: INFO: pod pod-service-account-mountsa service account token volume mount: true
Aug 19 02:15:20.177: INFO: created pod pod-service-account-nomountsa
Aug 19 02:15:20.177: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Aug 19 02:15:20.249: INFO: created pod pod-service-account-defaultsa-mountspec
Aug 19 02:15:20.249: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Aug 19 02:15:20.410: INFO: created pod pod-service-account-mountsa-mountspec
Aug 19 02:15:20.410: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Aug 19 02:15:20.827: INFO: created pod pod-service-account-nomountsa-mountspec
Aug 19 02:15:20.827: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Aug 19 02:15:20.835: INFO: created pod pod-service-account-defaultsa-nomountspec
Aug 19 02:15:20.835: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Aug 19 02:15:20.885: INFO: created pod pod-service-account-mountsa-nomountspec
Aug 19 02:15:20.885: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Aug 19 02:15:21.635: INFO: created pod pod-service-account-nomountsa-nomountspec
Aug 19 02:15:21.635: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:15:21.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-965" for this suite.
Aug 19 02:16:03.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:16:04.476: INFO: namespace svcaccounts-965 deletion completed in 42.322045836s

• [SLOW TEST:46.391 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
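The nine pods above cover the combinations of ServiceAccount-level and pod-level automount settings. A minimal sketch of the opt-out itself (names hypothetical); when both fields are set, the pod-level field takes precedence:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa                        # hypothetical
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-nomountsa-nomountspec         # hypothetical
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false     # pod-level setting wins over the SA's
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1
EOF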
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:16:04.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Aug 19 02:16:11.230: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5376 pod-service-account-307830f6-daac-4acb-b0ea-d9f72d76feaf -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Aug 19 02:16:12.878: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5376 pod-service-account-307830f6-daac-4acb-b0ea-d9f72d76feaf -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Aug 19 02:16:14.257: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5376 pod-service-account-307830f6-daac-4acb-b0ea-d9f72d76feaf -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:16:15.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5376" for this suite.
Aug 19 02:16:21.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:16:22.035: INFO: namespace svcaccounts-5376 deletion completed in 6.346188521s

• [SLOW TEST:17.556 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
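The exec calls below mirror the reads the suite performs. A sketch, assuming a running pod named token-demo with the default service account token mounted at the standard path:

kubectl exec token-demo -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
kubectl exec token-demo -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
kubectl exec token-demo -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace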
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:16:22.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-bd4314d5-08c3-4de8-ba71-42fadd553e01
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-bd4314d5-08c3-4de8-ba71-42fadd553e01
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:16:28.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6709" for this suite.
Aug 19 02:16:52.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:16:52.860: INFO: namespace projected-6709 deletion completed in 24.471694291s

• [SLOW TEST:30.819 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
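The pod above keeps running while the configMap behind its projected volume is edited, and the test simply waits for the mounted file to change. A minimal sketch of the same behavior (names hypothetical); the kubelet refreshes projected configMap files on its periodic sync, so no pod restart is needed:

kubectl create configmap projected-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo-pod
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/projected/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/projected
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: projected-demo
EOF
# edit the map in place and watch the mounted file follow
kubectl create configmap projected-demo --from-literal=data-1=value-2 \
  --dry-run -o yaml | kubectl apply -f -
kubectl logs -f projected-demo-pod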
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:16:52.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Aug 19 02:16:52.952: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 19 02:16:52.973: INFO: Waiting for terminating namespaces to be deleted...
Aug 19 02:16:52.979: INFO: 
Logging pods the kubelet thinks are on node iruya-worker before test
Aug 19 02:16:52.994: INFO: kube-proxy-5zw8s from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container status recorded)
Aug 19 02:16:52.994: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 19 02:16:52.994: INFO: kindnet-nkf5n from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container status recorded)
Aug 19 02:16:52.994: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 19 02:16:52.995: INFO: 
Logging pods the kubelet thinks are on node iruya-worker2 before test
Aug 19 02:16:53.005: INFO: kube-proxy-b98qt from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container status recorded)
Aug 19 02:16:53.005: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 19 02:16:53.005: INFO: kindnet-xsdzz from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container status recorded)
Aug 19 02:16:53.005: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-worker
STEP: verifying the node has the label node iruya-worker2
Aug 19 02:16:53.170: INFO: Pod kindnet-nkf5n requesting resource cpu=100m on Node iruya-worker
Aug 19 02:16:53.170: INFO: Pod kindnet-xsdzz requesting resource cpu=100m on Node iruya-worker2
Aug 19 02:16:53.170: INFO: Pod kube-proxy-5zw8s requesting resource cpu=0m on Node iruya-worker
Aug 19 02:16:53.170: INFO: Pod kube-proxy-b98qt requesting resource cpu=0m on Node iruya-worker2
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-4a50d35e-1e49-425c-9f4a-17db4acd1c8c.162c89bd1fb53e6e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6151/filler-pod-4a50d35e-1e49-425c-9f4a-17db4acd1c8c to iruya-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-4a50d35e-1e49-425c-9f4a-17db4acd1c8c.162c89bd732dc8ba], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-4a50d35e-1e49-425c-9f4a-17db4acd1c8c.162c89bdda2d58df], Reason = [Created], Message = [Created container filler-pod-4a50d35e-1e49-425c-9f4a-17db4acd1c8c]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-4a50d35e-1e49-425c-9f4a-17db4acd1c8c.162c89bdeaafc206], Reason = [Started], Message = [Started container filler-pod-4a50d35e-1e49-425c-9f4a-17db4acd1c8c]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b45f0bea-6851-4ac8-bdfb-9a5554f05449.162c89bd23ea6635], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6151/filler-pod-b45f0bea-6851-4ac8-bdfb-9a5554f05449 to iruya-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b45f0bea-6851-4ac8-bdfb-9a5554f05449.162c89bdbaf33706], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b45f0bea-6851-4ac8-bdfb-9a5554f05449.162c89bdfec40b74], Reason = [Created], Message = [Created container filler-pod-b45f0bea-6851-4ac8-bdfb-9a5554f05449]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b45f0bea-6851-4ac8-bdfb-9a5554f05449.162c89be110a52ba], Reason = [Started], Message = [Started container filler-pod-b45f0bea-6851-4ac8-bdfb-9a5554f05449]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.162c89be923c0af3], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:17:00.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6151" for this suite.
Aug 19 02:17:14.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:17:15.150: INFO: namespace sched-pred-6151 deletion completed in 14.246936287s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:22.290 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
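The Warning event above is the expected result: two filler pods consume most of each worker's allocatable CPU, and a third pod then asks for more than remains. A minimal sketch of provoking the same FailedScheduling event (names hypothetical, request deliberately oversized):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "1000"              # far beyond any node's allocatable CPU
EOF
kubectl describe pod additional-pod    # Events: FailedScheduling ... Insufficient cpu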
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:17:15.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-62315f7c-2857-4671-a500-e9562aaf934f
Aug 19 02:17:15.808: INFO: Pod name my-hostname-basic-62315f7c-2857-4671-a500-e9562aaf934f: Found 0 pods out of 1
Aug 19 02:17:20.816: INFO: Pod name my-hostname-basic-62315f7c-2857-4671-a500-e9562aaf934f: Found 1 pod out of 1
Aug 19 02:17:20.816: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-62315f7c-2857-4671-a500-e9562aaf934f" are running
Aug 19 02:17:27.269: INFO: Pod "my-hostname-basic-62315f7c-2857-4671-a500-e9562aaf934f-r97h8" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-19 02:17:16 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-19 02:17:16 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-62315f7c-2857-4671-a500-e9562aaf934f]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-19 02:17:16 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-62315f7c-2857-4671-a500-e9562aaf934f]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-19 02:17:15 +0000 UTC Reason: Message:}])
Aug 19 02:17:27.270: INFO: Trying to dial the pod
Aug 19 02:17:32.608: INFO: Controller my-hostname-basic-62315f7c-2857-4671-a500-e9562aaf934f: Got expected result from replica 1 [my-hostname-basic-62315f7c-2857-4671-a500-e9562aaf934f-r97h8]: "my-hostname-basic-62315f7c-2857-4671-a500-e9562aaf934f-r97h8", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:17:32.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3265" for this suite.
Aug 19 02:17:38.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:17:38.792: INFO: namespace replication-controller-3265 deletion completed in 6.174728373s

• [SLOW TEST:23.641 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
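A sketch of the controller under test (the suite generates a UUID-suffixed name; this one is hypothetical). Each replica runs serve-hostname, which answers HTTP requests with its own pod name, so dialing a replica and comparing the response to the pod name confirms that replica is serving:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: serve-hostname
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
        ports:
        - containerPort: 9376
EOF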
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:17:38.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-131500a6-2f86-45c9-bb22-f8f8e1236a10
STEP: Creating a pod to test consume secrets
Aug 19 02:17:38.966: INFO: Waiting up to 5m0s for pod "pod-secrets-bf09682d-ff5a-4837-ac84-750ed9b43a92" in namespace "secrets-2361" to be "success or failure"
Aug 19 02:17:39.074: INFO: Pod "pod-secrets-bf09682d-ff5a-4837-ac84-750ed9b43a92": Phase="Pending", Reason="", readiness=false. Elapsed: 107.789258ms
Aug 19 02:17:41.079: INFO: Pod "pod-secrets-bf09682d-ff5a-4837-ac84-750ed9b43a92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112709789s
Aug 19 02:17:43.146: INFO: Pod "pod-secrets-bf09682d-ff5a-4837-ac84-750ed9b43a92": Phase="Pending", Reason="", readiness=false. Elapsed: 4.18028564s
Aug 19 02:17:45.343: INFO: Pod "pod-secrets-bf09682d-ff5a-4837-ac84-750ed9b43a92": Phase="Pending", Reason="", readiness=false. Elapsed: 6.377204939s
Aug 19 02:17:47.458: INFO: Pod "pod-secrets-bf09682d-ff5a-4837-ac84-750ed9b43a92": Phase="Running", Reason="", readiness=true. Elapsed: 8.491972293s
Aug 19 02:17:49.470: INFO: Pod "pod-secrets-bf09682d-ff5a-4837-ac84-750ed9b43a92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.504003544s
STEP: Saw pod success
Aug 19 02:17:49.470: INFO: Pod "pod-secrets-bf09682d-ff5a-4837-ac84-750ed9b43a92" satisfied condition "success or failure"
Aug 19 02:17:49.486: INFO: Trying to get logs from node iruya-worker pod pod-secrets-bf09682d-ff5a-4837-ac84-750ed9b43a92 container secret-env-test: 
STEP: delete the pod
Aug 19 02:17:50.262: INFO: Waiting for pod pod-secrets-bf09682d-ff5a-4837-ac84-750ed9b43a92 to disappear
Aug 19 02:17:50.287: INFO: Pod pod-secrets-bf09682d-ff5a-4837-ac84-750ed9b43a92 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:17:50.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2361" for this suite.
Aug 19 02:17:56.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:17:56.582: INFO: namespace secrets-2361 deletion completed in 6.282702281s

• [SLOW TEST:17.788 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
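A minimal sketch of the pattern this test exercises (names hypothetical): the secret value is injected through an environment variable, the pod runs env once and exits, and the expected value can then be matched in its logs:

kubectl create secret generic secret-test --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["env"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
EOF
kubectl logs pod-secrets-demo    # once Succeeded, output includes SECRET_DATA=value-1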
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:17:56.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Aug 19 02:17:57.003: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix184241454/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:17:57.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7827" for this suite.
Aug 19 02:18:05.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:18:05.456: INFO: namespace kubectl-7827 deletion completed in 6.786128309s

• [SLOW TEST:8.871 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
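A sketch of the same check done by hand; the socket path is arbitrary, and curl needs 7.40+ for --unix-socket:

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/    # returns the APIVersions object
kill $!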
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:18:05.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:18:14.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2429" for this suite.
Aug 19 02:19:05.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:19:05.916: INFO: namespace kubelet-test-2429 deletion completed in 50.708422238s

• [SLOW TEST:60.458 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
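A minimal sketch of the hostAliases feature exercised above (names hypothetical); the kubelet appends the entries to the container's /etc/hosts:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: main
    image: busybox
    command: ["cat", "/etc/hosts"]
EOF
kubectl logs hostaliases-demo    # /etc/hosts gains: 127.0.0.1  foo.local  bar.local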
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:19:05.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 19 02:19:06.096: INFO: Waiting up to 5m0s for pod "downwardapi-volume-838972df-8d0a-4a72-a87f-7f6d7b06d935" in namespace "downward-api-7694" to be "success or failure"
Aug 19 02:19:06.110: INFO: Pod "downwardapi-volume-838972df-8d0a-4a72-a87f-7f6d7b06d935": Phase="Pending", Reason="", readiness=false. Elapsed: 13.960901ms
Aug 19 02:19:08.118: INFO: Pod "downwardapi-volume-838972df-8d0a-4a72-a87f-7f6d7b06d935": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021148111s
Aug 19 02:19:10.400: INFO: Pod "downwardapi-volume-838972df-8d0a-4a72-a87f-7f6d7b06d935": Phase="Pending", Reason="", readiness=false. Elapsed: 4.304054314s
Aug 19 02:19:12.509: INFO: Pod "downwardapi-volume-838972df-8d0a-4a72-a87f-7f6d7b06d935": Phase="Pending", Reason="", readiness=false. Elapsed: 6.412395507s
Aug 19 02:19:15.736: INFO: Pod "downwardapi-volume-838972df-8d0a-4a72-a87f-7f6d7b06d935": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.6392116s
STEP: Saw pod success
Aug 19 02:19:15.736: INFO: Pod "downwardapi-volume-838972df-8d0a-4a72-a87f-7f6d7b06d935" satisfied condition "success or failure"
Aug 19 02:19:15.757: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-838972df-8d0a-4a72-a87f-7f6d7b06d935 container client-container: 
STEP: delete the pod
Aug 19 02:19:16.771: INFO: Waiting for pod downwardapi-volume-838972df-8d0a-4a72-a87f-7f6d7b06d935 to disappear
Aug 19 02:19:16.798: INFO: Pod downwardapi-volume-838972df-8d0a-4a72-a87f-7f6d7b06d935 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:19:16.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7694" for this suite.
Aug 19 02:19:22.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:19:23.109: INFO: namespace downward-api-7694 deletion completed in 6.169034316s

• [SLOW TEST:17.186 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
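A sketch of setting DefaultMode on downward API files (names hypothetical): defaultMode: 0400 makes every projected file owner-read-only, which is the permission the test asserts on:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
kubectl logs downwardapi-demo    # prints 400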
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:19:23.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 02:19:23.233: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Aug 19 02:19:28.241: INFO: Pod name cleanup-pod: Found 1 pod out of 1
STEP: ensuring each pod is running
Aug 19 02:19:30.550: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Aug 19 02:19:38.962: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-1053,SelfLink:/apis/apps/v1/namespaces/deployment-1053/deployments/test-cleanup-deployment,UID:794a22f3-d67a-49be-9ce0-c1f60cf4a2c5,ResourceVersion:958918,Generation:1,CreationTimestamp:2020-08-19 02:19:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 1,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-19 02:19:31 +0000 UTC 2020-08-19 02:19:31 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-19 02:19:37 +0000 UTC 2020-08-19 02:19:30 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-cleanup-deployment-55bbcbc84c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Aug 19 02:19:38.970: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-1053,SelfLink:/apis/apps/v1/namespaces/deployment-1053/replicasets/test-cleanup-deployment-55bbcbc84c,UID:ac686024-4a8b-4af7-bd75-863e4cde271d,ResourceVersion:958907,Generation:1,CreationTimestamp:2020-08-19 02:19:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 794a22f3-d67a-49be-9ce0-c1f60cf4a2c5 0x86c1387 0x86c1388}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Aug 19 02:19:38.977: INFO: Pod "test-cleanup-deployment-55bbcbc84c-cpw5m" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-cpw5m,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-1053,SelfLink:/api/v1/namespaces/deployment-1053/pods/test-cleanup-deployment-55bbcbc84c-cpw5m,UID:61c99df0-0484-482e-8817-3db197e2b699,ResourceVersion:958906,Generation:0,CreationTimestamp:2020-08-19 02:19:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c ac686024-4a8b-4af7-bd75-863e4cde271d 0x86c19c7 0x86c19c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pbv47 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pbv47,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-pbv47 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x86c1a40} {node.kubernetes.io/unreachable Exists  NoExecute 0x86c1a60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 02:19:31 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 02:19:37 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 02:19:37 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 02:19:30 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.196,StartTime:2020-08-19 02:19:31 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-19 02:19:37 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://cbb6b406d5645bede7b636d89540a3efd7b91d86cd0bf8ba7d1a08d8877c62c2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:19:38.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1053" for this suite.
Aug 19 02:19:55.327: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:19:56.491: INFO: namespace deployment-1053 deletion completed in 17.504225137s

• [SLOW TEST:33.379 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
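The RevisionHistoryLimit:*0 visible in the dump above is the mechanism under test: with revisionHistoryLimit: 0, the deployment controller garbage-collects old ReplicaSets as soon as a rollout completes. A minimal sketch (names hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleanup-demo
spec:
  replicas: 1
  revisionHistoryLimit: 0        # keep no superseded ReplicaSets
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
kubectl rollout status deployment/cleanup-demo
kubectl get rs -l name=cleanup-pod    # after any later rollout, only the newest ReplicaSet survives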
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:19:56.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-bb4d1626-a7fc-49bb-9221-a0e5fb90b4ff
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:20:09.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1525" for this suite.
Aug 19 02:20:33.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:20:33.816: INFO: namespace configmap-1525 deletion completed in 24.429343369s

• [SLOW TEST:37.320 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
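A sketch of the binaryData path this test covers: kubectl create configmap --from-file stores non-UTF-8 file content under binaryData (base64-encoded) rather than data, and it round-trips intact through a volume mount. File and object names are hypothetical:

printf '\x01\x02\x03' > payload.bin
kubectl create configmap configmap-binary-demo \
  --from-literal=text=hello --from-file=binary=payload.bin
kubectl get configmap configmap-binary-demo -o yaml    # "text" under data, "binary" under binaryData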
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:20:33.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 19 02:20:46.544: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 19 02:20:46.623: INFO: Pod pod-with-prestop-exec-hook still exists
[log condensed: the identical two-second poll repeated from 02:20:48 through 02:21:12, logging "Pod pod-with-prestop-exec-hook still exists" each time]
Aug 19 02:21:14.624: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 19 02:21:14.640: INFO: Pod pod-with-prestop-exec-hook no longer exists
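The two-second disappearance poll above is the e2e framework's own loop. Outside the suite, a single kubectl wait expresses roughly the same check; a sketch, assuming the namespace shown in the teardown below and a v1.15-era kubectl with --for=delete support:

kubectl wait --for=delete pod/pod-with-prestop-exec-hook \
  --namespace=container-lifecycle-hook-2318 --timeout=60s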
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:21:14.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2318" for this suite.
Aug 19 02:21:36.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:21:36.918: INFO: namespace container-lifecycle-hook-2318 deletion completed in 22.261350702s

• [SLOW TEST:63.100 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
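For reference, the shape of the fixture this spec exercises: a pod whose container declares a preStop exec hook, so kubelet runs the hook command before stopping the container; that hook run is why the deletion above takes several poll cycles. A minimal hand-rolled sketch (pod name, image, and hook command are illustrative, not the suite's actual fixture):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-exec-demo               # illustrative name
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "sleep 5"]   # illustrative hook command
EOF

kubectl delete pod prestop-exec-demo   # deletion triggers the preStop hook before SIGTERM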
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:21:36.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Aug 19 02:21:38.509: INFO: Pod name wrapped-volume-race-04a6dc96-f2a8-4b74-9ce5-074cd8d5e37b: Found 0 pods out of 5
Aug 19 02:21:43.520: INFO: Pod name wrapped-volume-race-04a6dc96-f2a8-4b74-9ce5-074cd8d5e37b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-04a6dc96-f2a8-4b74-9ce5-074cd8d5e37b in namespace emptydir-wrapper-3147, will wait for the garbage collector to delete the pods
Aug 19 02:21:59.638: INFO: Deleting ReplicationController wrapped-volume-race-04a6dc96-f2a8-4b74-9ce5-074cd8d5e37b took: 6.797006ms
Aug 19 02:21:59.940: INFO: Terminating ReplicationController wrapped-volume-race-04a6dc96-f2a8-4b74-9ce5-074cd8d5e37b pods took: 301.472487ms
STEP: Creating RC which spawns configmap-volume pods
Aug 19 02:22:44.934: INFO: Pod name wrapped-volume-race-5a81b6a1-6e7b-4c44-8274-e8aaded681ea: Found 0 pods out of 5
Aug 19 02:22:49.952: INFO: Pod name wrapped-volume-race-5a81b6a1-6e7b-4c44-8274-e8aaded681ea: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-5a81b6a1-6e7b-4c44-8274-e8aaded681ea in namespace emptydir-wrapper-3147, will wait for the garbage collector to delete the pods
Aug 19 02:23:06.059: INFO: Deleting ReplicationController wrapped-volume-race-5a81b6a1-6e7b-4c44-8274-e8aaded681ea took: 8.874869ms
Aug 19 02:23:06.360: INFO: Terminating ReplicationController wrapped-volume-race-5a81b6a1-6e7b-4c44-8274-e8aaded681ea pods took: 300.744549ms
STEP: Creating RC which spawns configmap-volume pods
Aug 19 02:23:43.522: INFO: Pod name wrapped-volume-race-945dcc1f-ff9a-47cb-9ce1-c8975f837714: Found 0 pods out of 5
Aug 19 02:23:48.544: INFO: Pod name wrapped-volume-race-945dcc1f-ff9a-47cb-9ce1-c8975f837714: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-945dcc1f-ff9a-47cb-9ce1-c8975f837714 in namespace emptydir-wrapper-3147, will wait for the garbage collector to delete the pods
Aug 19 02:24:06.936: INFO: Deleting ReplicationController wrapped-volume-race-945dcc1f-ff9a-47cb-9ce1-c8975f837714 took: 7.334413ms
Aug 19 02:24:07.237: INFO: Terminating ReplicationController wrapped-volume-race-945dcc1f-ff9a-47cb-9ce1-c8975f837714 pods took: 300.883253ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:24:48.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3147" for this suite.
Aug 19 02:24:58.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:24:58.388: INFO: namespace emptydir-wrapper-3147 deletion completed in 10.126772376s

• [SLOW TEST:201.468 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
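The pattern under test here: a ReplicationController whose pods mount configMap volumes (kubelet wraps each one in an emptyDir, hence "wrapper volumes"), created and garbage-collected three times over to shake out mount races. A much-reduced sketch with one configmap instead of the fifty above (all names and the image are illustrative):

kubectl create configmap race-cm-0 --from-literal=data-0=x
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: wrapped-volume-race-demo
spec:
  replicas: 5
  selector:
    name: wrapped-volume-race-demo
  template:
    metadata:
      labels:
        name: wrapped-volume-race-demo
    spec:
      containers:
      - name: test-container
        image: docker.io/library/busybox:1.29   # illustrative image
        command: ["sleep", "10000"]
        volumeMounts:
        - name: racey-configmap-0
          mountPath: /etc/config-0
      volumes:
      - name: racey-configmap-0
        configMap:
          name: race-cm-0
EOF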
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:24:58.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-85742234-5a76-4d8e-b177-b498ecf0af51
STEP: Creating a pod to test consume secrets
Aug 19 02:24:58.620: INFO: Waiting up to 5m0s for pod "pod-secrets-d88211d8-e2b7-4cee-afe9-c770c01aa66a" in namespace "secrets-5035" to be "success or failure"
Aug 19 02:24:58.769: INFO: Pod "pod-secrets-d88211d8-e2b7-4cee-afe9-c770c01aa66a": Phase="Pending", Reason="", readiness=false. Elapsed: 149.348501ms
Aug 19 02:25:00.812: INFO: Pod "pod-secrets-d88211d8-e2b7-4cee-afe9-c770c01aa66a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.192037149s
Aug 19 02:25:02.819: INFO: Pod "pod-secrets-d88211d8-e2b7-4cee-afe9-c770c01aa66a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.199235941s
Aug 19 02:25:04.824: INFO: Pod "pod-secrets-d88211d8-e2b7-4cee-afe9-c770c01aa66a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.204521414s
STEP: Saw pod success
Aug 19 02:25:04.825: INFO: Pod "pod-secrets-d88211d8-e2b7-4cee-afe9-c770c01aa66a" satisfied condition "success or failure"
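The "success or failure" condition the framework polls for is simply the pod phase reaching Succeeded or Failed; the equivalent manual check against this run's pod and namespace:

kubectl get pod pod-secrets-d88211d8-e2b7-4cee-afe9-c770c01aa66a \
  --namespace=secrets-5035 -o jsonpath='{.status.phase}'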
Aug 19 02:25:04.838: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-d88211d8-e2b7-4cee-afe9-c770c01aa66a container secret-volume-test: 
STEP: delete the pod
Aug 19 02:25:04.921: INFO: Waiting for pod pod-secrets-d88211d8-e2b7-4cee-afe9-c770c01aa66a to disappear
Aug 19 02:25:04.927: INFO: Pod pod-secrets-d88211d8-e2b7-4cee-afe9-c770c01aa66a no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:25:04.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5035" for this suite.
Aug 19 02:25:12.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:25:13.147: INFO: namespace secrets-5035 deletion completed in 8.211352909s

• [SLOW TEST:14.757 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
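Concretely, "consumable in multiple volumes" means one Secret mounted at two paths inside a single pod, with the test container reading both copies. A minimal sketch (secret name, key, and image are illustrative; the suite's generated fixture differs):

kubectl create secret generic secret-multi-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-multi-demo
  - name: secret-volume-2
    secret:
      secretName: secret-multi-demo
EOF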
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:25:13.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 19 02:25:13.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-822'
Aug 19 02:25:19.445: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 19 02:25:19.445: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Aug 19 02:25:23.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-822'
Aug 19 02:25:24.674: INFO: stderr: ""
Aug 19 02:25:24.674: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:25:24.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-822" for this suite.
Aug 19 02:25:46.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:25:46.808: INFO: namespace kubectl-822 deletion completed in 22.125698344s

• [SLOW TEST:33.655 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
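The stderr captured above already flags the generator as deprecated. Side by side, the form this spec runs and the replacement its own warning points to:

# deprecated form exercised by the test (generator removed in later kubectl releases):
kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine \
  --generator=deployment/apps.v1
# replacement suggested by the warning:
kubectl create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine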
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:25:46.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-8260
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-8260
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-8260
Aug 19 02:25:46.955: INFO: Found 0 stateful pods, waiting for 1
Aug 19 02:25:56.961: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Aug 19 02:25:56.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8260 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 19 02:25:58.327: INFO: stderr: "I0819 02:25:58.193881     534 log.go:172] (0x29041c0) (0x2904230) Create stream\nI0819 02:25:58.198040     534 log.go:172] (0x29041c0) (0x2904230) Stream added, broadcasting: 1\nI0819 02:25:58.210954     534 log.go:172] (0x29041c0) Reply frame received for 1\nI0819 02:25:58.211797     534 log.go:172] (0x29041c0) (0x2ada000) Create stream\nI0819 02:25:58.211922     534 log.go:172] (0x29041c0) (0x2ada000) Stream added, broadcasting: 3\nI0819 02:25:58.214123     534 log.go:172] (0x29041c0) Reply frame received for 3\nI0819 02:25:58.214512     534 log.go:172] (0x29041c0) (0x28ca310) Create stream\nI0819 02:25:58.214601     534 log.go:172] (0x29041c0) (0x28ca310) Stream added, broadcasting: 5\nI0819 02:25:58.216168     534 log.go:172] (0x29041c0) Reply frame received for 5\nI0819 02:25:58.286669     534 log.go:172] (0x29041c0) Data frame received for 5\nI0819 02:25:58.287063     534 log.go:172] (0x28ca310) (5) Data frame handling\nI0819 02:25:58.287969     534 log.go:172] (0x28ca310) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0819 02:25:58.308579     534 log.go:172] (0x29041c0) Data frame received for 3\nI0819 02:25:58.308676     534 log.go:172] (0x2ada000) (3) Data frame handling\nI0819 02:25:58.308889     534 log.go:172] (0x2ada000) (3) Data frame sent\nI0819 02:25:58.308977     534 log.go:172] (0x29041c0) Data frame received for 3\nI0819 02:25:58.309052     534 log.go:172] (0x2ada000) (3) Data frame handling\nI0819 02:25:58.309290     534 log.go:172] (0x29041c0) Data frame received for 5\nI0819 02:25:58.309488     534 log.go:172] (0x28ca310) (5) Data frame handling\nI0819 02:25:58.310209     534 log.go:172] (0x29041c0) Data frame received for 1\nI0819 02:25:58.310346     534 log.go:172] (0x2904230) (1) Data frame handling\nI0819 02:25:58.310438     534 log.go:172] (0x2904230) (1) Data frame sent\nI0819 02:25:58.311034     534 log.go:172] (0x29041c0) (0x2904230) Stream removed, broadcasting: 1\nI0819 02:25:58.313750     534 log.go:172] (0x29041c0) Go away received\nI0819 02:25:58.316582     534 log.go:172] (0x29041c0) (0x2904230) Stream removed, broadcasting: 1\nI0819 02:25:58.316992     534 log.go:172] (0x29041c0) (0x2ada000) Stream removed, broadcasting: 3\nI0819 02:25:58.317264     534 log.go:172] (0x29041c0) (0x28ca310) Stream removed, broadcasting: 5\n"
Aug 19 02:25:58.327: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 19 02:25:58.327: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 19 02:25:58.333: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 19 02:26:08.442: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 19 02:26:08.442: INFO: Waiting for statefulset status.replicas to be updated to 0
Aug 19 02:26:08.483: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999961158s
Aug 19 02:26:09.490: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.969186848s
Aug 19 02:26:10.495: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.962881295s
Aug 19 02:26:11.526: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.957347192s
Aug 19 02:26:12.534: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.926623571s
Aug 19 02:26:13.539: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.918694446s
Aug 19 02:26:14.545: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.913459001s
Aug 19 02:26:15.945: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.907635674s
Aug 19 02:26:16.952: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.507250232s
Aug 19 02:26:17.959: INFO: Verifying statefulset ss doesn't scale past 1 for another 500.392841ms
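Each "doesn't scale past 1" line above is a poll of the StatefulSet's reported status; the same numbers can be read directly from this run's object:

kubectl get statefulset ss --namespace=statefulset-8260 \
  -o jsonpath='{.status.replicas}/{.status.readyReplicas}'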
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8260
Aug 19 02:26:19.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8260 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 02:26:20.359: INFO: stderr: "I0819 02:26:20.262467     554 log.go:172] (0x2992540) (0x2992620) Create stream\nI0819 02:26:20.266349     554 log.go:172] (0x2992540) (0x2992620) Stream added, broadcasting: 1\nI0819 02:26:20.281874     554 log.go:172] (0x2992540) Reply frame received for 1\nI0819 02:26:20.282348     554 log.go:172] (0x2992540) (0x24b08c0) Create stream\nI0819 02:26:20.282421     554 log.go:172] (0x2992540) (0x24b08c0) Stream added, broadcasting: 3\nI0819 02:26:20.283704     554 log.go:172] (0x2992540) Reply frame received for 3\nI0819 02:26:20.284067     554 log.go:172] (0x2992540) (0x2992930) Create stream\nI0819 02:26:20.284151     554 log.go:172] (0x2992540) (0x2992930) Stream added, broadcasting: 5\nI0819 02:26:20.285343     554 log.go:172] (0x2992540) Reply frame received for 5\nI0819 02:26:20.338532     554 log.go:172] (0x2992540) Data frame received for 5\nI0819 02:26:20.338847     554 log.go:172] (0x2992930) (5) Data frame handling\nI0819 02:26:20.339668     554 log.go:172] (0x2992930) (5) Data frame sent\nI0819 02:26:20.339963     554 log.go:172] (0x2992540) Data frame received for 3\nI0819 02:26:20.340060     554 log.go:172] (0x24b08c0) (3) Data frame handling\nI0819 02:26:20.340166     554 log.go:172] (0x24b08c0) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0819 02:26:20.340825     554 log.go:172] (0x2992540) Data frame received for 3\nI0819 02:26:20.340991     554 log.go:172] (0x24b08c0) (3) Data frame handling\nI0819 02:26:20.341384     554 log.go:172] (0x2992540) Data frame received for 5\nI0819 02:26:20.341525     554 log.go:172] (0x2992930) (5) Data frame handling\nI0819 02:26:20.343965     554 log.go:172] (0x2992540) Data frame received for 1\nI0819 02:26:20.344041     554 log.go:172] (0x2992620) (1) Data frame handling\nI0819 02:26:20.344119     554 log.go:172] (0x2992620) (1) Data frame sent\nI0819 02:26:20.344670     554 log.go:172] (0x2992540) (0x2992620) Stream removed, broadcasting: 1\nI0819 02:26:20.347399     554 log.go:172] (0x2992540) Go away received\nI0819 02:26:20.349111     554 log.go:172] (0x2992540) (0x2992620) Stream removed, broadcasting: 1\nI0819 02:26:20.349552     554 log.go:172] (0x2992540) (0x24b08c0) Stream removed, broadcasting: 3\nI0819 02:26:20.349813     554 log.go:172] (0x2992540) (0x2992930) Stream removed, broadcasting: 5\n"
Aug 19 02:26:20.359: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 19 02:26:20.359: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 19 02:26:20.437: INFO: Found 1 stateful pod, waiting for 3
Aug 19 02:26:30.544: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 19 02:26:30.544: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 19 02:26:30.544: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 19 02:26:41.030: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 19 02:26:41.030: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 19 02:26:41.030: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 19 02:26:50.444: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 19 02:26:50.444: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 19 02:26:50.444: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
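The mechanism behind both halt checks: moving index.html out of the web root makes the fixture's readiness probe fail, and with the default OrderedReady pod management a StatefulSet will not proceed past an unready pod. A hand-driven sketch of the same halt (this assumes the fixture's probe keys off index.html, which matches the Ready=true to Ready=false flips logged above):

# make ss-0 unready by breaking its readiness probe:
kubectl exec --namespace=statefulset-8260 ss-0 -- \
  /bin/sh -c 'mv /usr/share/nginx/html/index.html /tmp/'
# the scale request is accepted, but the controller holds at the unready pod:
kubectl scale statefulset ss --namespace=statefulset-8260 --replicas=3
kubectl get pods --namespace=statefulset-8260 -l baz=blah,foo=bar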
STEP: Scale down will halt with unhealthy stateful pod
Aug 19 02:26:50.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8260 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 19 02:26:51.767: INFO: stderr: "I0819 02:26:51.689883     576 log.go:172] (0x28349a0) (0x24a8690) Create stream\nI0819 02:26:51.692575     576 log.go:172] (0x28349a0) (0x24a8690) Stream added, broadcasting: 1\nI0819 02:26:51.701762     576 log.go:172] (0x28349a0) Reply frame received for 1\nI0819 02:26:51.702660     576 log.go:172] (0x28349a0) (0x26a2000) Create stream\nI0819 02:26:51.702778     576 log.go:172] (0x28349a0) (0x26a2000) Stream added, broadcasting: 3\nI0819 02:26:51.704470     576 log.go:172] (0x28349a0) Reply frame received for 3\nI0819 02:26:51.704656     576 log.go:172] (0x28349a0) (0x24a8850) Create stream\nI0819 02:26:51.704705     576 log.go:172] (0x28349a0) (0x24a8850) Stream added, broadcasting: 5\nI0819 02:26:51.705627     576 log.go:172] (0x28349a0) Reply frame received for 5\nI0819 02:26:51.753415     576 log.go:172] (0x28349a0) Data frame received for 3\nI0819 02:26:51.753587     576 log.go:172] (0x28349a0) Data frame received for 5\nI0819 02:26:51.753687     576 log.go:172] (0x24a8850) (5) Data frame handling\nI0819 02:26:51.753768     576 log.go:172] (0x26a2000) (3) Data frame handling\nI0819 02:26:51.753912     576 log.go:172] (0x28349a0) Data frame received for 1\nI0819 02:26:51.754008     576 log.go:172] (0x24a8690) (1) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0819 02:26:51.754678     576 log.go:172] (0x24a8690) (1) Data frame sent\nI0819 02:26:51.754838     576 log.go:172] (0x26a2000) (3) Data frame sent\nI0819 02:26:51.754921     576 log.go:172] (0x24a8850) (5) Data frame sent\nI0819 02:26:51.755048     576 log.go:172] (0x28349a0) Data frame received for 3\nI0819 02:26:51.755167     576 log.go:172] (0x26a2000) (3) Data frame handling\nI0819 02:26:51.755285     576 log.go:172] (0x28349a0) Data frame received for 5\nI0819 02:26:51.755398     576 log.go:172] (0x28349a0) (0x24a8690) Stream removed, broadcasting: 1\nI0819 02:26:51.757244     576 log.go:172] (0x24a8850) (5) Data frame handling\nI0819 02:26:51.757955     576 log.go:172] (0x28349a0) Go away received\nI0819 02:26:51.760173     576 log.go:172] (0x28349a0) (0x24a8690) Stream removed, broadcasting: 1\nI0819 02:26:51.760330     576 log.go:172] (0x28349a0) (0x26a2000) Stream removed, broadcasting: 3\nI0819 02:26:51.760454     576 log.go:172] (0x28349a0) (0x24a8850) Stream removed, broadcasting: 5\n"
Aug 19 02:26:51.768: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 19 02:26:51.768: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 19 02:26:51.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8260 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 19 02:26:53.160: INFO: stderr: "I0819 02:26:53.009148     597 log.go:172] (0x24164d0) (0x2416540) Create stream\nI0819 02:26:53.012702     597 log.go:172] (0x24164d0) (0x2416540) Stream added, broadcasting: 1\nI0819 02:26:53.028512     597 log.go:172] (0x24164d0) Reply frame received for 1\nI0819 02:26:53.029078     597 log.go:172] (0x24164d0) (0x2952150) Create stream\nI0819 02:26:53.029147     597 log.go:172] (0x24164d0) (0x2952150) Stream added, broadcasting: 3\nI0819 02:26:53.030401     597 log.go:172] (0x24164d0) Reply frame received for 3\nI0819 02:26:53.030637     597 log.go:172] (0x24164d0) (0x27c8000) Create stream\nI0819 02:26:53.030700     597 log.go:172] (0x24164d0) (0x27c8000) Stream added, broadcasting: 5\nI0819 02:26:53.031659     597 log.go:172] (0x24164d0) Reply frame received for 5\nI0819 02:26:53.085799     597 log.go:172] (0x24164d0) Data frame received for 5\nI0819 02:26:53.086101     597 log.go:172] (0x27c8000) (5) Data frame handling\nI0819 02:26:53.086830     597 log.go:172] (0x27c8000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0819 02:26:53.145066     597 log.go:172] (0x24164d0) Data frame received for 3\nI0819 02:26:53.145163     597 log.go:172] (0x2952150) (3) Data frame handling\nI0819 02:26:53.145230     597 log.go:172] (0x2952150) (3) Data frame sent\nI0819 02:26:53.145276     597 log.go:172] (0x24164d0) Data frame received for 3\nI0819 02:26:53.145315     597 log.go:172] (0x2952150) (3) Data frame handling\nI0819 02:26:53.145509     597 log.go:172] (0x24164d0) Data frame received for 5\nI0819 02:26:53.145687     597 log.go:172] (0x27c8000) (5) Data frame handling\nI0819 02:26:53.146176     597 log.go:172] (0x24164d0) Data frame received for 1\nI0819 02:26:53.146239     597 log.go:172] (0x2416540) (1) Data frame handling\nI0819 02:26:53.146297     597 log.go:172] (0x2416540) (1) Data frame sent\nI0819 02:26:53.146870     597 log.go:172] (0x24164d0) (0x2416540) Stream removed, broadcasting: 1\nI0819 02:26:53.148476     597 log.go:172] (0x24164d0) Go away received\nI0819 02:26:53.150758     597 log.go:172] (0x24164d0) (0x2416540) Stream removed, broadcasting: 1\nI0819 02:26:53.151228     597 log.go:172] (0x24164d0) (0x2952150) Stream removed, broadcasting: 3\nI0819 02:26:53.151493     597 log.go:172] (0x24164d0) (0x27c8000) Stream removed, broadcasting: 5\n"
Aug 19 02:26:53.160: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 19 02:26:53.160: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 19 02:26:53.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8260 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 19 02:26:54.984: INFO: stderr: "I0819 02:26:54.875658     619 log.go:172] (0x29f5c70) (0x29f5ce0) Create stream\nI0819 02:26:54.877686     619 log.go:172] (0x29f5c70) (0x29f5ce0) Stream added, broadcasting: 1\nI0819 02:26:54.886593     619 log.go:172] (0x29f5c70) Reply frame received for 1\nI0819 02:26:54.887753     619 log.go:172] (0x29f5c70) (0x2944000) Create stream\nI0819 02:26:54.887883     619 log.go:172] (0x29f5c70) (0x2944000) Stream added, broadcasting: 3\nI0819 02:26:54.891536     619 log.go:172] (0x29f5c70) Reply frame received for 3\nI0819 02:26:54.891897     619 log.go:172] (0x29f5c70) (0x25e0000) Create stream\nI0819 02:26:54.891981     619 log.go:172] (0x29f5c70) (0x25e0000) Stream added, broadcasting: 5\nI0819 02:26:54.893259     619 log.go:172] (0x29f5c70) Reply frame received for 5\nI0819 02:26:54.940403     619 log.go:172] (0x29f5c70) Data frame received for 5\nI0819 02:26:54.940661     619 log.go:172] (0x25e0000) (5) Data frame handling\nI0819 02:26:54.941295     619 log.go:172] (0x25e0000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0819 02:26:54.975536     619 log.go:172] (0x29f5c70) Data frame received for 5\nI0819 02:26:54.975630     619 log.go:172] (0x25e0000) (5) Data frame handling\nI0819 02:26:54.975718     619 log.go:172] (0x29f5c70) Data frame received for 3\nI0819 02:26:54.975787     619 log.go:172] (0x2944000) (3) Data frame handling\nI0819 02:26:54.975858     619 log.go:172] (0x2944000) (3) Data frame sent\nI0819 02:26:54.975904     619 log.go:172] (0x29f5c70) Data frame received for 3\nI0819 02:26:54.975943     619 log.go:172] (0x2944000) (3) Data frame handling\nI0819 02:26:54.976301     619 log.go:172] (0x29f5c70) Data frame received for 1\nI0819 02:26:54.976346     619 log.go:172] (0x29f5ce0) (1) Data frame handling\nI0819 02:26:54.976397     619 log.go:172] (0x29f5ce0) (1) Data frame sent\nI0819 02:26:54.976763     619 log.go:172] (0x29f5c70) (0x29f5ce0) Stream removed, broadcasting: 1\nI0819 02:26:54.979120     619 log.go:172] (0x29f5c70) (0x29f5ce0) Stream removed, broadcasting: 1\nI0819 02:26:54.979235     619 log.go:172] (0x29f5c70) (0x2944000) Stream removed, broadcasting: 3\nI0819 02:26:54.979362     619 log.go:172] (0x29f5c70) (0x25e0000) Stream removed, broadcasting: 5\n"
Aug 19 02:26:54.985: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 19 02:26:54.985: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 19 02:26:54.985: INFO: Waiting for statefulset status.replicas to be updated to 0
Aug 19 02:26:54.989: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Aug 19 02:27:05.005: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 19 02:27:05.005: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 19 02:27:05.005: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 19 02:27:05.059: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999988655s
Aug 19 02:27:06.068: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.988496866s
Aug 19 02:27:07.077: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.979229648s
Aug 19 02:27:08.086: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.970850619s
Aug 19 02:27:09.203: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.962064362s
Aug 19 02:27:10.210: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.844762132s
Aug 19 02:27:11.218: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.837924985s
Aug 19 02:27:12.227: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.83000526s
Aug 19 02:27:13.241: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.821059438s
Aug 19 02:27:14.251: INFO: Verifying statefulset ss doesn't scale past 3 for another 806.322907ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-8260
Aug 19 02:27:15.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8260 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 02:27:16.588: INFO: stderr: "I0819 02:27:16.512137     639 log.go:172] (0x2ac7030) (0x2ac70a0) Create stream\nI0819 02:27:16.514741     639 log.go:172] (0x2ac7030) (0x2ac70a0) Stream added, broadcasting: 1\nI0819 02:27:16.532871     639 log.go:172] (0x2ac7030) Reply frame received for 1\nI0819 02:27:16.533321     639 log.go:172] (0x2ac7030) (0x24ac930) Create stream\nI0819 02:27:16.533386     639 log.go:172] (0x2ac7030) (0x24ac930) Stream added, broadcasting: 3\nI0819 02:27:16.534437     639 log.go:172] (0x2ac7030) Reply frame received for 3\nI0819 02:27:16.534631     639 log.go:172] (0x2ac7030) (0x281eb60) Create stream\nI0819 02:27:16.534693     639 log.go:172] (0x2ac7030) (0x281eb60) Stream added, broadcasting: 5\nI0819 02:27:16.535512     639 log.go:172] (0x2ac7030) Reply frame received for 5\nI0819 02:27:16.569983     639 log.go:172] (0x2ac7030) Data frame received for 5\nI0819 02:27:16.570335     639 log.go:172] (0x281eb60) (5) Data frame handling\nI0819 02:27:16.571009     639 log.go:172] (0x281eb60) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0819 02:27:16.571659     639 log.go:172] (0x2ac7030) Data frame received for 5\nI0819 02:27:16.571756     639 log.go:172] (0x281eb60) (5) Data frame handling\nI0819 02:27:16.571927     639 log.go:172] (0x2ac7030) Data frame received for 1\nI0819 02:27:16.572025     639 log.go:172] (0x2ac70a0) (1) Data frame handling\nI0819 02:27:16.572144     639 log.go:172] (0x2ac70a0) (1) Data frame sent\nI0819 02:27:16.572237     639 log.go:172] (0x2ac7030) Data frame received for 3\nI0819 02:27:16.572316     639 log.go:172] (0x24ac930) (3) Data frame handling\nI0819 02:27:16.572409     639 log.go:172] (0x24ac930) (3) Data frame sent\nI0819 02:27:16.572489     639 log.go:172] (0x2ac7030) Data frame received for 3\nI0819 02:27:16.572559     639 log.go:172] (0x24ac930) (3) Data frame handling\nI0819 02:27:16.574454     639 log.go:172] (0x2ac7030) (0x2ac70a0) Stream removed, broadcasting: 1\nI0819 02:27:16.577689     639 log.go:172] (0x2ac7030) Go away received\nI0819 02:27:16.580276     639 log.go:172] (0x2ac7030) (0x2ac70a0) Stream removed, broadcasting: 1\nI0819 02:27:16.580539     639 log.go:172] (0x2ac7030) (0x24ac930) Stream removed, broadcasting: 3\nI0819 02:27:16.580877     639 log.go:172] (0x2ac7030) (0x281eb60) Stream removed, broadcasting: 5\n"
Aug 19 02:27:16.589: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 19 02:27:16.589: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 19 02:27:16.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8260 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 02:27:17.926: INFO: stderr: "I0819 02:27:17.791730     662 log.go:172] (0x281f180) (0x281f1f0) Create stream\nI0819 02:27:17.796277     662 log.go:172] (0x281f180) (0x281f1f0) Stream added, broadcasting: 1\nI0819 02:27:17.824400     662 log.go:172] (0x281f180) Reply frame received for 1\nI0819 02:27:17.824935     662 log.go:172] (0x281f180) (0x281e000) Create stream\nI0819 02:27:17.825008     662 log.go:172] (0x281f180) (0x281e000) Stream added, broadcasting: 3\nI0819 02:27:17.826091     662 log.go:172] (0x281f180) Reply frame received for 3\nI0819 02:27:17.826300     662 log.go:172] (0x281f180) (0x2832000) Create stream\nI0819 02:27:17.826356     662 log.go:172] (0x281f180) (0x2832000) Stream added, broadcasting: 5\nI0819 02:27:17.827147     662 log.go:172] (0x281f180) Reply frame received for 5\nI0819 02:27:17.908649     662 log.go:172] (0x281f180) Data frame received for 5\nI0819 02:27:17.909068     662 log.go:172] (0x281f180) Data frame received for 3\nI0819 02:27:17.909250     662 log.go:172] (0x281f180) Data frame received for 1\nI0819 02:27:17.909486     662 log.go:172] (0x2832000) (5) Data frame handling\nI0819 02:27:17.909824     662 log.go:172] (0x281f1f0) (1) Data frame handling\nI0819 02:27:17.910104     662 log.go:172] (0x281e000) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0819 02:27:17.912684     662 log.go:172] (0x2832000) (5) Data frame sent\nI0819 02:27:17.912938     662 log.go:172] (0x281f1f0) (1) Data frame sent\nI0819 02:27:17.913306     662 log.go:172] (0x281e000) (3) Data frame sent\nI0819 02:27:17.913430     662 log.go:172] (0x281f180) Data frame received for 3\nI0819 02:27:17.913569     662 log.go:172] (0x281e000) (3) Data frame handling\nI0819 02:27:17.913783     662 log.go:172] (0x281f180) Data frame received for 5\nI0819 02:27:17.913906     662 log.go:172] (0x2832000) (5) Data frame handling\nI0819 02:27:17.914207     662 log.go:172] (0x281f180) (0x281f1f0) Stream removed, broadcasting: 1\nI0819 02:27:17.915102     662 log.go:172] (0x281f180) Go away received\nI0819 02:27:17.917383     662 log.go:172] (0x281f180) (0x281f1f0) Stream removed, broadcasting: 1\nI0819 02:27:17.917573     662 log.go:172] (0x281f180) (0x281e000) Stream removed, broadcasting: 3\nI0819 02:27:17.917916     662 log.go:172] (0x281f180) (0x2832000) Stream removed, broadcasting: 5\n"
Aug 19 02:27:17.927: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 19 02:27:17.927: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 19 02:27:17.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8260 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 02:27:19.232: INFO: rc: 1
Aug 19 02:27:19.234: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8260 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    I0819 02:27:19.150183     684 log.go:172] (0x28b57a0) (0x28b5b90) Create stream
I0819 02:27:19.153010     684 log.go:172] (0x28b57a0) (0x28b5b90) Stream added, broadcasting: 1
I0819 02:27:19.163265     684 log.go:172] (0x28b57a0) Reply frame received for 1
I0819 02:27:19.164208     684 log.go:172] (0x28b57a0) (0x2812000) Create stream
I0819 02:27:19.164324     684 log.go:172] (0x28b57a0) (0x2812000) Stream added, broadcasting: 3
I0819 02:27:19.166228     684 log.go:172] (0x28b57a0) Reply frame received for 3
I0819 02:27:19.166452     684 log.go:172] (0x28b57a0) (0x24aa8c0) Create stream
I0819 02:27:19.166526     684 log.go:172] (0x28b57a0) (0x24aa8c0) Stream added, broadcasting: 5
I0819 02:27:19.167527     684 log.go:172] (0x28b57a0) Reply frame received for 5
I0819 02:27:19.217634     684 log.go:172] (0x28b57a0) Data frame received for 1
I0819 02:27:19.218001     684 log.go:172] (0x28b5b90) (1) Data frame handling
I0819 02:27:19.218633     684 log.go:172] (0x28b5b90) (1) Data frame sent
I0819 02:27:19.219715     684 log.go:172] (0x28b57a0) (0x28b5b90) Stream removed, broadcasting: 1
I0819 02:27:19.220584     684 log.go:172] (0x28b57a0) (0x2812000) Stream removed, broadcasting: 3
I0819 02:27:19.220920     684 log.go:172] (0x28b57a0) (0x24aa8c0) Stream removed, broadcasting: 5
I0819 02:27:19.222722     684 log.go:172] (0x28b57a0) Go away received
I0819 02:27:19.224893     684 log.go:172] (0x28b57a0) (0x28b5b90) Stream removed, broadcasting: 1
I0819 02:27:19.225069     684 log.go:172] (0x28b57a0) (0x2812000) Stream removed, broadcasting: 3
I0819 02:27:19.225120     684 log.go:172] (0x28b57a0) (0x24aa8c0) Stream removed, broadcasting: 5
error: Internal error occurred: error executing command in container: failed to exec in container: failed to create exec "1e281025ed020cacf9e042ec90a64cfd5eb554733ccbf5b5d79a96bd89531947": task 47947762bedc8b27349576c2c465e618cb3e7b8b3d9cebbd0d149deb66d8940f not found: not found
 []  0x74b21b0 exit status 1   true [0x8710060 0x8710080 0x87100a0] [0x8710060 0x8710080 0x87100a0] [0x8710078 0x8710098] [0x6bbb70 0x6bbb70] 0x85384c0 }:
Command stdout:

stderr:
I0819 02:27:19.150183     684 log.go:172] (0x28b57a0) (0x28b5b90) Create stream
I0819 02:27:19.153010     684 log.go:172] (0x28b57a0) (0x28b5b90) Stream added, broadcasting: 1
I0819 02:27:19.163265     684 log.go:172] (0x28b57a0) Reply frame received for 1
I0819 02:27:19.164208     684 log.go:172] (0x28b57a0) (0x2812000) Create stream
I0819 02:27:19.164324     684 log.go:172] (0x28b57a0) (0x2812000) Stream added, broadcasting: 3
I0819 02:27:19.166228     684 log.go:172] (0x28b57a0) Reply frame received for 3
I0819 02:27:19.166452     684 log.go:172] (0x28b57a0) (0x24aa8c0) Create stream
I0819 02:27:19.166526     684 log.go:172] (0x28b57a0) (0x24aa8c0) Stream added, broadcasting: 5
I0819 02:27:19.167527     684 log.go:172] (0x28b57a0) Reply frame received for 5
I0819 02:27:19.217634     684 log.go:172] (0x28b57a0) Data frame received for 1
I0819 02:27:19.218001     684 log.go:172] (0x28b5b90) (1) Data frame handling
I0819 02:27:19.218633     684 log.go:172] (0x28b5b90) (1) Data frame sent
I0819 02:27:19.219715     684 log.go:172] (0x28b57a0) (0x28b5b90) Stream removed, broadcasting: 1
I0819 02:27:19.220584     684 log.go:172] (0x28b57a0) (0x2812000) Stream removed, broadcasting: 3
I0819 02:27:19.220920     684 log.go:172] (0x28b57a0) (0x24aa8c0) Stream removed, broadcasting: 5
I0819 02:27:19.222722     684 log.go:172] (0x28b57a0) Go away received
I0819 02:27:19.224893     684 log.go:172] (0x28b57a0) (0x28b5b90) Stream removed, broadcasting: 1
I0819 02:27:19.225069     684 log.go:172] (0x28b57a0) (0x2812000) Stream removed, broadcasting: 3
I0819 02:27:19.225120     684 log.go:172] (0x28b57a0) (0x24aa8c0) Stream removed, broadcasting: 5
error: Internal error occurred: error executing command in container: failed to exec in container: failed to create exec "1e281025ed020cacf9e042ec90a64cfd5eb554733ccbf5b5d79a96bd89531947": task 47947762bedc8b27349576c2c465e618cb3e7b8b3d9cebbd0d149deb66d8940f not found: not found

error:
exit status 1
Aug 19 02:27:29.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8260 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 02:27:30.322: INFO: rc: 1
Aug 19 02:27:30.322: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8260 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x74b2270 exit status 1   true [0x87100d0 0x87100f0 0x8710128] [0x87100d0 0x87100f0 0x8710128] [0x87100e8 0x8710108] [0x6bbb70 0x6bbb70] 0x8538ac0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
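From this point the retries cannot succeed: the scale-down to 0 removes pods highest-ordinal first, so ss-2 is already gone and every further exec draws NotFound from the API server. A pre-check that tolerates the missing pod, as a sketch:

kubectl get pod ss-2 --namespace=statefulset-8260 --ignore-not-found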
[log condensed: the identical exec against pod ss-2 was retried every ~11 seconds, 20 more times between 02:27:40 and 02:31:12; every attempt returned rc: 1 with stderr: Error from server (NotFound): pods "ss-2" not found]
Aug 19 02:31:23.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8260 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 02:31:24.840: INFO: rc: 1
Aug 19 02:31:24.841: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8260 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x897a090 exit status 1   true [0x6c46618 0x6c46af0 0x6c46cd0] [0x6c46618 0x6c46af0 0x6c46cd0] [0x6c46948 0x6c46cb8] [0x6bbb70 0x6bbb70] 0x7920600 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 19 02:31:34.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8260 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 02:31:36.004: INFO: rc: 1
Aug 19 02:31:36.004: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8260 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x897a1b0 exit status 1   true [0x7ee6028 0x7ee6048 0x7ee6068] [0x7ee6028 0x7ee6048 0x7ee6068] [0x7ee6040 0x7ee6060] [0x6bbb70 0x6bbb70] 0x7920e40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 19 02:31:46.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8260 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 02:31:47.158: INFO: rc: 1
Aug 19 02:31:47.159: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8260 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x882e150 exit status 1   true [0x7f44090 0x7f440b0 0x7f440d0] [0x7f44090 0x7f440b0 0x7f440d0] [0x7f440a8 0x7f440c8] [0x6bbb70 0x6bbb70] 0x815e2c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 19 02:31:57.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8260 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 02:31:58.305: INFO: rc: 1
Aug 19 02:31:58.306: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8260 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x897a270 exit status 1   true [0x7ee60a0 0x7ee60c0 0x7ee60e0] [0x7ee60a0 0x7ee60c0 0x7ee60e0] [0x7ee60b8 0x7ee60d8] [0x6bbb70 0x6bbb70] 0x7921740 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 19 02:32:08.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8260 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 02:32:09.453: INFO: rc: 1
Aug 19 02:32:09.454: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8260 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x87ee0c0 exit status 1   true [0x7734028 0x7734048 0x7734078] [0x7734028 0x7734048 0x7734078] [0x7734040 0x7734070] [0x6bbb70 0x6bbb70] 0x8538480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 19 02:32:19.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8260 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 02:32:20.593: INFO: rc: 1
Aug 19 02:32:20.594: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
Aug 19 02:32:20.594: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug 19 02:32:20.617: INFO: Deleting all statefulset in ns statefulset-8260
Aug 19 02:32:20.623: INFO: Scaling statefulset ss to 0
Aug 19 02:32:20.637: INFO: Waiting for statefulset status.replicas updated to 0
Aug 19 02:32:20.641: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:32:20.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8260" for this suite.
Aug 19 02:32:28.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:32:28.810: INFO: namespace statefulset-8260 deletion completed in 8.147219164s

• [SLOW TEST:401.998 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
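The retry loop elided above comes from the framework's RunHostCmd helper, which shells into each stateful pod via kubectl exec; once ss-2 had been deleted by the scale-down, every retry returned NotFound until the helper gave up and the test moved on to verifying reverse-order termination. A minimal sketch of checking the same ordered scale-down by hand, reusing the names from this log (StatefulSet ss in namespace statefulset-8260):

# Scale the StatefulSet to 0 and watch pods terminate in reverse ordinal order
kubectl --namespace=statefulset-8260 scale statefulset ss --replicas=0
kubectl --namespace=statefulset-8260 get pods -w   # expect ss-2, then ss-1, then ss-0 to go away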
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:32:28.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:32:28.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7171" for this suite.
Aug 19 02:32:50.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:32:51.079: INFO: namespace pods-7171 deletion completed in 22.174275068s

• [SLOW TEST:22.264 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
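The QOS test above only checks that the API server populated status.qosClass when the pod was admitted. A quick way to observe the same field by hand (pod name illustrative, not from the log):

# A pod with no resource requests/limits is classified BestEffort
kubectl run qos-demo --image=nginx:1.14-alpine --restart=Never
kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # prints BestEffort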
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:32:51.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8766.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8766.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 19 02:32:59.238: INFO: DNS probes using dns-8766/dns-test-359a3802-9e1f-4ef6-b0c9-554165198c00 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:32:59.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8766" for this suite.
Aug 19 02:33:05.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:33:05.458: INFO: namespace dns-8766 deletion completed in 6.173729391s

• [SLOW TEST:14.377 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
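The wheezy/jessie probe pods above loop over dig queries and write OK marker files that the test then fetches from /results. A simpler one-shot spot check of the same cluster record, sketched under the assumption that busybox:1.28 is available (its nslookup is well behaved, unlike some later busybox tags):

kubectl run -it --rm dns-probe --image=busybox:1.28 --restart=Never -- \
  nslookup kubernetes.default.svc.cluster.local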
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:33:05.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-408a16b8-5d66-4e45-b288-aa30d770625b
STEP: Creating secret with name s-test-opt-upd-b1b8ad93-c9ff-42f6-8eea-47ed2ecd5a0d
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-408a16b8-5d66-4e45-b288-aa30d770625b
STEP: Updating secret s-test-opt-upd-b1b8ad93-c9ff-42f6-8eea-47ed2ecd5a0d
STEP: Creating secret with name s-test-opt-create-5a3abb38-897e-4a7a-9dc8-2e7315632e7d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:34:19.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9998" for this suite.
Aug 19 02:34:43.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:34:44.163: INFO: namespace projected-9998 deletion completed in 24.204008007s

• [SLOW TEST:98.702 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
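This test creates three secrets (one deleted, one updated, one created only after the pod starts), mounts them through a projected volume marked optional, and then waits for the kubelet to resync the volume contents, which is why it runs for over a minute. A minimal sketch of the mount side, with hypothetical names:

kubectl create secret generic s-opt --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  containers:
  - name: app
    image: busybox:1.28
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
  volumes:
  - name: creds
    projected:
      sources:
      - secret:
          name: s-opt
          optional: true    # pod still starts (and later updates) if the secret is absent
EOF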
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:34:44.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:34:51.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3017" for this suite.
Aug 19 02:35:15.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:35:15.803: INFO: namespace replication-controller-3017 deletion completed in 24.164463286s

• [SLOW TEST:31.638 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
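Adoption works because the controller's selector matches the orphan pod's labels, so the RC patches itself in as the pod's ownerReference instead of creating a replacement. A sketch of the same flow with illustrative names:

kubectl run pod-adoption --image=nginx:1.14-alpine --labels=name=pod-adoption --restart=Never
kubectl apply -f - <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
EOF
# The pre-existing pod should now list the RC as its owner
kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].name}'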
------------------------------
SSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:35:15.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Aug 19 02:35:16.578: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:35:25.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5157" for this suite.
Aug 19 02:35:31.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:35:31.226: INFO: namespace init-container-5157 deletion completed in 6.175951131s

• [SLOW TEST:15.421 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
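With restartPolicy: Never, init containers still run to completion one at a time before the app container starts; the test asserts the resulting state transitions in status.initContainerStatuses. A minimal reproduction, assuming busybox:1.28 as a stand-in for the e2e test images:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox:1.28
    command: ["sh", "-c", "echo init-1 ran"]
  - name: init-2
    image: busybox:1.28
    command: ["sh", "-c", "echo init-2 ran"]
  containers:
  - name: main
    image: busybox:1.28
    command: ["sh", "-c", "echo main ran"]
EOF
kubectl get pod init-demo -o jsonpath='{.status.initContainerStatuses[*].state}'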
------------------------------
SSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:35:31.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 02:35:31.310: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
alternatives.log
containers/
[the same two-entry log-directory listing was returned 20 times, once per proxied request; the remainder of this test's output and the header of the following [sig-cli] Kubectl client "Kubectl run job" test were lost where the log is truncated, leaving only the tail of the next test's kubeConfig line]
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 19 02:35:37.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2969'
Aug 19 02:35:41.671: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 19 02:35:41.671: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Aug 19 02:35:41.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-2969'
Aug 19 02:35:42.821: INFO: stderr: ""
Aug 19 02:35:42.821: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:35:42.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2969" for this suite.
Aug 19 02:36:04.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:36:05.031: INFO: namespace kubectl-2969 deletion completed in 22.203183737s

• [SLOW TEST:27.446 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
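The stderr captured above shows the generator-based kubectl run being deprecated in this release line. The non-deprecated equivalent of the command this test runs would be kubectl create job, with the name and image copied from the log (note this is an approximation: kubectl create job defaults the pod restartPolicy to Never rather than the OnFailure the test requested):

kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2969
kubectl delete jobs e2e-test-nginx-job --namespace=kubectl-2969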
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:36:05.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-dd4b042d-d8b5-410e-8255-4050854f7d36 in namespace container-probe-528
Aug 19 02:36:09.182: INFO: Started pod liveness-dd4b042d-d8b5-410e-8255-4050854f7d36 in namespace container-probe-528
STEP: checking the pod's current state and verifying that restartCount is present
Aug 19 02:36:09.187: INFO: Initial restart count of pod liveness-dd4b042d-d8b5-410e-8255-4050854f7d36 is 0
Aug 19 02:36:35.302: INFO: Restart count of pod container-probe-528/liveness-dd4b042d-d8b5-410e-8255-4050854f7d36 is now 1 (26.115362334s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:36:35.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-528" for this suite.
Aug 19 02:36:42.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:36:42.174: INFO: namespace container-probe-528 deletion completed in 6.575627384s

• [SLOW TEST:37.140 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
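The restart after ~26s happens because the e2e liveness image serves /healthz successfully for a short window and then starts returning errors, at which point the kubelet kills and restarts the container. A sketch using the standard upstream docs image (image name and timings assumed from that example, not taken from this log):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
EOF
# restartCount should climb once /healthz begins failing
kubectl get pod liveness-http -o jsonpath='{.status.containerStatuses[0].restartCount}'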
------------------------------
SSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:36:42.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Aug 19 02:36:48.296: INFO: Pod pod-hostip-f48d0d6d-77a7-4664-8754-305236032d58 has hostIP: 172.18.0.5
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:36:48.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4141" for this suite.
Aug 19 02:37:10.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:37:10.616: INFO: namespace pods-4141 deletion completed in 22.312087037s

• [SLOW TEST:28.441 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
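status.hostIP is filled in by the kubelet once the pod is bound to a node; the 172.18.0.5 above is the kind worker node's address. Checking the same field by hand (pod name illustrative):

kubectl run hostip-demo --image=nginx:1.14-alpine --restart=Never
kubectl get pod hostip-demo -o jsonpath='{.status.hostIP}'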
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:37:10.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-e75fa65d-08da-40a5-bf11-ebed1b9565db
STEP: Creating a pod to test consume configMaps
Aug 19 02:37:11.739: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c6c5418c-431a-48dd-a8f2-aa7be34e2a03" in namespace "projected-8732" to be "success or failure"
Aug 19 02:37:11.779: INFO: Pod "pod-projected-configmaps-c6c5418c-431a-48dd-a8f2-aa7be34e2a03": Phase="Pending", Reason="", readiness=false. Elapsed: 39.291002ms
Aug 19 02:37:13.915: INFO: Pod "pod-projected-configmaps-c6c5418c-431a-48dd-a8f2-aa7be34e2a03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.175337904s
Aug 19 02:37:15.969: INFO: Pod "pod-projected-configmaps-c6c5418c-431a-48dd-a8f2-aa7be34e2a03": Phase="Pending", Reason="", readiness=false. Elapsed: 4.229365491s
Aug 19 02:37:17.979: INFO: Pod "pod-projected-configmaps-c6c5418c-431a-48dd-a8f2-aa7be34e2a03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.239880342s
STEP: Saw pod success
Aug 19 02:37:17.980: INFO: Pod "pod-projected-configmaps-c6c5418c-431a-48dd-a8f2-aa7be34e2a03" satisfied condition "success or failure"
Aug 19 02:37:18.046: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-c6c5418c-431a-48dd-a8f2-aa7be34e2a03 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 19 02:37:18.155: INFO: Waiting for pod pod-projected-configmaps-c6c5418c-431a-48dd-a8f2-aa7be34e2a03 to disappear
Aug 19 02:37:18.161: INFO: Pod pod-projected-configmaps-c6c5418c-431a-48dd-a8f2-aa7be34e2a03 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:37:18.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8732" for this suite.
Aug 19 02:37:24.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:37:24.613: INFO: namespace projected-8732 deletion completed in 6.440329924s

• [SLOW TEST:13.996 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
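"Consumable in multiple volumes in the same pod" means the same ConfigMap is projected at two different mount paths and the test container reads both copies before exiting, which is the "success or failure" condition polled above. A sketch with hypothetical names:

kubectl create configmap cm-demo --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.28
    command: ["sh", "-c", "cat /etc/cm-a/data-1 /etc/cm-b/data-1"]
    volumeMounts:
    - name: cm-a
      mountPath: /etc/cm-a
    - name: cm-b
      mountPath: /etc/cm-b
  volumes:
  - name: cm-a
    projected:
      sources:
      - configMap:
          name: cm-demo
  - name: cm-b
    projected:
      sources:
      - configMap:
          name: cm-demo
EOF
kubectl logs projected-cm-demo   # expect the value printed twice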
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:37:24.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-4dcd8445-3a37-45ed-adf2-d9b87037b930
STEP: Creating a pod to test consume configMaps
Aug 19 02:37:24.785: INFO: Waiting up to 5m0s for pod "pod-configmaps-50dae1d8-d90c-4d22-9a47-7b636a045203" in namespace "configmap-1584" to be "success or failure"
Aug 19 02:37:24.790: INFO: Pod "pod-configmaps-50dae1d8-d90c-4d22-9a47-7b636a045203": Phase="Pending", Reason="", readiness=false. Elapsed: 4.387908ms
Aug 19 02:37:27.116: INFO: Pod "pod-configmaps-50dae1d8-d90c-4d22-9a47-7b636a045203": Phase="Pending", Reason="", readiness=false. Elapsed: 2.330895841s
Aug 19 02:37:29.123: INFO: Pod "pod-configmaps-50dae1d8-d90c-4d22-9a47-7b636a045203": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.33767235s
STEP: Saw pod success
Aug 19 02:37:29.123: INFO: Pod "pod-configmaps-50dae1d8-d90c-4d22-9a47-7b636a045203" satisfied condition "success or failure"
Aug 19 02:37:29.128: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-50dae1d8-d90c-4d22-9a47-7b636a045203 container configmap-volume-test: 
STEP: delete the pod
Aug 19 02:37:29.428: INFO: Waiting for pod pod-configmaps-50dae1d8-d90c-4d22-9a47-7b636a045203 to disappear
Aug 19 02:37:29.612: INFO: Pod pod-configmaps-50dae1d8-d90c-4d22-9a47-7b636a045203 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:37:29.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1584" for this suite.
Aug 19 02:37:35.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:37:35.795: INFO: namespace configmap-1584 deletion completed in 6.128555878s

• [SLOW TEST:11.181 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
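Same pattern as the projected test above, but with a plain configMap volume; the pod reaches Succeeded once it has printed the mounted key. A minimal sketch (names hypothetical):

kubectl create configmap cm-vol-demo --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: cm-vol-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.28
    command: ["sh", "-c", "cat /etc/cm/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-vol-demo
EOF
kubectl logs cm-vol-demo   # expect: value-1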
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:37:35.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 19 02:37:48.253: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 19 02:37:48.369: INFO: Pod pod-with-prestop-http-hook still exists
Aug 19 02:37:50.369: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 19 02:37:50.422: INFO: Pod pod-with-prestop-http-hook still exists
Aug 19 02:37:52.369: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 19 02:37:52.539: INFO: Pod pod-with-prestop-http-hook still exists
Aug 19 02:37:54.369: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 19 02:37:54.373: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:37:54.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4696" for this suite.
Aug 19 02:38:16.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:38:16.547: INFO: namespace container-lifecycle-hook-4696 deletion completed in 22.161707185s

• [SLOW TEST:40.752 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
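In the full test, the preStop hook's HTTP request is sent to the separate handler pod created in the BeforeEach, and "check prestop hook" verifies that pod recorded the request before the container exited. A reduced sketch of just the hook wiring (target path and port are illustrative, not the real test's handler):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  containers:
  - name: app
    image: nginx:1.14-alpine
    lifecycle:
      preStop:
        httpGet:
          path: /prestop
          port: 80
EOF
kubectl delete pod prestop-demo   # the kubelet fires the preStop GET before sending SIGTERM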
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:38:16.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 19 02:38:16.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6459'
Aug 19 02:38:18.570: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 19 02:38:18.570: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Aug 19 02:38:20.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-6459'
Aug 19 02:38:21.879: INFO: stderr: ""
Aug 19 02:38:21.880: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:38:21.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6459" for this suite.
Aug 19 02:38:28.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:38:28.186: INFO: namespace kubectl-6459 deletion completed in 6.298139753s

• [SLOW TEST:11.638 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
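As with the job test earlier, the deployment generator is deprecated here; the forward-compatible equivalent of what this test runs is kubectl create deployment, with the name and image taken from the log:

kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6459
kubectl delete deployment e2e-test-nginx-deployment --namespace=kubectl-6459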
------------------------------
S
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:38:28.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 02:38:28.976: INFO: Creating deployment "test-recreate-deployment"
Aug 19 02:38:28.981: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Aug 19 02:38:29.150: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Aug 19 02:38:31.162: INFO: Waiting deployment "test-recreate-deployment" to complete
Aug 19 02:38:31.166: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733401509, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733401509, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733401509, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733401508, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 19 02:38:33.171: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Aug 19 02:38:33.179: INFO: Updating deployment test-recreate-deployment
Aug 19 02:38:33.180: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Aug 19 02:38:35.196: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-3976,SelfLink:/apis/apps/v1/namespaces/deployment-3976/deployments/test-recreate-deployment,UID:21660177-48ff-4a5e-be68-19997794faa6,ResourceVersion:962844,Generation:2,CreationTimestamp:2020-08-19 02:38:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-08-19 02:38:34 +0000 UTC 2020-08-19 02:38:34 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-08-19 02:38:35 +0000 UTC 2020-08-19 02:38:28 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Aug 19 02:38:35.202: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-3976,SelfLink:/apis/apps/v1/namespaces/deployment-3976/replicasets/test-recreate-deployment-5c8c9cc69d,UID:b7644870-b30f-44f1-bf12-055bf093cb2b,ResourceVersion:962843,Generation:1,CreationTimestamp:2020-08-19 02:38:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 21660177-48ff-4a5e-be68-19997794faa6 0x9017d47 0x9017d48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 19 02:38:35.202: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Aug 19 02:38:35.203: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-3976,SelfLink:/apis/apps/v1/namespaces/deployment-3976/replicasets/test-recreate-deployment-6df85df6b9,UID:6f5d8015-3302-4f8a-bd86-70a1ae34b40b,ResourceVersion:962829,Generation:2,CreationTimestamp:2020-08-19 02:38:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 21660177-48ff-4a5e-be68-19997794faa6 0x9017e17 0x9017e18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 19 02:38:35.207: INFO: Pod "test-recreate-deployment-5c8c9cc69d-9cc5g" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-9cc5g,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-3976,SelfLink:/api/v1/namespaces/deployment-3976/pods/test-recreate-deployment-5c8c9cc69d-9cc5g,UID:42ef1133-0f73-4e1e-81e4-b2765f03d8dc,ResourceVersion:962842,Generation:0,CreationTimestamp:2020-08-19 02:38:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d b7644870-b30f-44f1-bf12-055bf093cb2b 0x8e20767 0x8e20768}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jxvw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jxvw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jxvw8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8e207e0} {node.kubernetes.io/unreachable Exists  NoExecute 0x8e20800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 02:38:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 02:38:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 02:38:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 02:38:34 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-19 02:38:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
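The two ReplicaSet dumps above show the Recreate strategy doing its work: the old ReplicaSet (revision 1, redis) has already been scaled to Replicas:*0 before the new one (revision 2, nginx) was created with Replicas:*1, and the new pod is still Pending (ContainerCreating). A minimal Deployment of the same shape, as a sketch only (the name is hypothetical, not the test's own manifest):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recreate-demo
spec:
  replicas: 1
  strategy:
    type: Recreate            # scale old pods to zero before creating new ones
  selector:
    matchLabels: {app: recreate-demo}
  template:
    metadata:
      labels: {app: recreate-demo}
    spec:
      containers:
      - name: web
        image: docker.io/library/nginx:1.14-alpine
EOF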
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:38:35.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3976" for this suite.
Aug 19 02:38:43.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:38:44.108: INFO: namespace deployment-3976 deletion completed in 8.895940358s

• [SLOW TEST:15.922 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:38:44.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 02:38:44.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Aug 19 02:38:46.169: INFO: stderr: ""
Aug 19 02:38:46.169: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T05:17:59Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/arm\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2020-07-19T21:08:45Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
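The assertion here is only that both the Client Version and Server Version stanzas appear in that stdout. For scripting the same check, the version subcommand also takes structured output (a sketch, assuming the -o/--short flags available on this kubectl line):

kubectl version -o json        # full version.Info for client and server
kubectl version --short       # just the two GitVersion strings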
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:38:46.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4047" for this suite.
Aug 19 02:38:52.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:38:52.752: INFO: namespace kubectl-4047 deletion completed in 6.571397186s

• [SLOW TEST:8.642 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:38:52.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-ldr6
STEP: Creating a pod to test atomic-volume-subpath
Aug 19 02:38:53.094: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-ldr6" in namespace "subpath-5268" to be "success or failure"
Aug 19 02:38:53.111: INFO: Pod "pod-subpath-test-secret-ldr6": Phase="Pending", Reason="", readiness=false. Elapsed: 16.506523ms
Aug 19 02:38:55.130: INFO: Pod "pod-subpath-test-secret-ldr6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035660545s
Aug 19 02:38:57.137: INFO: Pod "pod-subpath-test-secret-ldr6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042775885s
Aug 19 02:38:59.154: INFO: Pod "pod-subpath-test-secret-ldr6": Phase="Running", Reason="", readiness=true. Elapsed: 6.059696912s
Aug 19 02:39:01.159: INFO: Pod "pod-subpath-test-secret-ldr6": Phase="Running", Reason="", readiness=true. Elapsed: 8.064803255s
Aug 19 02:39:03.191: INFO: Pod "pod-subpath-test-secret-ldr6": Phase="Running", Reason="", readiness=true. Elapsed: 10.096249873s
Aug 19 02:39:05.256: INFO: Pod "pod-subpath-test-secret-ldr6": Phase="Running", Reason="", readiness=true. Elapsed: 12.161524764s
Aug 19 02:39:07.262: INFO: Pod "pod-subpath-test-secret-ldr6": Phase="Running", Reason="", readiness=true. Elapsed: 14.167667693s
Aug 19 02:39:09.346: INFO: Pod "pod-subpath-test-secret-ldr6": Phase="Running", Reason="", readiness=true. Elapsed: 16.251902991s
Aug 19 02:39:11.351: INFO: Pod "pod-subpath-test-secret-ldr6": Phase="Running", Reason="", readiness=true. Elapsed: 18.257148358s
Aug 19 02:39:13.357: INFO: Pod "pod-subpath-test-secret-ldr6": Phase="Running", Reason="", readiness=true. Elapsed: 20.262824823s
Aug 19 02:39:15.363: INFO: Pod "pod-subpath-test-secret-ldr6": Phase="Running", Reason="", readiness=true. Elapsed: 22.268281104s
Aug 19 02:39:17.369: INFO: Pod "pod-subpath-test-secret-ldr6": Phase="Running", Reason="", readiness=true. Elapsed: 24.274778842s
Aug 19 02:39:19.533: INFO: Pod "pod-subpath-test-secret-ldr6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.438468806s
STEP: Saw pod success
Aug 19 02:39:19.533: INFO: Pod "pod-subpath-test-secret-ldr6" satisfied condition "success or failure"
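Under the hood the pod mounts a Secret-backed volume through a subPath, and the long Running phase above is the container repeatedly reading the file while the test checks the content stays consistent across the kubelet's atomic-writer symlink swaps. A sketch of a pod with the same shape (all names hypothetical; the Secret must be created separately):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-subpath-demo
spec:
  restartPolicy: Never
  volumes:
  - name: sec
    secret:
      secretName: demo-secret
  containers:
  - name: probe
    image: busybox
    command: ["sh", "-c", "cat /probe/key"]
    volumeMounts:
    - name: sec
      mountPath: /probe/key
      subPath: key              # mount one key of the secret as a single file
EOF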
Aug 19 02:39:19.538: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-ldr6 container test-container-subpath-secret-ldr6: 
STEP: delete the pod
Aug 19 02:39:19.804: INFO: Waiting for pod pod-subpath-test-secret-ldr6 to disappear
Aug 19 02:39:19.828: INFO: Pod pod-subpath-test-secret-ldr6 no longer exists
STEP: Deleting pod pod-subpath-test-secret-ldr6
Aug 19 02:39:19.828: INFO: Deleting pod "pod-subpath-test-secret-ldr6" in namespace "subpath-5268"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:39:19.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5268" for this suite.
Aug 19 02:39:25.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:39:26.012: INFO: namespace subpath-5268 deletion completed in 6.170906877s

• [SLOW TEST:33.259 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:39:26.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
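No intermediate steps get logged for this one: the test creates a pod whose readiness probe always fails, then watches for about a minute to confirm the pod never turns Ready and its restart count stays at 0 (a failing readiness probe only gates traffic; only a liveness probe can trigger restarts). A minimal sketch of such a pod (names hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: never-ready
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails: NotReady forever, zero restarts
      initialDelaySeconds: 1
      periodSeconds: 5
EOF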
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:40:26.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1336" for this suite.
Aug 19 02:40:48.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:40:48.266: INFO: namespace container-probe-1336 deletion completed in 22.155192078s

• [SLOW TEST:82.252 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:40:48.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-89
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 19 02:40:48.570: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 19 02:41:10.860: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.64:8080/dial?request=hostName&protocol=udp&host=10.244.1.231&port=8081&tries=1'] Namespace:pod-network-test-89 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 02:41:10.861: INFO: >>> kubeConfig: /root/.kube/config
I0819 02:41:10.975668       7 log.go:172] (0x805c9a0) (0x805ca80) Create stream
I0819 02:41:10.975879       7 log.go:172] (0x805c9a0) (0x805ca80) Stream added, broadcasting: 1
I0819 02:41:10.980507       7 log.go:172] (0x805c9a0) Reply frame received for 1
I0819 02:41:10.980865       7 log.go:172] (0x805c9a0) (0x805cb60) Create stream
I0819 02:41:10.981021       7 log.go:172] (0x805c9a0) (0x805cb60) Stream added, broadcasting: 3
I0819 02:41:10.983298       7 log.go:172] (0x805c9a0) Reply frame received for 3
I0819 02:41:10.983479       7 log.go:172] (0x805c9a0) (0x82e58f0) Create stream
I0819 02:41:10.983571       7 log.go:172] (0x805c9a0) (0x82e58f0) Stream added, broadcasting: 5
I0819 02:41:10.985399       7 log.go:172] (0x805c9a0) Reply frame received for 5
I0819 02:41:11.066011       7 log.go:172] (0x805c9a0) Data frame received for 3
I0819 02:41:11.066296       7 log.go:172] (0x805cb60) (3) Data frame handling
I0819 02:41:11.066513       7 log.go:172] (0x805cb60) (3) Data frame sent
I0819 02:41:11.066670       7 log.go:172] (0x805c9a0) Data frame received for 3
I0819 02:41:11.066815       7 log.go:172] (0x805c9a0) Data frame received for 5
I0819 02:41:11.067111       7 log.go:172] (0x82e58f0) (5) Data frame handling
I0819 02:41:11.067348       7 log.go:172] (0x805cb60) (3) Data frame handling
I0819 02:41:11.068388       7 log.go:172] (0x805c9a0) Data frame received for 1
I0819 02:41:11.068460       7 log.go:172] (0x805ca80) (1) Data frame handling
I0819 02:41:11.068546       7 log.go:172] (0x805ca80) (1) Data frame sent
I0819 02:41:11.068637       7 log.go:172] (0x805c9a0) (0x805ca80) Stream removed, broadcasting: 1
I0819 02:41:11.069021       7 log.go:172] (0x805c9a0) Go away received
I0819 02:41:11.069264       7 log.go:172] (0x805c9a0) (0x805ca80) Stream removed, broadcasting: 1
I0819 02:41:11.069437       7 log.go:172] (0x805c9a0) (0x805cb60) Stream removed, broadcasting: 3
I0819 02:41:11.069599       7 log.go:172] (0x805c9a0) (0x82e58f0) Stream removed, broadcasting: 5
Aug 19 02:41:11.070: INFO: Waiting for endpoints: map[]
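Each probe above is an exec of curl inside the hostexec helper pod against the webserver test pod's /dial endpoint, which relays a UDP request to the target pod and reports back which hostnames answered; "Waiting for endpoints: map[]" means no expected replies are still outstanding. Rerunning the first probe by hand would look like this (addresses copied from this run):

kubectl exec -n pod-network-test-89 host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s 'http://10.244.2.64:8080/dial?request=hostName&protocol=udp&host=10.244.1.231&port=8081&tries=1'"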
Aug 19 02:41:11.076: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.64:8080/dial?request=hostName&protocol=udp&host=10.244.2.63&port=8081&tries=1'] Namespace:pod-network-test-89 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 02:41:11.077: INFO: >>> kubeConfig: /root/.kube/config
I0819 02:41:11.183522       7 log.go:172] (0x8a02310) (0x8a023f0) Create stream
I0819 02:41:11.183822       7 log.go:172] (0x8a02310) (0x8a023f0) Stream added, broadcasting: 1
I0819 02:41:11.189552       7 log.go:172] (0x8a02310) Reply frame received for 1
I0819 02:41:11.189756       7 log.go:172] (0x8a02310) (0x805cc40) Create stream
I0819 02:41:11.189855       7 log.go:172] (0x8a02310) (0x805cc40) Stream added, broadcasting: 3
I0819 02:41:11.191644       7 log.go:172] (0x8a02310) Reply frame received for 3
I0819 02:41:11.191759       7 log.go:172] (0x8a02310) (0x8a024d0) Create stream
I0819 02:41:11.191821       7 log.go:172] (0x8a02310) (0x8a024d0) Stream added, broadcasting: 5
I0819 02:41:11.193336       7 log.go:172] (0x8a02310) Reply frame received for 5
I0819 02:41:11.260140       7 log.go:172] (0x8a02310) Data frame received for 3
I0819 02:41:11.260299       7 log.go:172] (0x805cc40) (3) Data frame handling
I0819 02:41:11.260478       7 log.go:172] (0x805cc40) (3) Data frame sent
I0819 02:41:11.261056       7 log.go:172] (0x8a02310) Data frame received for 5
I0819 02:41:11.261159       7 log.go:172] (0x8a024d0) (5) Data frame handling
I0819 02:41:11.261297       7 log.go:172] (0x8a02310) Data frame received for 3
I0819 02:41:11.261446       7 log.go:172] (0x805cc40) (3) Data frame handling
I0819 02:41:11.262415       7 log.go:172] (0x8a02310) Data frame received for 1
I0819 02:41:11.262537       7 log.go:172] (0x8a023f0) (1) Data frame handling
I0819 02:41:11.262605       7 log.go:172] (0x8a023f0) (1) Data frame sent
I0819 02:41:11.262684       7 log.go:172] (0x8a02310) (0x8a023f0) Stream removed, broadcasting: 1
I0819 02:41:11.262776       7 log.go:172] (0x8a02310) Go away received
I0819 02:41:11.263039       7 log.go:172] (0x8a02310) (0x8a023f0) Stream removed, broadcasting: 1
I0819 02:41:11.263160       7 log.go:172] (0x8a02310) (0x805cc40) Stream removed, broadcasting: 3
I0819 02:41:11.263258       7 log.go:172] (0x8a02310) (0x8a024d0) Stream removed, broadcasting: 5
Aug 19 02:41:11.263: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:41:11.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-89" for this suite.
Aug 19 02:41:35.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:41:35.417: INFO: namespace pod-network-test-89 deletion completed in 24.144250472s

• [SLOW TEST:47.150 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:41:35.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 02:41:35.474: INFO: Creating ReplicaSet my-hostname-basic-b4f98365-437e-4e74-9d23-226149293222
Aug 19 02:41:35.491: INFO: Pod name my-hostname-basic-b4f98365-437e-4e74-9d23-226149293222: Found 0 pods out of 1
Aug 19 02:41:40.499: INFO: Pod name my-hostname-basic-b4f98365-437e-4e74-9d23-226149293222: Found 1 pods out of 1
Aug 19 02:41:40.500: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-b4f98365-437e-4e74-9d23-226149293222" is running
Aug 19 02:41:40.506: INFO: Pod "my-hostname-basic-b4f98365-437e-4e74-9d23-226149293222-5ctvp" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-19 02:41:35 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-19 02:41:38 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-19 02:41:38 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-19 02:41:35 +0000 UTC Reason: Message:}])
Aug 19 02:41:40.506: INFO: Trying to dial the pod
Aug 19 02:41:45.526: INFO: Controller my-hostname-basic-b4f98365-437e-4e74-9d23-226149293222: Got expected result from replica 1 [my-hostname-basic-b4f98365-437e-4e74-9d23-226149293222-5ctvp]: "my-hostname-basic-b4f98365-437e-4e74-9d23-226149293222-5ctvp", 1 of 1 required successes so far
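The dial check works because the serve-hostname image answers every request with the pod's own hostname, so matching the response against the pod name proves that exact replica is serving. A ReplicaSet of the same shape, sketched with a hypothetical name (9376 is the port this image conventionally listens on):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: hostname-demo
spec:
  replicas: 1
  selector:
    matchLabels: {app: hostname-demo}
  template:
    metadata:
      labels: {app: hostname-demo}
    spec:
      containers:
      - name: server
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
        ports:
        - containerPort: 9376
EOF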
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:41:45.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6516" for this suite.
Aug 19 02:41:51.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:41:51.688: INFO: namespace replicaset-6516 deletion completed in 6.151275743s

• [SLOW TEST:16.267 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:41:51.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
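The rc is deleted with the Orphan propagation policy, so the garbage collector strips the owner reference from the pods instead of deleting them, and the 30-second wait confirms the pods survive. The CLI equivalent on this release line, where cascade was still a boolean flag (rc name hypothetical; the log does not show it):

kubectl delete rc demo-rc --cascade=false -n gc-3097
# the pods remain; their ownerReferences to the rc should now be gone
kubectl get pods -n gc-3097 -o jsonpath='{.items[*].metadata.ownerReferences}'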
STEP: Gathering metrics
W0819 02:42:31.922361       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 19 02:42:31.922: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:42:31.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3097" for this suite.
Aug 19 02:42:39.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:42:40.243: INFO: namespace gc-3097 deletion completed in 8.31284878s

• [SLOW TEST:48.554 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:42:40.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-b6f54569-e8fd-46da-8f4b-7f060e0cd2a5
STEP: Creating a pod to test consume secrets
Aug 19 02:42:41.156: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-838c2221-80f3-4713-9793-8430f74f4957" in namespace "projected-9284" to be "success or failure"
Aug 19 02:42:41.164: INFO: Pod "pod-projected-secrets-838c2221-80f3-4713-9793-8430f74f4957": Phase="Pending", Reason="", readiness=false. Elapsed: 7.740519ms
Aug 19 02:42:43.182: INFO: Pod "pod-projected-secrets-838c2221-80f3-4713-9793-8430f74f4957": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025837796s
Aug 19 02:42:45.189: INFO: Pod "pod-projected-secrets-838c2221-80f3-4713-9793-8430f74f4957": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032872409s
Aug 19 02:42:47.196: INFO: Pod "pod-projected-secrets-838c2221-80f3-4713-9793-8430f74f4957": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040609394s
STEP: Saw pod success
Aug 19 02:42:47.197: INFO: Pod "pod-projected-secrets-838c2221-80f3-4713-9793-8430f74f4957" satisfied condition "success or failure"
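defaultMode travels through the API as a decimal integer, which is why pod dumps show the default as 420 (octal 0644). This test sets a non-default mode on a projected secret volume and has the container print the resulting file mode; a sketch with hypothetical names and 0400 as the example mode:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-mode-demo
spec:
  restartPolicy: Never
  volumes:
  - name: proj
    projected:
      defaultMode: 0400          # YAML octal; stored as decimal 256 in the API
      sources:
      - secret:
          name: demo-secret      # must already exist
  containers:
  - name: check
    image: busybox
    command: ["sh", "-c", "ls -l /etc/proj"]
    volumeMounts:
    - name: proj
      mountPath: /etc/proj
EOF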
Aug 19 02:42:47.201: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-838c2221-80f3-4713-9793-8430f74f4957 container projected-secret-volume-test: 
STEP: delete the pod
Aug 19 02:42:47.225: INFO: Waiting for pod pod-projected-secrets-838c2221-80f3-4713-9793-8430f74f4957 to disappear
Aug 19 02:42:47.229: INFO: Pod pod-projected-secrets-838c2221-80f3-4713-9793-8430f74f4957 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:42:47.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9284" for this suite.
Aug 19 02:42:53.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:42:53.437: INFO: namespace projected-9284 deletion completed in 6.197682703s

• [SLOW TEST:13.193 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:42:53.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 19 02:42:53.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-374'
Aug 19 02:42:54.719: INFO: stderr: ""
Aug 19 02:42:54.719: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Aug 19 02:42:59.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-374 -o json'
Aug 19 02:43:00.894: INFO: stderr: ""
Aug 19 02:43:00.894: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-08-19T02:42:54Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-374\",\n        \"resourceVersion\": \"963761\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-374/pods/e2e-test-nginx-pod\",\n        \"uid\": \"c5d35793-91f0-472f-ba1f-903a652182a4\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-2zfpm\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-worker\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-2zfpm\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-2zfpm\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-19T02:42:54Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-19T02:42:57Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-19T02:42:57Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-19T02:42:54Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://62ff7e5ed9b73fffac4c719685c88f9302c502bee9a00ac9c2ffce806596cffc\",\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-08-19T02:42:56Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.9\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.1.239\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-08-19T02:42:54Z\"\n    }\n}\n"
STEP: replace the image in the pod
Aug 19 02:43:00.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-374'
Aug 19 02:43:02.558: INFO: stderr: ""
Aug 19 02:43:02.558: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
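The replace step feeds the pod's JSON back through kubectl with the image field swapped from nginx:1.14-alpine to busybox:1.29, which works because a running pod's container image is one of the few mutable pod fields. Done by hand it might look like this (the sed edit is illustrative, not the test's own mechanism):

kubectl get pod e2e-test-nginx-pod -n kubectl-374 -o json \
  | sed 's|docker.io/library/nginx:1.14-alpine|docker.io/library/busybox:1.29|' \
  | kubectl replace -f - -n kubectl-374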
[AfterEach] [k8s.io] Kubectl replace
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Aug 19 02:43:02.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-374'
Aug 19 02:43:07.596: INFO: stderr: ""
Aug 19 02:43:07.596: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:43:07.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-374" for this suite.
Aug 19 02:43:13.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:43:14.031: INFO: namespace kubectl-374 deletion completed in 6.388950179s

• [SLOW TEST:20.592 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:43:14.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-1e524bec-32f2-44d8-9a3f-81a04418739a
STEP: Creating configMap with name cm-test-opt-upd-3b453382-5148-4b9a-860e-577474b6577f
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-1e524bec-32f2-44d8-9a3f-81a04418739a
STEP: Updating configmap cm-test-opt-upd-3b453382-5148-4b9a-860e-577474b6577f
STEP: Creating configMap with name cm-test-opt-create-56884b4a-6c1c-42be-bb82-e90c86086779
STEP: waiting to observe update in volume
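Both configMaps are projected with optional set, so deleting one and creating a third must both be reflected in the mounted directory without restarting the pod; the kubelet refreshes projected volumes via atomic symlink swaps, which is what the update wait observes. A sketch of the optional projection (names hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: optional-cm-demo
spec:
  volumes:
  - name: proj
    projected:
      sources:
      - configMap:
          name: cm-del
          optional: true         # pod starts even if this configMap is absent
      - configMap:
          name: cm-create
          optional: true
  containers:
  - name: watch
    image: busybox
    command: ["sh", "-c", "while true; do ls /etc/proj; sleep 5; done"]
    volumeMounts:
    - name: proj
      mountPath: /etc/proj
EOF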
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:44:40.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-628" for this suite.
Aug 19 02:45:04.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:45:04.998: INFO: namespace projected-628 deletion completed in 24.121318694s

• [SLOW TEST:110.965 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:45:05.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-f0470762-0052-42e8-b51d-5147dddcd370
STEP: Creating a pod to test consume secrets
Aug 19 02:45:05.074: INFO: Waiting up to 5m0s for pod "pod-secrets-53fce5a0-abc3-4462-85ef-b58e67ad66ba" in namespace "secrets-1917" to be "success or failure"
Aug 19 02:45:05.085: INFO: Pod "pod-secrets-53fce5a0-abc3-4462-85ef-b58e67ad66ba": Phase="Pending", Reason="", readiness=false. Elapsed: 11.006743ms
Aug 19 02:45:07.091: INFO: Pod "pod-secrets-53fce5a0-abc3-4462-85ef-b58e67ad66ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016649498s
Aug 19 02:45:09.095: INFO: Pod "pod-secrets-53fce5a0-abc3-4462-85ef-b58e67ad66ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020932881s
STEP: Saw pod success
Aug 19 02:45:09.095: INFO: Pod "pod-secrets-53fce5a0-abc3-4462-85ef-b58e67ad66ba" satisfied condition "success or failure"
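This is the plain-secret-volume variant of the projected defaultMode test above. One way to inspect the effective mode from inside such a pod (pod name and path hypothetical; stat flags as in busybox/coreutils):

kubectl exec pod-with-secret -- stat -c '%a %n' /etc/secret-volume/data-1
# expect e.g. "400 /etc/secret-volume/data-1" for defaultMode: 0400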
Aug 19 02:45:09.099: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-53fce5a0-abc3-4462-85ef-b58e67ad66ba container secret-volume-test: 
STEP: delete the pod
Aug 19 02:45:09.115: INFO: Waiting for pod pod-secrets-53fce5a0-abc3-4462-85ef-b58e67ad66ba to disappear
Aug 19 02:45:09.120: INFO: Pod pod-secrets-53fce5a0-abc3-4462-85ef-b58e67ad66ba no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:45:09.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1917" for this suite.
Aug 19 02:45:15.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:45:15.347: INFO: namespace secrets-1917 deletion completed in 6.216771052s

• [SLOW TEST:10.345 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:45:15.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-2522
I0819 02:45:15.504663       7 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-2522, replica count: 1
I0819 02:45:16.557366       7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0819 02:45:17.558576       7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0819 02:45:18.559649       7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0819 02:45:19.560251       7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0819 02:45:20.561564       7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
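From here on, each Created/Got pair times one sample: the test creates a Service selecting the single running pod, then measures how long until the endpoints controller publishes an address for it (the bracketed duration). A rough manual equivalent (service name hypothetical):

kubectl expose rc svc-latency-rc --name=probe-svc --port=80 -n svc-latency-2522
# poll until the Endpoints object carries an address; the elapsed time is the latency under test
until [ -n "$(kubectl get endpoints probe-svc -n svc-latency-2522 -o jsonpath='{.subsets[*].addresses[*].ip}')" ]; do
  sleep 0.1
done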
Aug 19 02:45:20.741: INFO: Created: latency-svc-ddjtj
Aug 19 02:45:20.816: INFO: Got endpoints: latency-svc-ddjtj [152.227886ms]
Aug 19 02:45:20.880: INFO: Created: latency-svc-c87s5
Aug 19 02:45:20.911: INFO: Got endpoints: latency-svc-c87s5 [94.2028ms]
Aug 19 02:45:20.911: INFO: Created: latency-svc-bgm2t
Aug 19 02:45:20.922: INFO: Got endpoints: latency-svc-bgm2t [105.392117ms]
Aug 19 02:45:20.941: INFO: Created: latency-svc-kqmd2
Aug 19 02:45:20.952: INFO: Got endpoints: latency-svc-kqmd2 [135.413161ms]
Aug 19 02:45:21.022: INFO: Created: latency-svc-cthzp
Aug 19 02:45:21.024: INFO: Got endpoints: latency-svc-cthzp [206.826532ms]
Aug 19 02:45:21.050: INFO: Created: latency-svc-jjpsd
Aug 19 02:45:21.061: INFO: Got endpoints: latency-svc-jjpsd [243.6951ms]
Aug 19 02:45:21.079: INFO: Created: latency-svc-9xrv6
Aug 19 02:45:21.091: INFO: Got endpoints: latency-svc-9xrv6 [274.022033ms]
Aug 19 02:45:21.115: INFO: Created: latency-svc-nxfr8
Aug 19 02:45:21.190: INFO: Got endpoints: latency-svc-nxfr8 [372.62023ms]
Aug 19 02:45:21.197: INFO: Created: latency-svc-lv2s4
Aug 19 02:45:21.217: INFO: Got endpoints: latency-svc-lv2s4 [400.184056ms]
Aug 19 02:45:21.241: INFO: Created: latency-svc-6xrw5
Aug 19 02:45:21.255: INFO: Got endpoints: latency-svc-6xrw5 [437.811993ms]
Aug 19 02:45:21.283: INFO: Created: latency-svc-5mwqg
Aug 19 02:45:21.351: INFO: Got endpoints: latency-svc-5mwqg [533.475896ms]
Aug 19 02:45:21.385: INFO: Created: latency-svc-kwss7
Aug 19 02:45:21.429: INFO: Got endpoints: latency-svc-kwss7 [611.593493ms]
Aug 19 02:45:21.566: INFO: Created: latency-svc-tqd5f
Aug 19 02:45:21.612: INFO: Got endpoints: latency-svc-tqd5f [794.875784ms]
Aug 19 02:45:21.740: INFO: Created: latency-svc-qw2xw
Aug 19 02:45:21.776: INFO: Created: latency-svc-lg4tb
Aug 19 02:45:21.781: INFO: Got endpoints: latency-svc-qw2xw [963.074906ms]
Aug 19 02:45:21.795: INFO: Got endpoints: latency-svc-lg4tb [977.182446ms]
Aug 19 02:45:21.818: INFO: Created: latency-svc-29mjg
Aug 19 02:45:21.831: INFO: Got endpoints: latency-svc-29mjg [1.010591403s]
Aug 19 02:45:21.885: INFO: Created: latency-svc-7c82w
Aug 19 02:45:21.907: INFO: Got endpoints: latency-svc-7c82w [995.333316ms]
Aug 19 02:45:21.968: INFO: Created: latency-svc-z8g6p
Aug 19 02:45:22.034: INFO: Got endpoints: latency-svc-z8g6p [1.110972186s]
Aug 19 02:45:22.051: INFO: Created: latency-svc-cth66
Aug 19 02:45:22.066: INFO: Got endpoints: latency-svc-cth66 [1.113243638s]
Aug 19 02:45:22.089: INFO: Created: latency-svc-ncs6z
Aug 19 02:45:22.102: INFO: Got endpoints: latency-svc-ncs6z [1.078061022s]
Aug 19 02:45:22.124: INFO: Created: latency-svc-4g7bv
Aug 19 02:45:22.195: INFO: Got endpoints: latency-svc-4g7bv [1.134468643s]
Aug 19 02:45:22.197: INFO: Created: latency-svc-79sg7
Aug 19 02:45:22.200: INFO: Got endpoints: latency-svc-79sg7 [1.108727568s]
Aug 19 02:45:22.256: INFO: Created: latency-svc-5lpdg
Aug 19 02:45:22.267: INFO: Got endpoints: latency-svc-5lpdg [1.076714429s]
Aug 19 02:45:22.286: INFO: Created: latency-svc-fgxvq
Aug 19 02:45:22.328: INFO: Got endpoints: latency-svc-fgxvq [1.110316742s]
Aug 19 02:45:22.351: INFO: Created: latency-svc-qlg4c
Aug 19 02:45:22.381: INFO: Got endpoints: latency-svc-qlg4c [1.126376074s]
Aug 19 02:45:22.420: INFO: Created: latency-svc-kj5zt
Aug 19 02:45:22.483: INFO: Got endpoints: latency-svc-kj5zt [1.131994717s]
Aug 19 02:45:22.503: INFO: Created: latency-svc-td7wb
Aug 19 02:45:22.512: INFO: Got endpoints: latency-svc-td7wb [1.082763363s]
Aug 19 02:45:22.532: INFO: Created: latency-svc-wllrx
Aug 19 02:45:22.542: INFO: Got endpoints: latency-svc-wllrx [929.819722ms]
Aug 19 02:45:22.567: INFO: Created: latency-svc-vxrb2
Aug 19 02:45:22.638: INFO: Got endpoints: latency-svc-vxrb2 [856.907718ms]
Aug 19 02:45:22.640: INFO: Created: latency-svc-wmk4n
Aug 19 02:45:22.651: INFO: Got endpoints: latency-svc-wmk4n [855.163101ms]
Aug 19 02:45:22.694: INFO: Created: latency-svc-dppt8
Aug 19 02:45:22.705: INFO: Got endpoints: latency-svc-dppt8 [873.487015ms]
Aug 19 02:45:22.776: INFO: Created: latency-svc-mks6d
Aug 19 02:45:22.802: INFO: Got endpoints: latency-svc-mks6d [894.604106ms]
Aug 19 02:45:22.802: INFO: Created: latency-svc-nrmvs
Aug 19 02:45:22.815: INFO: Got endpoints: latency-svc-nrmvs [780.669514ms]
Aug 19 02:45:22.837: INFO: Created: latency-svc-cn5tp
Aug 19 02:45:22.863: INFO: Got endpoints: latency-svc-cn5tp [797.314893ms]
Aug 19 02:45:22.933: INFO: Created: latency-svc-bf2tv
Aug 19 02:45:22.935: INFO: Got endpoints: latency-svc-bf2tv [832.69062ms]
Aug 19 02:45:22.988: INFO: Created: latency-svc-f7czf
Aug 19 02:45:23.000: INFO: Got endpoints: latency-svc-f7czf [804.888669ms]
Aug 19 02:45:23.079: INFO: Created: latency-svc-jvbsm
Aug 19 02:45:23.079: INFO: Got endpoints: latency-svc-jvbsm [879.431843ms]
Aug 19 02:45:23.107: INFO: Created: latency-svc-6hgv9
Aug 19 02:45:23.121: INFO: Got endpoints: latency-svc-6hgv9 [854.409802ms]
Aug 19 02:45:23.138: INFO: Created: latency-svc-lbdrj
Aug 19 02:45:23.153: INFO: Got endpoints: latency-svc-lbdrj [825.119602ms]
Aug 19 02:45:23.174: INFO: Created: latency-svc-gcl8d
Aug 19 02:45:23.206: INFO: Got endpoints: latency-svc-gcl8d [824.965179ms]
Aug 19 02:45:23.228: INFO: Created: latency-svc-jpsz6
Aug 19 02:45:23.244: INFO: Got endpoints: latency-svc-jpsz6 [760.178713ms]
Aug 19 02:45:23.287: INFO: Created: latency-svc-dh4hm
Aug 19 02:45:23.302: INFO: Got endpoints: latency-svc-dh4hm [790.309769ms]
Aug 19 02:45:23.354: INFO: Created: latency-svc-gh4sf
Aug 19 02:45:23.369: INFO: Got endpoints: latency-svc-gh4sf [826.486724ms]
Aug 19 02:45:23.390: INFO: Created: latency-svc-2x4m2
Aug 19 02:45:23.411: INFO: Got endpoints: latency-svc-2x4m2 [772.441264ms]
Aug 19 02:45:23.439: INFO: Created: latency-svc-frtvs
Aug 19 02:45:23.507: INFO: Got endpoints: latency-svc-frtvs [856.237422ms]
Aug 19 02:45:23.509: INFO: Created: latency-svc-fgwgd
Aug 19 02:45:23.538: INFO: Got endpoints: latency-svc-fgwgd [832.979597ms]
Aug 19 02:45:23.563: INFO: Created: latency-svc-ngwpc
Aug 19 02:45:23.600: INFO: Got endpoints: latency-svc-ngwpc [797.905831ms]
Aug 19 02:45:23.687: INFO: Created: latency-svc-lgfgq
Aug 19 02:45:23.696: INFO: Got endpoints: latency-svc-lgfgq [880.893926ms]
Aug 19 02:45:23.744: INFO: Created: latency-svc-cnxch
Aug 19 02:45:23.761: INFO: Got endpoints: latency-svc-cnxch [896.774662ms]
Aug 19 02:45:23.860: INFO: Created: latency-svc-wzdg2
Aug 19 02:45:23.863: INFO: Got endpoints: latency-svc-wzdg2 [927.529819ms]
Aug 19 02:45:23.893: INFO: Created: latency-svc-5cqp2
Aug 19 02:45:23.910: INFO: Got endpoints: latency-svc-5cqp2 [909.679138ms]
Aug 19 02:45:24.010: INFO: Created: latency-svc-7whnt
Aug 19 02:45:24.031: INFO: Got endpoints: latency-svc-7whnt [951.615951ms]
Aug 19 02:45:24.035: INFO: Created: latency-svc-hwdz4
Aug 19 02:45:24.042: INFO: Got endpoints: latency-svc-hwdz4 [920.631249ms]
Aug 19 02:45:24.062: INFO: Created: latency-svc-bgs4p
Aug 19 02:45:24.073: INFO: Got endpoints: latency-svc-bgs4p [920.054149ms]
Aug 19 02:45:24.090: INFO: Created: latency-svc-7nghn
Aug 19 02:45:24.103: INFO: Got endpoints: latency-svc-7nghn [896.291601ms]
Aug 19 02:45:24.160: INFO: Created: latency-svc-k8bhd
Aug 19 02:45:24.170: INFO: Got endpoints: latency-svc-k8bhd [925.772985ms]
Aug 19 02:45:24.193: INFO: Created: latency-svc-4r4br
Aug 19 02:45:24.206: INFO: Got endpoints: latency-svc-4r4br [903.751346ms]
Aug 19 02:45:24.224: INFO: Created: latency-svc-z7jxs
Aug 19 02:45:24.237: INFO: Got endpoints: latency-svc-z7jxs [867.880811ms]
Aug 19 02:45:24.298: INFO: Created: latency-svc-xvj97
Aug 19 02:45:24.300: INFO: Got endpoints: latency-svc-xvj97 [889.194659ms]
Aug 19 02:45:24.349: INFO: Created: latency-svc-nhf5d
Aug 19 02:45:24.369: INFO: Got endpoints: latency-svc-nhf5d [862.129566ms]
Aug 19 02:45:24.398: INFO: Created: latency-svc-79vfp
Aug 19 02:45:24.459: INFO: Got endpoints: latency-svc-79vfp [920.630241ms]
Aug 19 02:45:24.476: INFO: Created: latency-svc-tv2tg
Aug 19 02:45:24.506: INFO: Got endpoints: latency-svc-tv2tg [905.93106ms]
Aug 19 02:45:24.547: INFO: Created: latency-svc-rmfn6
Aug 19 02:45:24.603: INFO: Got endpoints: latency-svc-rmfn6 [907.441917ms]
Aug 19 02:45:24.625: INFO: Created: latency-svc-pfxx6
Aug 19 02:45:24.640: INFO: Got endpoints: latency-svc-pfxx6 [879.267782ms]
Aug 19 02:45:24.655: INFO: Created: latency-svc-l9wbk
Aug 19 02:45:24.673: INFO: Got endpoints: latency-svc-l9wbk [809.931202ms]
Aug 19 02:45:24.699: INFO: Created: latency-svc-6f5hw
Aug 19 02:45:24.758: INFO: Got endpoints: latency-svc-6f5hw [847.691077ms]
Aug 19 02:45:24.759: INFO: Created: latency-svc-k9hgs
Aug 19 02:45:24.766: INFO: Got endpoints: latency-svc-k9hgs [734.734793ms]
Aug 19 02:45:24.811: INFO: Created: latency-svc-d2t75
Aug 19 02:45:24.827: INFO: Got endpoints: latency-svc-d2t75 [784.81109ms]
Aug 19 02:45:24.855: INFO: Created: latency-svc-zxtvm
Aug 19 02:45:24.903: INFO: Got endpoints: latency-svc-zxtvm [829.713343ms]
Aug 19 02:45:24.914: INFO: Created: latency-svc-rkgjf
Aug 19 02:45:24.930: INFO: Got endpoints: latency-svc-rkgjf [827.097763ms]
Aug 19 02:45:24.979: INFO: Created: latency-svc-2jw2k
Aug 19 02:45:24.989: INFO: Got endpoints: latency-svc-2jw2k [819.479072ms]
Aug 19 02:45:25.058: INFO: Created: latency-svc-xr2c6
Aug 19 02:45:25.068: INFO: Got endpoints: latency-svc-xr2c6 [861.219364ms]
Aug 19 02:45:25.105: INFO: Created: latency-svc-kp2fp
Aug 19 02:45:25.116: INFO: Got endpoints: latency-svc-kp2fp [879.171365ms]
Aug 19 02:45:25.136: INFO: Created: latency-svc-x5gp2
Aug 19 02:45:25.153: INFO: Got endpoints: latency-svc-x5gp2 [852.920288ms]
Aug 19 02:45:25.214: INFO: Created: latency-svc-z5x8k
Aug 19 02:45:25.219: INFO: Got endpoints: latency-svc-z5x8k [849.102121ms]
Aug 19 02:45:25.249: INFO: Created: latency-svc-l89lk
Aug 19 02:45:25.261: INFO: Got endpoints: latency-svc-l89lk [802.283339ms]
Aug 19 02:45:25.292: INFO: Created: latency-svc-x6htl
Aug 19 02:45:25.303: INFO: Got endpoints: latency-svc-x6htl [796.971643ms]
Aug 19 02:45:25.357: INFO: Created: latency-svc-rkgmf
Aug 19 02:45:25.363: INFO: Got endpoints: latency-svc-rkgmf [759.66628ms]
Aug 19 02:45:25.400: INFO: Created: latency-svc-gkm8x
Aug 19 02:45:25.418: INFO: Got endpoints: latency-svc-gkm8x [778.020497ms]
Aug 19 02:45:25.450: INFO: Created: latency-svc-79x6g
Aug 19 02:45:25.512: INFO: Got endpoints: latency-svc-79x6g [839.051627ms]
Aug 19 02:45:25.514: INFO: Created: latency-svc-w85nv
Aug 19 02:45:25.526: INFO: Got endpoints: latency-svc-w85nv [767.879603ms]
Aug 19 02:45:25.598: INFO: Created: latency-svc-zcbw2
Aug 19 02:45:25.687: INFO: Got endpoints: latency-svc-zcbw2 [920.322822ms]
Aug 19 02:45:25.699: INFO: Created: latency-svc-f6bd4
Aug 19 02:45:25.723: INFO: Got endpoints: latency-svc-f6bd4 [895.482171ms]
Aug 19 02:45:25.774: INFO: Created: latency-svc-mw4rp
Aug 19 02:45:25.842: INFO: Got endpoints: latency-svc-mw4rp [938.468272ms]
Aug 19 02:45:25.867: INFO: Created: latency-svc-8qhz5
Aug 19 02:45:25.881: INFO: Got endpoints: latency-svc-8qhz5 [950.947729ms]
Aug 19 02:45:25.902: INFO: Created: latency-svc-jj7fb
Aug 19 02:45:25.930: INFO: Got endpoints: latency-svc-jj7fb [940.404089ms]
Aug 19 02:45:26.016: INFO: Created: latency-svc-ccwnr
Aug 19 02:45:26.019: INFO: Got endpoints: latency-svc-ccwnr [950.850737ms]
Aug 19 02:45:26.054: INFO: Created: latency-svc-bq98q
Aug 19 02:45:26.068: INFO: Got endpoints: latency-svc-bq98q [951.056259ms]
Aug 19 02:45:26.090: INFO: Created: latency-svc-d24jc
Aug 19 02:45:26.098: INFO: Got endpoints: latency-svc-d24jc [944.331435ms]
Aug 19 02:45:26.166: INFO: Created: latency-svc-8flsz
Aug 19 02:45:26.169: INFO: Got endpoints: latency-svc-8flsz [950.196631ms]
Aug 19 02:45:26.198: INFO: Created: latency-svc-w8kfh
Aug 19 02:45:26.213: INFO: Got endpoints: latency-svc-w8kfh [951.834534ms]
Aug 19 02:45:26.234: INFO: Created: latency-svc-cxwps
Aug 19 02:45:26.245: INFO: Got endpoints: latency-svc-cxwps [941.142076ms]
Aug 19 02:45:26.311: INFO: Created: latency-svc-f2vxn
Aug 19 02:45:26.313: INFO: Got endpoints: latency-svc-f2vxn [949.1765ms]
Aug 19 02:45:26.347: INFO: Created: latency-svc-svsg6
Aug 19 02:45:26.371: INFO: Got endpoints: latency-svc-svsg6 [952.350561ms]
Aug 19 02:45:26.389: INFO: Created: latency-svc-s4lmt
Aug 19 02:45:26.400: INFO: Got endpoints: latency-svc-s4lmt [887.659802ms]
Aug 19 02:45:26.453: INFO: Created: latency-svc-hbmnb
Aug 19 02:45:26.460: INFO: Got endpoints: latency-svc-hbmnb [933.806931ms]
Aug 19 02:45:26.480: INFO: Created: latency-svc-cw47g
Aug 19 02:45:26.501: INFO: Got endpoints: latency-svc-cw47g [813.608701ms]
Aug 19 02:45:26.518: INFO: Created: latency-svc-qslft
Aug 19 02:45:26.528: INFO: Got endpoints: latency-svc-qslft [804.903125ms]
Aug 19 02:45:26.551: INFO: Created: latency-svc-rqdnh
Aug 19 02:45:26.615: INFO: Got endpoints: latency-svc-rqdnh [772.601664ms]
Aug 19 02:45:26.617: INFO: Created: latency-svc-7xww2
Aug 19 02:45:26.623: INFO: Got endpoints: latency-svc-7xww2 [741.07413ms]
Aug 19 02:45:26.642: INFO: Created: latency-svc-4zs5c
Aug 19 02:45:26.660: INFO: Got endpoints: latency-svc-4zs5c [729.625772ms]
Aug 19 02:45:26.690: INFO: Created: latency-svc-kxs8n
Aug 19 02:45:26.714: INFO: Got endpoints: latency-svc-kxs8n [694.681213ms]
Aug 19 02:45:26.794: INFO: Created: latency-svc-blmmz
Aug 19 02:45:26.809: INFO: Got endpoints: latency-svc-blmmz [740.741829ms]
Aug 19 02:45:26.835: INFO: Created: latency-svc-9vnbq
Aug 19 02:45:26.847: INFO: Got endpoints: latency-svc-9vnbq [748.644175ms]
Aug 19 02:45:26.870: INFO: Created: latency-svc-zn2h2
Aug 19 02:45:26.883: INFO: Got endpoints: latency-svc-zn2h2 [713.475955ms]
Aug 19 02:45:26.932: INFO: Created: latency-svc-kdf64
Aug 19 02:45:26.934: INFO: Got endpoints: latency-svc-kdf64 [720.385454ms]
Aug 19 02:45:26.965: INFO: Created: latency-svc-8wvwk
Aug 19 02:45:26.980: INFO: Got endpoints: latency-svc-8wvwk [734.89896ms]
Aug 19 02:45:27.019: INFO: Created: latency-svc-7rtwp
Aug 19 02:45:27.057: INFO: Got endpoints: latency-svc-7rtwp [744.367406ms]
Aug 19 02:45:27.068: INFO: Created: latency-svc-zgqr2
Aug 19 02:45:27.082: INFO: Got endpoints: latency-svc-zgqr2 [710.68038ms]
Aug 19 02:45:27.104: INFO: Created: latency-svc-vsxfp
Aug 19 02:45:27.113: INFO: Got endpoints: latency-svc-vsxfp [712.74183ms]
Aug 19 02:45:27.140: INFO: Created: latency-svc-k2pbj
Aug 19 02:45:27.148: INFO: Got endpoints: latency-svc-k2pbj [687.283026ms]
Aug 19 02:45:27.207: INFO: Created: latency-svc-nx92j
Aug 19 02:45:27.215: INFO: Got endpoints: latency-svc-nx92j [714.047716ms]
Aug 19 02:45:27.235: INFO: Created: latency-svc-cq89f
Aug 19 02:45:27.251: INFO: Got endpoints: latency-svc-cq89f [722.559462ms]
Aug 19 02:45:27.272: INFO: Created: latency-svc-5rrrq
Aug 19 02:45:27.281: INFO: Got endpoints: latency-svc-5rrrq [665.407993ms]
Aug 19 02:45:27.357: INFO: Created: latency-svc-gb2cp
Aug 19 02:45:27.359: INFO: Got endpoints: latency-svc-gb2cp [736.367608ms]
Aug 19 02:45:27.391: INFO: Created: latency-svc-dkng5
Aug 19 02:45:27.402: INFO: Got endpoints: latency-svc-dkng5 [741.653315ms]
Aug 19 02:45:27.420: INFO: Created: latency-svc-8dmn9
Aug 19 02:45:27.432: INFO: Got endpoints: latency-svc-8dmn9 [717.823174ms]
Aug 19 02:45:27.501: INFO: Created: latency-svc-z6npq
Aug 19 02:45:27.510: INFO: Got endpoints: latency-svc-z6npq [701.128228ms]
Aug 19 02:45:27.530: INFO: Created: latency-svc-48g7b
Aug 19 02:45:27.541: INFO: Got endpoints: latency-svc-48g7b [693.610296ms]
Aug 19 02:45:27.582: INFO: Created: latency-svc-t2jhd
Aug 19 02:45:27.595: INFO: Got endpoints: latency-svc-t2jhd [712.013858ms]
Aug 19 02:45:27.663: INFO: Created: latency-svc-l85bk
Aug 19 02:45:27.665: INFO: Got endpoints: latency-svc-l85bk [730.49862ms]
Aug 19 02:45:27.724: INFO: Created: latency-svc-4qs9f
Aug 19 02:45:27.733: INFO: Got endpoints: latency-svc-4qs9f [753.25139ms]
Aug 19 02:45:27.842: INFO: Created: latency-svc-rqr6h
Aug 19 02:45:27.845: INFO: Got endpoints: latency-svc-rqr6h [787.697258ms]
Aug 19 02:45:27.878: INFO: Created: latency-svc-kzfwp
Aug 19 02:45:27.890: INFO: Got endpoints: latency-svc-kzfwp [807.833898ms]
Aug 19 02:45:27.908: INFO: Created: latency-svc-28rh5
Aug 19 02:45:27.920: INFO: Got endpoints: latency-svc-28rh5 [807.018192ms]
Aug 19 02:45:27.938: INFO: Created: latency-svc-pjzm9
Aug 19 02:45:28.021: INFO: Got endpoints: latency-svc-pjzm9 [872.452443ms]
Aug 19 02:45:28.023: INFO: Created: latency-svc-wrd58
Aug 19 02:45:28.029: INFO: Got endpoints: latency-svc-wrd58 [813.601081ms]
Aug 19 02:45:28.053: INFO: Created: latency-svc-rvlbp
Aug 19 02:45:28.077: INFO: Got endpoints: latency-svc-rvlbp [825.643351ms]
Aug 19 02:45:28.094: INFO: Created: latency-svc-kvwsh
Aug 19 02:45:28.108: INFO: Got endpoints: latency-svc-kvwsh [826.592986ms]
Aug 19 02:45:28.161: INFO: Created: latency-svc-p6srq
Aug 19 02:45:28.164: INFO: Got endpoints: latency-svc-p6srq [803.987713ms]
Aug 19 02:45:28.198: INFO: Created: latency-svc-wb6h9
Aug 19 02:45:28.204: INFO: Got endpoints: latency-svc-wb6h9 [802.135018ms]
Aug 19 02:45:28.225: INFO: Created: latency-svc-glj7x
Aug 19 02:45:28.241: INFO: Got endpoints: latency-svc-glj7x [809.416377ms]
Aug 19 02:45:28.303: INFO: Created: latency-svc-sjgdp
Aug 19 02:45:28.328: INFO: Created: latency-svc-l6qs7
Aug 19 02:45:28.329: INFO: Got endpoints: latency-svc-sjgdp [818.443141ms]
Aug 19 02:45:28.343: INFO: Got endpoints: latency-svc-l6qs7 [801.642264ms]
Aug 19 02:45:28.364: INFO: Created: latency-svc-qnvk5
Aug 19 02:45:28.373: INFO: Got endpoints: latency-svc-qnvk5 [777.371671ms]
Aug 19 02:45:28.394: INFO: Created: latency-svc-4mcbg
Aug 19 02:45:28.459: INFO: Got endpoints: latency-svc-4mcbg [793.664722ms]
Aug 19 02:45:28.462: INFO: Created: latency-svc-xfh6b
Aug 19 02:45:28.470: INFO: Got endpoints: latency-svc-xfh6b [736.463422ms]
Aug 19 02:45:28.489: INFO: Created: latency-svc-tcc6q
Aug 19 02:45:28.500: INFO: Got endpoints: latency-svc-tcc6q [654.58357ms]
Aug 19 02:45:28.519: INFO: Created: latency-svc-66fvt
Aug 19 02:45:28.530: INFO: Got endpoints: latency-svc-66fvt [640.009523ms]
Aug 19 02:45:28.550: INFO: Created: latency-svc-xmwfk
Aug 19 02:45:28.596: INFO: Got endpoints: latency-svc-xmwfk [675.256116ms]
Aug 19 02:45:28.603: INFO: Created: latency-svc-5x4k2
Aug 19 02:45:28.627: INFO: Got endpoints: latency-svc-5x4k2 [606.443683ms]
Aug 19 02:45:28.663: INFO: Created: latency-svc-nwwh4
Aug 19 02:45:28.686: INFO: Got endpoints: latency-svc-nwwh4 [657.134301ms]
Aug 19 02:45:28.762: INFO: Created: latency-svc-swlmh
Aug 19 02:45:28.807: INFO: Got endpoints: latency-svc-swlmh [729.908511ms]
Aug 19 02:45:28.832: INFO: Created: latency-svc-fll2b
Aug 19 02:45:28.844: INFO: Got endpoints: latency-svc-fll2b [735.532924ms]
Aug 19 02:45:28.896: INFO: Created: latency-svc-jjfwh
Aug 19 02:45:28.898: INFO: Got endpoints: latency-svc-jjfwh [734.579378ms]
Aug 19 02:45:28.932: INFO: Created: latency-svc-fdzp9
Aug 19 02:45:28.946: INFO: Got endpoints: latency-svc-fdzp9 [741.641249ms]
Aug 19 02:45:28.977: INFO: Created: latency-svc-jjh6x
Aug 19 02:45:28.989: INFO: Got endpoints: latency-svc-jjh6x [747.121204ms]
Aug 19 02:45:29.057: INFO: Created: latency-svc-rmmph
Aug 19 02:45:29.067: INFO: Got endpoints: latency-svc-rmmph [737.464099ms]
Aug 19 02:45:29.100: INFO: Created: latency-svc-5rwvc
Aug 19 02:45:29.121: INFO: Got endpoints: latency-svc-5rwvc [778.439457ms]
Aug 19 02:45:29.154: INFO: Created: latency-svc-8bpxm
Aug 19 02:45:29.195: INFO: Got endpoints: latency-svc-8bpxm [822.093392ms]
Aug 19 02:45:29.215: INFO: Created: latency-svc-l7zm5
Aug 19 02:45:29.230: INFO: Got endpoints: latency-svc-l7zm5 [770.618791ms]
Aug 19 02:45:29.258: INFO: Created: latency-svc-6zppf
Aug 19 02:45:29.272: INFO: Got endpoints: latency-svc-6zppf [801.915084ms]
Aug 19 02:45:29.294: INFO: Created: latency-svc-22pj5
Aug 19 02:45:29.357: INFO: Got endpoints: latency-svc-22pj5 [856.351405ms]
Aug 19 02:45:29.359: INFO: Created: latency-svc-qxmhr
Aug 19 02:45:29.368: INFO: Got endpoints: latency-svc-qxmhr [837.86025ms]
Aug 19 02:45:29.408: INFO: Created: latency-svc-vpl9v
Aug 19 02:45:29.423: INFO: Got endpoints: latency-svc-vpl9v [826.94904ms]
Aug 19 02:45:29.449: INFO: Created: latency-svc-pww4r
Aug 19 02:45:29.542: INFO: Got endpoints: latency-svc-pww4r [914.66545ms]
Aug 19 02:45:29.544: INFO: Created: latency-svc-mwxq7
Aug 19 02:45:29.549: INFO: Got endpoints: latency-svc-mwxq7 [862.727056ms]
Aug 19 02:45:29.599: INFO: Created: latency-svc-84l4p
Aug 19 02:45:29.616: INFO: Got endpoints: latency-svc-84l4p [808.765768ms]
Aug 19 02:45:29.642: INFO: Created: latency-svc-9cklr
Aug 19 02:45:29.686: INFO: Got endpoints: latency-svc-9cklr [841.894727ms]
Aug 19 02:45:29.700: INFO: Created: latency-svc-pd9lr
Aug 19 02:45:29.724: INFO: Got endpoints: latency-svc-pd9lr [825.950134ms]
Aug 19 02:45:29.754: INFO: Created: latency-svc-n7r5v
Aug 19 02:45:29.830: INFO: Got endpoints: latency-svc-n7r5v [883.72643ms]
Aug 19 02:45:29.833: INFO: Created: latency-svc-96jpn
Aug 19 02:45:29.886: INFO: Got endpoints: latency-svc-96jpn [897.283136ms]
Aug 19 02:45:30.029: INFO: Created: latency-svc-9sqkc
Aug 19 02:45:30.031: INFO: Got endpoints: latency-svc-9sqkc [963.987147ms]
Aug 19 02:45:30.056: INFO: Created: latency-svc-fkmzc
Aug 19 02:45:30.068: INFO: Got endpoints: latency-svc-fkmzc [946.402491ms]
Aug 19 02:45:30.084: INFO: Created: latency-svc-rh7mp
Aug 19 02:45:30.100: INFO: Got endpoints: latency-svc-rh7mp [904.279079ms]
Aug 19 02:45:30.114: INFO: Created: latency-svc-k7zxq
Aug 19 02:45:30.177: INFO: Got endpoints: latency-svc-k7zxq [947.538037ms]
Aug 19 02:45:30.180: INFO: Created: latency-svc-j5lx9
Aug 19 02:45:30.188: INFO: Got endpoints: latency-svc-j5lx9 [915.992794ms]
Aug 19 02:45:30.211: INFO: Created: latency-svc-8zbnl
Aug 19 02:45:30.231: INFO: Got endpoints: latency-svc-8zbnl [874.254608ms]
Aug 19 02:45:30.247: INFO: Created: latency-svc-6xdgt
Aug 19 02:45:30.261: INFO: Got endpoints: latency-svc-6xdgt [891.867743ms]
Aug 19 02:45:30.315: INFO: Created: latency-svc-jcpxp
Aug 19 02:45:30.318: INFO: Got endpoints: latency-svc-jcpxp [894.650743ms]
Aug 19 02:45:30.348: INFO: Created: latency-svc-4kn82
Aug 19 02:45:30.363: INFO: Got endpoints: latency-svc-4kn82 [820.340423ms]
Aug 19 02:45:30.385: INFO: Created: latency-svc-7gz4p
Aug 19 02:45:30.399: INFO: Got endpoints: latency-svc-7gz4p [850.26465ms]
Aug 19 02:45:30.464: INFO: Created: latency-svc-vpvsl
Aug 19 02:45:30.467: INFO: Got endpoints: latency-svc-vpvsl [850.459946ms]
Aug 19 02:45:30.522: INFO: Created: latency-svc-q6ktb
Aug 19 02:45:30.544: INFO: Got endpoints: latency-svc-q6ktb [857.887442ms]
Aug 19 02:45:30.632: INFO: Created: latency-svc-h9g5w
Aug 19 02:45:30.680: INFO: Got endpoints: latency-svc-h9g5w [954.872095ms]
Aug 19 02:45:30.794: INFO: Created: latency-svc-x2tjz
Aug 19 02:45:30.811: INFO: Got endpoints: latency-svc-x2tjz [981.581751ms]
Aug 19 02:45:30.877: INFO: Created: latency-svc-pw2b6
Aug 19 02:45:30.967: INFO: Got endpoints: latency-svc-pw2b6 [1.0806619s]
Aug 19 02:45:30.969: INFO: Created: latency-svc-gdnbt
Aug 19 02:45:30.989: INFO: Got endpoints: latency-svc-gdnbt [957.702058ms]
Aug 19 02:45:31.063: INFO: Created: latency-svc-hbfsx
Aug 19 02:45:31.135: INFO: Got endpoints: latency-svc-hbfsx [1.067265609s]
Aug 19 02:45:31.139: INFO: Created: latency-svc-qv7v4
Aug 19 02:45:31.151: INFO: Got endpoints: latency-svc-qv7v4 [1.051280828s]
Aug 19 02:45:31.184: INFO: Created: latency-svc-8x8np
Aug 19 02:45:31.193: INFO: Got endpoints: latency-svc-8x8np [1.015218147s]
Aug 19 02:45:31.226: INFO: Created: latency-svc-kx255
Aug 19 02:45:31.309: INFO: Got endpoints: latency-svc-kx255 [1.119343495s]
Aug 19 02:45:31.312: INFO: Created: latency-svc-jp4rl
Aug 19 02:45:31.319: INFO: Got endpoints: latency-svc-jp4rl [1.087619699s]
Aug 19 02:45:31.341: INFO: Created: latency-svc-8x6d8
Aug 19 02:45:31.356: INFO: Got endpoints: latency-svc-8x6d8 [1.095113534s]
Aug 19 02:45:31.376: INFO: Created: latency-svc-z2kz2
Aug 19 02:45:31.392: INFO: Got endpoints: latency-svc-z2kz2 [1.074247103s]
Aug 19 02:45:31.447: INFO: Created: latency-svc-ltggt
Aug 19 02:45:31.450: INFO: Got endpoints: latency-svc-ltggt [1.086502564s]
Aug 19 02:45:31.525: INFO: Created: latency-svc-pqfwz
Aug 19 02:45:31.537: INFO: Got endpoints: latency-svc-pqfwz [1.137496283s]
Aug 19 02:45:31.614: INFO: Created: latency-svc-7m26m
Aug 19 02:45:31.617: INFO: Got endpoints: latency-svc-7m26m [1.149401748s]
Aug 19 02:45:31.659: INFO: Created: latency-svc-gwrq7
Aug 19 02:45:31.669: INFO: Got endpoints: latency-svc-gwrq7 [1.124707793s]
Aug 19 02:45:31.693: INFO: Created: latency-svc-8m8fm
Aug 19 02:45:31.706: INFO: Got endpoints: latency-svc-8m8fm [1.026181066s]
Aug 19 02:45:31.764: INFO: Created: latency-svc-6ffrn
Aug 19 02:45:31.766: INFO: Got endpoints: latency-svc-6ffrn [954.442953ms]
Aug 19 02:45:31.839: INFO: Created: latency-svc-ls2fw
Aug 19 02:45:31.856: INFO: Got endpoints: latency-svc-ls2fw [888.480456ms]
Aug 19 02:45:31.902: INFO: Created: latency-svc-ds24h
Aug 19 02:45:31.914: INFO: Got endpoints: latency-svc-ds24h [925.076388ms]
Aug 19 02:45:31.945: INFO: Created: latency-svc-g7d92
Aug 19 02:45:31.988: INFO: Got endpoints: latency-svc-g7d92 [852.181213ms]
Aug 19 02:45:32.058: INFO: Created: latency-svc-vnjbj
Aug 19 02:45:32.077: INFO: Got endpoints: latency-svc-vnjbj [925.630328ms]
Aug 19 02:45:32.108: INFO: Created: latency-svc-fjtqg
Aug 19 02:45:32.121: INFO: Got endpoints: latency-svc-fjtqg [928.120368ms]
Aug 19 02:45:32.143: INFO: Created: latency-svc-q95bb
Aug 19 02:45:32.207: INFO: Got endpoints: latency-svc-q95bb [898.01846ms]
Aug 19 02:45:32.209: INFO: Created: latency-svc-cf5x4
Aug 19 02:45:32.218: INFO: Got endpoints: latency-svc-cf5x4 [898.592298ms]
Aug 19 02:45:32.241: INFO: Created: latency-svc-6vskf
Aug 19 02:45:32.254: INFO: Got endpoints: latency-svc-6vskf [897.609704ms]
Aug 19 02:45:32.276: INFO: Created: latency-svc-f6lmq
Aug 19 02:45:32.284: INFO: Got endpoints: latency-svc-f6lmq [891.754114ms]
Aug 19 02:45:32.306: INFO: Created: latency-svc-v84qv
Aug 19 02:45:32.339: INFO: Got endpoints: latency-svc-v84qv [888.687626ms]
Aug 19 02:45:32.340: INFO: Latencies: [94.2028ms 105.392117ms 135.413161ms 206.826532ms 243.6951ms 274.022033ms 372.62023ms 400.184056ms 437.811993ms 533.475896ms 606.443683ms 611.593493ms 640.009523ms 654.58357ms 657.134301ms 665.407993ms 675.256116ms 687.283026ms 693.610296ms 694.681213ms 701.128228ms 710.68038ms 712.013858ms 712.74183ms 713.475955ms 714.047716ms 717.823174ms 720.385454ms 722.559462ms 729.625772ms 729.908511ms 730.49862ms 734.579378ms 734.734793ms 734.89896ms 735.532924ms 736.367608ms 736.463422ms 737.464099ms 740.741829ms 741.07413ms 741.641249ms 741.653315ms 744.367406ms 747.121204ms 748.644175ms 753.25139ms 759.66628ms 760.178713ms 767.879603ms 770.618791ms 772.441264ms 772.601664ms 777.371671ms 778.020497ms 778.439457ms 780.669514ms 784.81109ms 787.697258ms 790.309769ms 793.664722ms 794.875784ms 796.971643ms 797.314893ms 797.905831ms 801.642264ms 801.915084ms 802.135018ms 802.283339ms 803.987713ms 804.888669ms 804.903125ms 807.018192ms 807.833898ms 808.765768ms 809.416377ms 809.931202ms 813.601081ms 813.608701ms 818.443141ms 819.479072ms 820.340423ms 822.093392ms 824.965179ms 825.119602ms 825.643351ms 825.950134ms 826.486724ms 826.592986ms 826.94904ms 827.097763ms 829.713343ms 832.69062ms 832.979597ms 837.86025ms 839.051627ms 841.894727ms 847.691077ms 849.102121ms 850.26465ms 850.459946ms 852.181213ms 852.920288ms 854.409802ms 855.163101ms 856.237422ms 856.351405ms 856.907718ms 857.887442ms 861.219364ms 862.129566ms 862.727056ms 867.880811ms 872.452443ms 873.487015ms 874.254608ms 879.171365ms 879.267782ms 879.431843ms 880.893926ms 883.72643ms 887.659802ms 888.480456ms 888.687626ms 889.194659ms 891.754114ms 891.867743ms 894.604106ms 894.650743ms 895.482171ms 896.291601ms 896.774662ms 897.283136ms 897.609704ms 898.01846ms 898.592298ms 903.751346ms 904.279079ms 905.93106ms 907.441917ms 909.679138ms 914.66545ms 915.992794ms 920.054149ms 920.322822ms 920.630241ms 920.631249ms 925.076388ms 925.630328ms 925.772985ms 927.529819ms 928.120368ms 929.819722ms 933.806931ms 938.468272ms 940.404089ms 941.142076ms 944.331435ms 946.402491ms 947.538037ms 949.1765ms 950.196631ms 950.850737ms 950.947729ms 951.056259ms 951.615951ms 951.834534ms 952.350561ms 954.442953ms 954.872095ms 957.702058ms 963.074906ms 963.987147ms 977.182446ms 981.581751ms 995.333316ms 1.010591403s 1.015218147s 1.026181066s 1.051280828s 1.067265609s 1.074247103s 1.076714429s 1.078061022s 1.0806619s 1.082763363s 1.086502564s 1.087619699s 1.095113534s 1.108727568s 1.110316742s 1.110972186s 1.113243638s 1.119343495s 1.124707793s 1.126376074s 1.131994717s 1.134468643s 1.137496283s 1.149401748s]
Aug 19 02:45:32.344: INFO: 50 %ile: 850.459946ms
Aug 19 02:45:32.344: INFO: 90 %ile: 1.067265609s
Aug 19 02:45:32.344: INFO: 99 %ile: 1.137496283s
Aug 19 02:45:32.344: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:45:32.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-2522" for this suite.
Aug 19 02:46:08.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:46:08.526: INFO: namespace svc-latency-2522 deletion completed in 36.171361585s

• [SLOW TEST:53.176 seconds]
[sig-network] Service endpoints latency
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
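
The 50/90/99 %ile values above are read off the ascending 200-sample Latencies line. A minimal Go sketch of nearest-rank percentile selection over such samples (an assumed method for illustration; the e2e framework's exact indexing may differ):

package main

import (
    "fmt"
    "math"
    "sort"
    "time"
)

// percentile picks the nearest-rank percentile from ascending-sorted samples:
// index = ceil(p/100 * n) - 1.
func percentile(sorted []time.Duration, p float64) time.Duration {
    idx := int(math.Ceil(p/100*float64(len(sorted)))) - 1
    if idx < 0 {
        idx = 0
    }
    return sorted[idx]
}

func main() {
    // A handful of samples copied from the Latencies line above; the real run has 200.
    raw := []string{"854.409802ms", "825.119602ms", "760.178713ms", "1.149401748s", "640.009523ms"}
    var samples []time.Duration
    for _, s := range raw {
        d, err := time.ParseDuration(s)
        if err != nil {
            panic(err)
        }
        samples = append(samples, d)
    }
    sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
    for _, p := range []float64{50, 90, 99} {
        fmt.Printf("%v %%ile: %v\n", p, percentile(samples, p))
    }
}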
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:46:08.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Aug 19 02:46:08.784: INFO: namespace kubectl-6377
Aug 19 02:46:08.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6377'
Aug 19 02:46:15.188: INFO: stderr: ""
Aug 19 02:46:15.188: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Aug 19 02:46:16.202: INFO: Selector matched 1 pod for map[app:redis]
Aug 19 02:46:16.202: INFO: Found 0 / 1
Aug 19 02:46:17.405: INFO: Selector matched 1 pod for map[app:redis]
Aug 19 02:46:17.405: INFO: Found 0 / 1
Aug 19 02:46:18.195: INFO: Selector matched 1 pod for map[app:redis]
Aug 19 02:46:18.195: INFO: Found 0 / 1
Aug 19 02:46:19.214: INFO: Selector matched 1 pod for map[app:redis]
Aug 19 02:46:19.214: INFO: Found 1 / 1
Aug 19 02:46:19.214: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Aug 19 02:46:19.219: INFO: Selector matched 1 pod for map[app:redis]
Aug 19 02:46:19.219: INFO: ForEach: Found 1 pod from the filter. Now looping through them.
Aug 19 02:46:19.219: INFO: wait on redis-master startup in kubectl-6377 
Aug 19 02:46:19.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-vtv2z redis-master --namespace=kubectl-6377'
Aug 19 02:46:20.363: INFO: stderr: ""
Aug 19 02:46:20.363: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 19 Aug 02:46:18.259 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 19 Aug 02:46:18.259 # Server started, Redis version 3.2.12\n1:M 19 Aug 02:46:18.259 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 19 Aug 02:46:18.259 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Aug 19 02:46:20.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-6377'
Aug 19 02:46:21.760: INFO: stderr: ""
Aug 19 02:46:21.760: INFO: stdout: "service/rm2 exposed\n"
Aug 19 02:46:21.886: INFO: Service rm2 in namespace kubectl-6377 found.
STEP: exposing service
Aug 19 02:46:23.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-6377'
Aug 19 02:46:25.195: INFO: stderr: ""
Aug 19 02:46:25.195: INFO: stdout: "service/rm3 exposed\n"
Aug 19 02:46:25.597: INFO: Service rm3 in namespace kubectl-6377 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:46:27.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6377" for this suite.
Aug 19 02:46:51.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:46:51.853: INFO: namespace kubectl-6377 deletion completed in 24.146467245s

• [SLOW TEST:43.327 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
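
For reference, 'kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379' generates a Service selecting the RC's pods, and rm3 then re-exposes that Service on port 2345. A hedged Go sketch of roughly the rm2 object (the app:redis selector is taken from the 'Selector matched ... map[app:redis]' lines above; anything beyond the command-line values is an assumption):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    svc := corev1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "rm2", Namespace: "kubectl-6377"},
        Spec: corev1.ServiceSpec{
            Selector: map[string]string{"app": "redis"},
            Ports: []corev1.ServicePort{{
                Port:       1234,                 // the Service's own port
                TargetPort: intstr.FromInt(6379), // the Redis container port
            }},
        },
    }
    out, err := json.MarshalIndent(svc, "", "  ")
    if err != nil {
        panic(err)
    }
    fmt.Println(string(out))
}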
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:46:51.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the busybox-main-container
Aug 19 02:46:58.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-329af63f-3880-41bd-b371-4bac4aedcdf6 -c busybox-main-container --namespace=emptydir-8372 -- cat /usr/share/volumeshare/shareddata.txt'
Aug 19 02:46:59.323: INFO: stderr: "I0819 02:46:59.232006    1593 log.go:172] (0x282e8c0) (0x282e930) Create stream\nI0819 02:46:59.234228    1593 log.go:172] (0x282e8c0) (0x282e930) Stream added, broadcasting: 1\nI0819 02:46:59.251730    1593 log.go:172] (0x282e8c0) Reply frame received for 1\nI0819 02:46:59.252208    1593 log.go:172] (0x282e8c0) (0x280ea80) Create stream\nI0819 02:46:59.252269    1593 log.go:172] (0x282e8c0) (0x280ea80) Stream added, broadcasting: 3\nI0819 02:46:59.253360    1593 log.go:172] (0x282e8c0) Reply frame received for 3\nI0819 02:46:59.253568    1593 log.go:172] (0x282e8c0) (0x28b8380) Create stream\nI0819 02:46:59.253623    1593 log.go:172] (0x282e8c0) (0x28b8380) Stream added, broadcasting: 5\nI0819 02:46:59.254474    1593 log.go:172] (0x282e8c0) Reply frame received for 5\nI0819 02:46:59.308082    1593 log.go:172] (0x282e8c0) Data frame received for 5\nI0819 02:46:59.308280    1593 log.go:172] (0x28b8380) (5) Data frame handling\nI0819 02:46:59.308429    1593 log.go:172] (0x282e8c0) Data frame received for 3\nI0819 02:46:59.308621    1593 log.go:172] (0x280ea80) (3) Data frame handling\nI0819 02:46:59.308808    1593 log.go:172] (0x282e8c0) Data frame received for 1\nI0819 02:46:59.308934    1593 log.go:172] (0x282e930) (1) Data frame handling\nI0819 02:46:59.309340    1593 log.go:172] (0x280ea80) (3) Data frame sent\nI0819 02:46:59.309622    1593 log.go:172] (0x282e930) (1) Data frame sent\nI0819 02:46:59.309874    1593 log.go:172] (0x282e8c0) Data frame received for 3\nI0819 02:46:59.309943    1593 log.go:172] (0x280ea80) (3) Data frame handling\nI0819 02:46:59.310758    1593 log.go:172] (0x282e8c0) (0x282e930) Stream removed, broadcasting: 1\nI0819 02:46:59.312320    1593 log.go:172] (0x282e8c0) Go away received\nI0819 02:46:59.314914    1593 log.go:172] (0x282e8c0) (0x282e930) Stream removed, broadcasting: 1\nI0819 02:46:59.315140    1593 log.go:172] (0x282e8c0) (0x280ea80) Stream removed, broadcasting: 3\nI0819 02:46:59.315324    1593 log.go:172] (0x282e8c0) (0x28b8380) Stream removed, broadcasting: 5\n"
Aug 19 02:46:59.324: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:46:59.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8372" for this suite.
Aug 19 02:47:05.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:47:05.553: INFO: namespace emptydir-8372 deletion completed in 6.162833275s

• [SLOW TEST:13.699 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
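
The pod under test mounts one emptyDir volume into two containers: the sub-container writes /usr/share/volumeshare/shareddata.txt and the main container reads it back via the exec above. A minimal Go sketch of that shape (the commands and the pod name are illustrative, not the exact spec the test generates):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    mounts := []corev1.VolumeMount{{Name: "shared-data", MountPath: "/usr/share/volumeshare"}}
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-sharedvolume"},
        Spec: corev1.PodSpec{
            Volumes: []corev1.Volume{{
                Name:         "shared-data",
                VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
            }},
            Containers: []corev1.Container{
                {
                    // The container the test later execs `cat` in.
                    Name:         "busybox-main-container",
                    Image:        "docker.io/library/busybox:1.29",
                    Command:      []string{"sleep", "3600"},
                    VolumeMounts: mounts,
                },
                {
                    // Writes the file that the main container reads back.
                    Name:  "busybox-sub-container",
                    Image: "docker.io/library/busybox:1.29",
                    Command: []string{"sh", "-c",
                        "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"},
                    VolumeMounts: mounts,
                },
            },
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}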
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:47:05.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Aug 19 02:47:05.695: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1743,SelfLink:/api/v1/namespaces/watch-1743/configmaps/e2e-watch-test-configmap-a,UID:4502c276-d7d4-4ae2-8eac-c886b4590d81,ResourceVersion:965998,Generation:0,CreationTimestamp:2020-08-19 02:47:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 19 02:47:05.697: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1743,SelfLink:/api/v1/namespaces/watch-1743/configmaps/e2e-watch-test-configmap-a,UID:4502c276-d7d4-4ae2-8eac-c886b4590d81,ResourceVersion:965998,Generation:0,CreationTimestamp:2020-08-19 02:47:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Aug 19 02:47:15.709: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1743,SelfLink:/api/v1/namespaces/watch-1743/configmaps/e2e-watch-test-configmap-a,UID:4502c276-d7d4-4ae2-8eac-c886b4590d81,ResourceVersion:966020,Generation:0,CreationTimestamp:2020-08-19 02:47:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug 19 02:47:15.710: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1743,SelfLink:/api/v1/namespaces/watch-1743/configmaps/e2e-watch-test-configmap-a,UID:4502c276-d7d4-4ae2-8eac-c886b4590d81,ResourceVersion:966020,Generation:0,CreationTimestamp:2020-08-19 02:47:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Aug 19 02:47:25.722: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1743,SelfLink:/api/v1/namespaces/watch-1743/configmaps/e2e-watch-test-configmap-a,UID:4502c276-d7d4-4ae2-8eac-c886b4590d81,ResourceVersion:966040,Generation:0,CreationTimestamp:2020-08-19 02:47:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 19 02:47:25.724: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1743,SelfLink:/api/v1/namespaces/watch-1743/configmaps/e2e-watch-test-configmap-a,UID:4502c276-d7d4-4ae2-8eac-c886b4590d81,ResourceVersion:966040,Generation:0,CreationTimestamp:2020-08-19 02:47:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Aug 19 02:47:35.735: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1743,SelfLink:/api/v1/namespaces/watch-1743/configmaps/e2e-watch-test-configmap-a,UID:4502c276-d7d4-4ae2-8eac-c886b4590d81,ResourceVersion:966061,Generation:0,CreationTimestamp:2020-08-19 02:47:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 19 02:47:35.736: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1743,SelfLink:/api/v1/namespaces/watch-1743/configmaps/e2e-watch-test-configmap-a,UID:4502c276-d7d4-4ae2-8eac-c886b4590d81,ResourceVersion:966061,Generation:0,CreationTimestamp:2020-08-19 02:47:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Aug 19 02:47:45.745: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1743,SelfLink:/api/v1/namespaces/watch-1743/configmaps/e2e-watch-test-configmap-b,UID:fd89daad-28c1-4154-8cb5-9a7fdcd449e3,ResourceVersion:966081,Generation:0,CreationTimestamp:2020-08-19 02:47:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 19 02:47:45.746: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1743,SelfLink:/api/v1/namespaces/watch-1743/configmaps/e2e-watch-test-configmap-b,UID:fd89daad-28c1-4154-8cb5-9a7fdcd449e3,ResourceVersion:966081,Generation:0,CreationTimestamp:2020-08-19 02:47:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Aug 19 02:47:55.754: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1743,SelfLink:/api/v1/namespaces/watch-1743/configmaps/e2e-watch-test-configmap-b,UID:fd89daad-28c1-4154-8cb5-9a7fdcd449e3,ResourceVersion:966101,Generation:0,CreationTimestamp:2020-08-19 02:47:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 19 02:47:55.755: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1743,SelfLink:/api/v1/namespaces/watch-1743/configmaps/e2e-watch-test-configmap-b,UID:fd89daad-28c1-4154-8cb5-9a7fdcd449e3,ResourceVersion:966101,Generation:0,CreationTimestamp:2020-08-19 02:47:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:48:05.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1743" for this suite.
Aug 19 02:48:11.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:48:11.995: INFO: namespace watch-1743 deletion completed in 6.225424894s

• [SLOW TEST:66.441 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
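
Each watcher above is a label-filtered watch on configmaps, which is why every event is observed twice: once by the matching single-label watcher and once by the A-or-B watcher. A minimal client-go sketch of the 'label A' watcher, assuming a client-go release contemporary with this v1.15 run (newer releases add a context.Context argument to Watch):

package main

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // One of the three watchers in the test: configmaps carrying label A only.
    w, err := cs.CoreV1().ConfigMaps("watch-1743").Watch(metav1.ListOptions{
        LabelSelector: "watch-this-configmap=multiple-watchers-A",
    })
    if err != nil {
        panic(err)
    }
    defer w.Stop()
    for ev := range w.ResultChan() {
        fmt.Printf("Got : %s %v\n", ev.Type, ev.Object) // ADDED / MODIFIED / DELETED
    }
}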
------------------------------
SSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:48:11.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug 19 02:48:12.137: INFO: Waiting up to 5m0s for pod "downward-api-9f99dc53-3b68-4918-ae48-2006f12b03e1" in namespace "downward-api-5496" to be "success or failure"
Aug 19 02:48:12.190: INFO: Pod "downward-api-9f99dc53-3b68-4918-ae48-2006f12b03e1": Phase="Pending", Reason="", readiness=false. Elapsed: 52.973505ms
Aug 19 02:48:14.302: INFO: Pod "downward-api-9f99dc53-3b68-4918-ae48-2006f12b03e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164393858s
Aug 19 02:48:16.309: INFO: Pod "downward-api-9f99dc53-3b68-4918-ae48-2006f12b03e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.171897175s
STEP: Saw pod success
Aug 19 02:48:16.310: INFO: Pod "downward-api-9f99dc53-3b68-4918-ae48-2006f12b03e1" satisfied condition "success or failure"
Aug 19 02:48:16.315: INFO: Trying to get logs from node iruya-worker2 pod downward-api-9f99dc53-3b68-4918-ae48-2006f12b03e1 container dapi-container: 
STEP: delete the pod
Aug 19 02:48:16.336: INFO: Waiting for pod downward-api-9f99dc53-3b68-4918-ae48-2006f12b03e1 to disappear
Aug 19 02:48:16.340: INFO: Pod downward-api-9f99dc53-3b68-4918-ae48-2006f12b03e1 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:48:16.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5496" for this suite.
Aug 19 02:48:22.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:48:22.650: INFO: namespace downward-api-5496 deletion completed in 6.302450902s

• [SLOW TEST:10.655 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
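
The dapi-container resolves limits.cpu and limits.memory through the downward API; since no limits are set on the container, the resolved values default to the node's allocatable resources, which is what the test asserts. A hedged Go sketch of such a pod (names are illustrative):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downward-api-defaults"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "dapi-container",
                Image:   "docker.io/library/busybox:1.29",
                Command: []string{"sh", "-c", "env"},
                // No resources.limits are set, so these env vars fall back to
                // the node's allocatable CPU and memory.
                Env: []corev1.EnvVar{
                    {Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
                        ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"}}},
                    {Name: "MEMORY_LIMIT", ValueFrom: &corev1.EnvVarSource{
                        ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"}}},
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}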
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:48:22.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 19 02:48:22.780: INFO: Waiting up to 5m0s for pod "pod-40f824f1-33e8-49fb-bdb3-75d266283417" in namespace "emptydir-9781" to be "success or failure"
Aug 19 02:48:22.785: INFO: Pod "pod-40f824f1-33e8-49fb-bdb3-75d266283417": Phase="Pending", Reason="", readiness=false. Elapsed: 4.541864ms
Aug 19 02:48:24.791: INFO: Pod "pod-40f824f1-33e8-49fb-bdb3-75d266283417": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010496552s
Aug 19 02:48:26.799: INFO: Pod "pod-40f824f1-33e8-49fb-bdb3-75d266283417": Phase="Running", Reason="", readiness=true. Elapsed: 4.01817178s
Aug 19 02:48:28.805: INFO: Pod "pod-40f824f1-33e8-49fb-bdb3-75d266283417": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.024814223s
STEP: Saw pod success
Aug 19 02:48:28.806: INFO: Pod "pod-40f824f1-33e8-49fb-bdb3-75d266283417" satisfied condition "success or failure"
Aug 19 02:48:28.810: INFO: Trying to get logs from node iruya-worker pod pod-40f824f1-33e8-49fb-bdb3-75d266283417 container test-container: 
STEP: delete the pod
Aug 19 02:48:28.851: INFO: Waiting for pod pod-40f824f1-33e8-49fb-bdb3-75d266283417 to disappear
Aug 19 02:48:28.892: INFO: Pod pod-40f824f1-33e8-49fb-bdb3-75d266283417 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:48:28.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9781" for this suite.
Aug 19 02:48:34.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:48:35.035: INFO: namespace emptydir-9781 deletion completed in 6.129378057s

• [SLOW TEST:12.381 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
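
This test writes a file into an emptyDir on the default medium as a non-root user and checks its 0777 mode. An illustrative Go sketch of a pod exercising the same combination (the conformance test's actual image and arguments differ):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    uid := int64(1000) // any non-root UID
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0777-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy:   corev1.RestartPolicyNever,
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                // Default medium: backed by node disk rather than tmpfs (Memory).
                VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
            }},
            Containers: []corev1.Container{{
                Name:    "test-container",
                Image:   "docker.io/library/busybox:1.29",
                Command: []string{"sh", "-c",
                    "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}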
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:48:35.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of the pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as an owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0819 02:48:46.870383       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 19 02:48:46.870: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:48:46.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7538" for this suite.
Aug 19 02:48:53.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:48:53.206: INFO: namespace gc-7538 deletion completed in 6.325316316s

• [SLOW TEST:18.168 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
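
Half of the to-be-deleted RC's pods carry a second owner reference to simpletest-rc-to-stay, so the garbage collector must leave them alone when the first owner is foreground-deleted. A Go sketch of what such a dual-owner dependent looks like (names and UIDs are placeholders):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    block := true
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name: "simpletest-pod",
            OwnerReferences: []metav1.OwnerReference{
                {
                    // The owner being foreground-deleted; it waits on dependents.
                    APIVersion:         "v1",
                    Kind:               "ReplicationController",
                    Name:               "simpletest-rc-to-be-deleted",
                    UID:                "00000000-0000-0000-0000-000000000001",
                    BlockOwnerDeletion: &block,
                },
                {
                    // The surviving owner; its presence keeps the pod alive.
                    APIVersion: "v1",
                    Kind:       "ReplicationController",
                    Name:       "simpletest-rc-to-stay",
                    UID:        "00000000-0000-0000-0000-000000000002",
                },
            },
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}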
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:48:53.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Aug 19 02:48:53.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7821 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Aug 19 02:48:57.823: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0819 02:48:57.695620    1615 log.go:172] (0x2b14070) (0x2b140e0) Create stream\nI0819 02:48:57.698069    1615 log.go:172] (0x2b14070) (0x2b140e0) Stream added, broadcasting: 1\nI0819 02:48:57.707404    1615 log.go:172] (0x2b14070) Reply frame received for 1\nI0819 02:48:57.707969    1615 log.go:172] (0x2b14070) (0x261c0e0) Create stream\nI0819 02:48:57.708048    1615 log.go:172] (0x2b14070) (0x261c0e0) Stream added, broadcasting: 3\nI0819 02:48:57.709738    1615 log.go:172] (0x2b14070) Reply frame received for 3\nI0819 02:48:57.710055    1615 log.go:172] (0x2b14070) (0x24ac2a0) Create stream\nI0819 02:48:57.710145    1615 log.go:172] (0x2b14070) (0x24ac2a0) Stream added, broadcasting: 5\nI0819 02:48:57.711245    1615 log.go:172] (0x2b14070) Reply frame received for 5\nI0819 02:48:57.711600    1615 log.go:172] (0x2b14070) (0x2b14150) Create stream\nI0819 02:48:57.711686    1615 log.go:172] (0x2b14070) (0x2b14150) Stream added, broadcasting: 7\nI0819 02:48:57.713116    1615 log.go:172] (0x2b14070) Reply frame received for 7\nI0819 02:48:57.715131    1615 log.go:172] (0x261c0e0) (3) Writing data frame\nI0819 02:48:57.715987    1615 log.go:172] (0x261c0e0) (3) Writing data frame\nI0819 02:48:57.716907    1615 log.go:172] (0x2b14070) Data frame received for 5\nI0819 02:48:57.717090    1615 log.go:172] (0x24ac2a0) (5) Data frame handling\nI0819 02:48:57.717382    1615 log.go:172] (0x24ac2a0) (5) Data frame sent\nI0819 02:48:57.717988    1615 log.go:172] (0x2b14070) Data frame received for 5\nI0819 02:48:57.718101    1615 log.go:172] (0x24ac2a0) (5) Data frame handling\nI0819 02:48:57.718224    1615 log.go:172] (0x24ac2a0) (5) Data frame sent\nI0819 02:48:57.748263    1615 log.go:172] (0x2b14070) Data frame received for 7\nI0819 02:48:57.748357    1615 log.go:172] (0x2b14150) (7) Data frame handling\nI0819 02:48:57.748630    1615 log.go:172] (0x2b14070) Data frame received for 5\nI0819 02:48:57.748901    1615 log.go:172] (0x24ac2a0) (5) Data frame handling\nI0819 02:48:57.749281    1615 log.go:172] (0x2b14070) Data frame received for 1\nI0819 02:48:57.749423    1615 log.go:172] (0x2b140e0) (1) Data frame handling\nI0819 02:48:57.749586    1615 log.go:172] (0x2b140e0) (1) Data frame sent\nI0819 02:48:57.750280    1615 log.go:172] (0x2b14070) (0x2b140e0) Stream removed, broadcasting: 1\nI0819 02:48:57.751155    1615 log.go:172] (0x2b14070) (0x261c0e0) Stream removed, broadcasting: 3\nI0819 02:48:57.753733    1615 log.go:172] (0x2b14070) Go away received\nI0819 02:48:57.754076    1615 log.go:172] (0x2b14070) (0x2b140e0) Stream removed, broadcasting: 1\nI0819 02:48:57.754839    1615 log.go:172] (0x2b14070) (0x261c0e0) Stream removed, broadcasting: 3\nI0819 02:48:57.760255    1615 log.go:172] Streams opened: 2, map[spdy.StreamId]*spdystream.Stream{0x5:(*spdystream.Stream)(0x24ac2a0), 0x7:(*spdystream.Stream)(0x2b14150)}\nI0819 02:48:57.760569    1615 log.go:172] (0x2b14070) (0x24ac2a0) Stream removed, broadcasting: 5\nI0819 02:48:57.761224    1615 log.go:172] Streams opened: 1, map[spdy.StreamId]*spdystream.Stream{0x7:(*spdystream.Stream)(0x2b14150)}\nI0819 02:48:57.761843    1615 log.go:172] (0x2b14070) (0x2b14150) Stream removed, broadcasting: 7\n"
Aug 19 02:48:57.824: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:48:59.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7821" for this suite.
Aug 19 02:49:05.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:49:06.039: INFO: namespace kubectl-7821 deletion completed in 6.14264409s

• [SLOW TEST:12.830 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
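The run above goes through the deprecated job/v1 generator, as the warning in stderr notes. A minimal sketch of the same create, wait, and clean-up flow with non-deprecated commands (job name illustrative; the stdin attachment used by the test is omitted):

  kubectl create job e2e-demo-job --image=docker.io/library/busybox:1.29 \
    -- sh -c 'echo stdin closed'
  kubectl wait --for=condition=complete job/e2e-demo-job --timeout=60s
  kubectl delete job e2e-demo-job   # mirrors the --rm cleanup the test verifies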
SS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:49:06.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:49:35.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-3555" for this suite.
Aug 19 02:49:41.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:49:41.459: INFO: namespace namespaces-3555 deletion completed in 6.125909063s
STEP: Destroying namespace "nsdeletetest-6731" for this suite.
Aug 19 02:49:41.462: INFO: Namespace nsdeletetest-6731 was already deleted
STEP: Destroying namespace "nsdeletetest-8583" for this suite.
Aug 19 02:49:47.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:49:47.582: INFO: namespace nsdeletetest-8583 deletion completed in 6.120408559s

• [SLOW TEST:41.542 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
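The guarantee exercised above can be reproduced by hand: deleting a namespace tears down its pods, and recreating a namespace with the same name yields an empty one. A sketch with illustrative names:

  kubectl create namespace nsdeletetest-demo
  kubectl run demo-pod --image=docker.io/library/busybox:1.29 --restart=Never \
    -n nsdeletetest-demo -- sleep 3600
  kubectl delete namespace nsdeletetest-demo   # blocks until the pod and namespace are gone
  kubectl create namespace nsdeletetest-demo
  kubectl get pods -n nsdeletetest-demo        # expect: No resources found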
SSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:49:47.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-4bb0788a-b24d-477d-b135-d32708940851
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:49:47.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4783" for this suite.
Aug 19 02:49:53.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:49:53.833: INFO: namespace secrets-4783 deletion completed in 6.117322208s

• [SLOW TEST:6.249 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
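The validation failure above happens server-side: a Secret whose data map contains an empty key is rejected on creation, so no object is ever stored. A sketch (manifest name illustrative):

  cat <<'EOF' | kubectl create -f -   # expect a validation error, not a created Secret
  apiVersion: v1
  kind: Secret
  metadata:
    name: secret-emptykey-demo
  data:
    "": dGVzdA==
  EOF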
SS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:49:53.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Aug 19 02:49:58.487: INFO: Successfully updated pod "labelsupdate62d2f605-b22e-4078-9573-10c6ae0d0eec"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:50:00.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4629" for this suite.
Aug 19 02:50:18.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:50:18.662: INFO: namespace downward-api-4629 deletion completed in 18.134557591s

• [SLOW TEST:24.829 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
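What the test verifies, sketched by hand: a downwardAPI volume file tracks label changes on a running pod, with the kubelet rewriting the file on its next sync after the update. All names are illustrative:

  cat <<'EOF' | kubectl create -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: labelsupdate-demo
    labels:
      stage: initial
  spec:
    containers:
    - name: main
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
      - {name: podinfo, mountPath: /etc/podinfo}
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: labels
          fieldRef: {fieldPath: metadata.labels}
  EOF
  kubectl label pod labelsupdate-demo stage=updated --overwrite
  kubectl exec labelsupdate-demo -- cat /etc/podinfo/labels   # refreshed after a kubelet sync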
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:50:18.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Aug 19 02:50:24.139: INFO: Successfully updated pod "annotationupdateb28d5d77-e680-4d01-9b1c-3806d26e6475"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:50:28.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9247" for this suite.
Aug 19 02:50:50.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:50:50.312: INFO: namespace projected-9247 deletion completed in 22.124614677s

• [SLOW TEST:31.649 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
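The projected variant is the same idea with the downwardAPI source nested under projected.sources, and annotations instead of labels. Only the volume stanza differs from the sketch above (names illustrative):

  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef: {fieldPath: metadata.annotations}

  kubectl annotate pod annotationupdate-demo note=updated --overwrite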
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:50:50.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-fgms
STEP: Creating a pod to test atomic-volume-subpath
Aug 19 02:50:50.398: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-fgms" in namespace "subpath-2328" to be "success or failure"
Aug 19 02:50:50.418: INFO: Pod "pod-subpath-test-downwardapi-fgms": Phase="Pending", Reason="", readiness=false. Elapsed: 19.812635ms
Aug 19 02:50:52.423: INFO: Pod "pod-subpath-test-downwardapi-fgms": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0247347s
Aug 19 02:50:54.430: INFO: Pod "pod-subpath-test-downwardapi-fgms": Phase="Running", Reason="", readiness=true. Elapsed: 4.031305072s
Aug 19 02:50:56.436: INFO: Pod "pod-subpath-test-downwardapi-fgms": Phase="Running", Reason="", readiness=true. Elapsed: 6.037412675s
Aug 19 02:50:58.442: INFO: Pod "pod-subpath-test-downwardapi-fgms": Phase="Running", Reason="", readiness=true. Elapsed: 8.043673349s
Aug 19 02:51:00.448: INFO: Pod "pod-subpath-test-downwardapi-fgms": Phase="Running", Reason="", readiness=true. Elapsed: 10.049739677s
Aug 19 02:51:02.453: INFO: Pod "pod-subpath-test-downwardapi-fgms": Phase="Running", Reason="", readiness=true. Elapsed: 12.055046184s
Aug 19 02:51:04.459: INFO: Pod "pod-subpath-test-downwardapi-fgms": Phase="Running", Reason="", readiness=true. Elapsed: 14.060450997s
Aug 19 02:51:06.465: INFO: Pod "pod-subpath-test-downwardapi-fgms": Phase="Running", Reason="", readiness=true. Elapsed: 16.066793212s
Aug 19 02:51:08.472: INFO: Pod "pod-subpath-test-downwardapi-fgms": Phase="Running", Reason="", readiness=true. Elapsed: 18.073072322s
Aug 19 02:51:10.476: INFO: Pod "pod-subpath-test-downwardapi-fgms": Phase="Running", Reason="", readiness=true. Elapsed: 20.077604015s
Aug 19 02:51:12.483: INFO: Pod "pod-subpath-test-downwardapi-fgms": Phase="Running", Reason="", readiness=true. Elapsed: 22.084290711s
Aug 19 02:51:14.489: INFO: Pod "pod-subpath-test-downwardapi-fgms": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.090687418s
STEP: Saw pod success
Aug 19 02:51:14.489: INFO: Pod "pod-subpath-test-downwardapi-fgms" satisfied condition "success or failure"
Aug 19 02:51:14.494: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-downwardapi-fgms container test-container-subpath-downwardapi-fgms: 
STEP: delete the pod
Aug 19 02:51:14.554: INFO: Waiting for pod pod-subpath-test-downwardapi-fgms to disappear
Aug 19 02:51:14.608: INFO: Pod pod-subpath-test-downwardapi-fgms no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-fgms
Aug 19 02:51:14.608: INFO: Deleting pod "pod-subpath-test-downwardapi-fgms" in namespace "subpath-2328"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:51:14.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2328" for this suite.
Aug 19 02:51:20.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:51:21.036: INFO: namespace subpath-2328 deletion completed in 6.415373375s

• [SLOW TEST:30.723 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
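Here subPath exposes a single entry of the atomically-written volume as the mount point rather than the whole directory. A sketch of the relevant container spec, assuming a downwardAPI volume named podinfo that publishes a labels file (names illustrative):

  containers:
  - name: test-container-subpath-demo
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/labels; sleep 2; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/labels
      subPath: labels   # mounts only the 'labels' file, not the whole volume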
SSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:51:21.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Aug 19 02:51:21.315: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-4767,SelfLink:/api/v1/namespaces/watch-4767/configmaps/e2e-watch-test-resource-version,UID:d10a6ff4-0f1a-489f-8ce7-34823dfcba27,ResourceVersion:966897,Generation:0,CreationTimestamp:2020-08-19 02:51:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 19 02:51:21.317: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-4767,SelfLink:/api/v1/namespaces/watch-4767/configmaps/e2e-watch-test-resource-version,UID:d10a6ff4-0f1a-489f-8ce7-34823dfcba27,ResourceVersion:966898,Generation:0,CreationTimestamp:2020-08-19 02:51:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:51:21.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4767" for this suite.
Aug 19 02:51:27.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:51:27.483: INFO: namespace watch-4767 deletion completed in 6.154297973s

• [SLOW TEST:6.444 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
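The behavior above is plain watch semantics: a watch opened at an older resourceVersion replays every change made after that version, which is why both the second MODIFIED and the DELETED event arrive even though the configmap is already gone. A sketch against the raw API (resourceVersion value illustrative):

  kubectl proxy --port=8001 &
  curl -N 'http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=1&resourceVersion=966896'
  # the stream replays the MODIFIED and DELETED events recorded after that version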
S
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:51:27.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 02:51:27.538: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Aug 19 02:51:27.552: INFO: Pod name sample-pod: Found 0 pods out of 1
Aug 19 02:51:32.559: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 19 02:51:32.559: INFO: Creating deployment "test-rolling-update-deployment"
Aug 19 02:51:32.565: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Aug 19 02:51:32.616: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Aug 19 02:51:34.628: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected one
Aug 19 02:51:34.631: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733402292, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733402292, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733402292, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733402292, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 19 02:51:36.637: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Aug 19 02:51:36.652: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-2346,SelfLink:/apis/apps/v1/namespaces/deployment-2346/deployments/test-rolling-update-deployment,UID:24dec506-6d66-49a1-94a2-62d5b5470199,ResourceVersion:966979,Generation:1,CreationTimestamp:2020-08-19 02:51:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-19 02:51:32 +0000 UTC 2020-08-19 02:51:32 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-19 02:51:35 +0000 UTC 2020-08-19 02:51:32 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Aug 19 02:51:36.658: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-2346,SelfLink:/apis/apps/v1/namespaces/deployment-2346/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:09a612b6-d0fe-4f97-849d-14ef607717f0,ResourceVersion:966968,Generation:1,CreationTimestamp:2020-08-19 02:51:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 24dec506-6d66-49a1-94a2-62d5b5470199 0x8c982e7 0x8c982e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Aug 19 02:51:36.658: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Aug 19 02:51:36.659: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-2346,SelfLink:/apis/apps/v1/namespaces/deployment-2346/replicasets/test-rolling-update-controller,UID:82fc665c-1731-4110-8c4d-f81eb4261d31,ResourceVersion:966977,Generation:2,CreationTimestamp:2020-08-19 02:51:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 24dec506-6d66-49a1-94a2-62d5b5470199 0x8c98217 0x8c98218}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 19 02:51:36.664: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-pdh47" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-pdh47,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-2346,SelfLink:/api/v1/namespaces/deployment-2346/pods/test-rolling-update-deployment-79f6b9d75c-pdh47,UID:545992ca-fb48-42e6-87ef-715d73a48292,ResourceVersion:966967,Generation:0,CreationTimestamp:2020-08-19 02:51:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 09a612b6-d0fe-4f97-849d-14ef607717f0 0x8c98fc7 0x8c98fc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wxq6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wxq6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-wxq6h true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8c990c0} {node.kubernetes.io/unreachable Exists  NoExecute 0x8c990f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 02:51:32 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 02:51:35 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 02:51:35 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 02:51:32 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.82,StartTime:2020-08-19 02:51:32 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-19 02:51:35 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://3a7a13df5bd88c26a77428b960f71c63c376707ea7583ff3d26fbe8277892566}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:51:36.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2346" for this suite.
Aug 19 02:51:42.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:51:42.898: INFO: namespace deployment-2346 deletion completed in 6.225961868s

• [SLOW TEST:15.414 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
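The rolling-update mechanics shown above (a new ReplicaSet is created, the adopted old one is scaled to zero) can be watched with ordinary commands; names and images are illustrative:

  kubectl create deployment rolling-demo --image=docker.io/library/nginx:1.14-alpine
  kubectl set image deployment/rolling-demo nginx=docker.io/library/nginx:1.15-alpine
  kubectl rollout status deployment/rolling-demo
  kubectl get replicasets -l app=rolling-demo   # old RS at 0 replicas, new RS owns the pod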
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:51:42.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 19 02:51:43.027: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 19 02:51:43.071: INFO: Number of nodes with available pods: 0
Aug 19 02:51:43.072: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 02:51:44.161: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 19 02:51:44.168: INFO: Number of nodes with available pods: 0
Aug 19 02:51:44.168: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 02:51:45.083: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 19 02:51:45.087: INFO: Number of nodes with available pods: 0
Aug 19 02:51:45.087: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 02:51:46.200: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 19 02:51:46.205: INFO: Number of nodes with available pods: 0
Aug 19 02:51:46.205: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 02:51:47.084: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 19 02:51:47.090: INFO: Number of nodes with available pods: 2
Aug 19 02:51:47.090: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Aug 19 02:51:47.119: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 19 02:51:47.140: INFO: Number of nodes with available pods: 1
Aug 19 02:51:47.140: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 02:51:48.171: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 19 02:51:48.178: INFO: Number of nodes with available pods: 1
Aug 19 02:51:48.179: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 02:51:49.152: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 19 02:51:49.157: INFO: Number of nodes with available pods: 1
Aug 19 02:51:49.158: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 02:51:50.152: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 19 02:51:50.159: INFO: Number of nodes with available pods: 1
Aug 19 02:51:50.159: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 02:51:51.151: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 19 02:51:51.156: INFO: Number of nodes with available pods: 2
Aug 19 02:51:51.156: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4328, will wait for the garbage collector to delete the pods
Aug 19 02:51:51.226: INFO: Deleting DaemonSet.extensions daemon-set took: 7.196623ms
Aug 19 02:51:51.327: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.573358ms
Aug 19 02:52:03.337: INFO: Number of nodes with available pods: 0
Aug 19 02:52:03.337: INFO: Number of running nodes: 0, number of available pods: 0
Aug 19 02:52:03.351: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4328/daemonsets","resourceVersion":"967114"},"items":null}

Aug 19 02:52:03.389: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4328/pods","resourceVersion":"967114"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:52:03.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4328" for this suite.
Aug 19 02:52:09.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:52:09.545: INFO: namespace daemonsets-4328 deletion completed in 6.12928133s

• [SLOW TEST:26.646 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
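The revival behavior is observable on any DaemonSet: remove one of its pods and the controller recreates it on the same node. A sketch against the kube-proxy DaemonSet, assuming its conventional k8s-app=kube-proxy label:

  kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide
  kubectl -n kube-system delete pod <one-kube-proxy-pod>
  kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide -w   # a replacement appears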
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:52:09.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Aug 19 02:52:09.640: INFO: Waiting up to 5m0s for pod "client-containers-dfbea8bf-caf7-4ffc-930f-c4806c5c7959" in namespace "containers-3079" to be "success or failure"
Aug 19 02:52:09.649: INFO: Pod "client-containers-dfbea8bf-caf7-4ffc-930f-c4806c5c7959": Phase="Pending", Reason="", readiness=false. Elapsed: 8.803169ms
Aug 19 02:52:11.662: INFO: Pod "client-containers-dfbea8bf-caf7-4ffc-930f-c4806c5c7959": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021623395s
Aug 19 02:52:13.668: INFO: Pod "client-containers-dfbea8bf-caf7-4ffc-930f-c4806c5c7959": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027734103s
STEP: Saw pod success
Aug 19 02:52:13.669: INFO: Pod "client-containers-dfbea8bf-caf7-4ffc-930f-c4806c5c7959" satisfied condition "success or failure"
Aug 19 02:52:13.673: INFO: Trying to get logs from node iruya-worker pod client-containers-dfbea8bf-caf7-4ffc-930f-c4806c5c7959 container test-container: 
STEP: delete the pod
Aug 19 02:52:13.796: INFO: Waiting for pod client-containers-dfbea8bf-caf7-4ffc-930f-c4806c5c7959 to disappear
Aug 19 02:52:13.801: INFO: Pod client-containers-dfbea8bf-caf7-4ffc-930f-c4806c5c7959 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:52:13.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3079" for this suite.
Aug 19 02:52:19.821: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:52:19.943: INFO: namespace containers-3079 deletion completed in 6.133763539s

• [SLOW TEST:10.395 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
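The semantics under test: spec.containers[].command replaces the image ENTRYPOINT and args replaces the image CMD, so setting both overrides everything the image ships. A self-contained sketch (names illustrative):

  cat <<'EOF' | kubectl create -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: client-containers-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: docker.io/library/busybox:1.29
      command: ["/bin/echo"]            # overrides the image ENTRYPOINT
      args: ["override", "arguments"]   # overrides the image CMD
  EOF
  kubectl logs client-containers-demo   # once completed, prints: override arguments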
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:52:19.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-c76db76d-d173-4133-a4d7-4bfe021d70ba
STEP: Creating a pod to test consume configMaps
Aug 19 02:52:20.030: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0ed7d526-dafb-4367-a2be-b16340f2e058" in namespace "projected-2029" to be "success or failure"
Aug 19 02:52:20.094: INFO: Pod "pod-projected-configmaps-0ed7d526-dafb-4367-a2be-b16340f2e058": Phase="Pending", Reason="", readiness=false. Elapsed: 64.044866ms
Aug 19 02:52:22.101: INFO: Pod "pod-projected-configmaps-0ed7d526-dafb-4367-a2be-b16340f2e058": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070416447s
Aug 19 02:52:24.108: INFO: Pod "pod-projected-configmaps-0ed7d526-dafb-4367-a2be-b16340f2e058": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07767195s
STEP: Saw pod success
Aug 19 02:52:24.108: INFO: Pod "pod-projected-configmaps-0ed7d526-dafb-4367-a2be-b16340f2e058" satisfied condition "success or failure"
Aug 19 02:52:24.113: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-0ed7d526-dafb-4367-a2be-b16340f2e058 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 19 02:52:24.227: INFO: Waiting for pod pod-projected-configmaps-0ed7d526-dafb-4367-a2be-b16340f2e058 to disappear
Aug 19 02:52:24.251: INFO: Pod pod-projected-configmaps-0ed7d526-dafb-4367-a2be-b16340f2e058 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:52:24.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2029" for this suite.
Aug 19 02:52:30.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:52:30.394: INFO: namespace projected-2029 deletion completed in 6.136825495s

• [SLOW TEST:10.450 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
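"With mappings" means the volume uses items to rename keys on their way to files. A sketch (names illustrative):

  kubectl create configmap projected-demo --from-literal=special.key=value-1
  cat <<'EOF' | kubectl create -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-configmaps-demo
  spec:
    restartPolicy: Never
    containers:
    - name: projected-configmap-volume-test
      image: docker.io/library/busybox:1.29
      command: ["cat", "/etc/projected/renamed-key"]
      volumeMounts:
      - {name: cm, mountPath: /etc/projected}
    volumes:
    - name: cm
      projected:
        sources:
        - configMap:
            name: projected-demo
            items:
            - {key: special.key, path: renamed-key}
  EOF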
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:52:30.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-9468abb4-98ab-445f-b1b3-caf9ebc00aa1
STEP: Creating a pod to test consume configMaps
Aug 19 02:52:30.490: INFO: Waiting up to 5m0s for pod "pod-configmaps-f6afc77a-ae36-4379-a41b-864bdcd48849" in namespace "configmap-3353" to be "success or failure"
Aug 19 02:52:30.501: INFO: Pod "pod-configmaps-f6afc77a-ae36-4379-a41b-864bdcd48849": Phase="Pending", Reason="", readiness=false. Elapsed: 10.414533ms
Aug 19 02:52:32.506: INFO: Pod "pod-configmaps-f6afc77a-ae36-4379-a41b-864bdcd48849": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015567537s
Aug 19 02:52:34.512: INFO: Pod "pod-configmaps-f6afc77a-ae36-4379-a41b-864bdcd48849": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021447512s
STEP: Saw pod success
Aug 19 02:52:34.512: INFO: Pod "pod-configmaps-f6afc77a-ae36-4379-a41b-864bdcd48849" satisfied condition "success or failure"
Aug 19 02:52:34.525: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-f6afc77a-ae36-4379-a41b-864bdcd48849 container configmap-volume-test: 
STEP: delete the pod
Aug 19 02:52:34.581: INFO: Waiting for pod pod-configmaps-f6afc77a-ae36-4379-a41b-864bdcd48849 to disappear
Aug 19 02:52:34.596: INFO: Pod pod-configmaps-f6afc77a-ae36-4379-a41b-864bdcd48849 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:52:34.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3353" for this suite.
Aug 19 02:52:40.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:52:40.876: INFO: namespace configmap-3353 deletion completed in 6.274178895s

• [SLOW TEST:10.481 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
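The item-mode variant additionally pins a per-file mode; relative to the sketch above, only the volume stanza changes (and a plain configMap source replaces the projected one):

  volumes:
  - name: cm
    configMap:
      name: projected-demo
      items:
      - key: special.key
        path: renamed-key
        mode: 0400   # per-item file mode, which the test container stats and checks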
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:52:40.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-cdc70e1e-75eb-4b43-922e-3ba63d37658c
STEP: Creating a pod to test consume secrets
Aug 19 02:52:41.527: INFO: Waiting up to 5m0s for pod "pod-secrets-d7ee40b9-5961-4364-91ad-6cbb7d5a5f73" in namespace "secrets-6335" to be "success or failure"
Aug 19 02:52:41.603: INFO: Pod "pod-secrets-d7ee40b9-5961-4364-91ad-6cbb7d5a5f73": Phase="Pending", Reason="", readiness=false. Elapsed: 76.104933ms
Aug 19 02:52:43.611: INFO: Pod "pod-secrets-d7ee40b9-5961-4364-91ad-6cbb7d5a5f73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083832763s
Aug 19 02:52:45.744: INFO: Pod "pod-secrets-d7ee40b9-5961-4364-91ad-6cbb7d5a5f73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.216955374s
STEP: Saw pod success
Aug 19 02:52:45.744: INFO: Pod "pod-secrets-d7ee40b9-5961-4364-91ad-6cbb7d5a5f73" satisfied condition "success or failure"
Aug 19 02:52:45.748: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-d7ee40b9-5961-4364-91ad-6cbb7d5a5f73 container secret-volume-test: 
STEP: delete the pod
Aug 19 02:52:45.820: INFO: Waiting for pod pod-secrets-d7ee40b9-5961-4364-91ad-6cbb7d5a5f73 to disappear
Aug 19 02:52:45.956: INFO: Pod pod-secrets-d7ee40b9-5961-4364-91ad-6cbb7d5a5f73 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:52:45.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6335" for this suite.
Aug 19 02:52:52.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:52:52.157: INFO: namespace secrets-6335 deletion completed in 6.192936696s

• [SLOW TEST:11.279 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
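The Secret flavor is symmetric, with secretName in place of the configMap reference (names illustrative):

  kubectl create secret generic secret-demo --from-literal=data-1=value-1
  # volume stanza for the consuming pod:
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-demo
      items:
      - {key: data-1, path: new-path-data-1, mode: 0400}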
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:52:52.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0819 02:53:02.247056       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 19 02:53:02.247: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:53:02.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2217" for this suite.
Aug 19 02:53:08.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:53:08.363: INFO: namespace gc-2217 deletion completed in 6.110612454s

• [SLOW TEST:16.205 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
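The deletion behaviour above hinges on ownerReferences: pods created by a ReplicationController point back at it, so deleting the RC with a non-orphaning propagation policy lets the garbage collector remove them, which is what the "wait for all pods to be garbage collected" step polls for. A minimal RC of the kind the test creates, with illustrative names and image:

  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: simpletest-rc                       # illustrative
  spec:
    replicas: 2
    selector:
      name: simpletest
    template:
      metadata:
        labels:
          name: simpletest
      spec:
        containers:
        - name: nginx
          image: nginx                        # illustrative image

Deleting it with a Background or Foreground propagation policy removes the dependent pods; an Orphan policy would instead leave them running.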
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:53:08.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-73116a42-c1f7-446b-92bd-13c325c3f5a4
STEP: Creating secret with name s-test-opt-upd-2a43db25-7dbe-477f-be1e-654cc785c4c8
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-73116a42-c1f7-446b-92bd-13c325c3f5a4
STEP: Updating secret s-test-opt-upd-2a43db25-7dbe-477f-be1e-654cc785c4c8
STEP: Creating secret with name s-test-opt-create-8c59956e-1c46-434c-b05c-50c1901564f2
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:54:44.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3066" for this suite.
Aug 19 02:55:06.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:55:07.056: INFO: namespace secrets-3066 deletion completed in 22.161371434s

• [SLOW TEST:118.691 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
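The "optional" in this test refers to the optional field on the secret volume source: the pod starts even when the referenced Secret does not exist yet, and the kubelet later reflects creates, updates, and deletes in the mounted files, which is what "waiting to observe update in volume" waits on. A sketch with illustrative names:

  apiVersion: v1
  kind: Pod
  metadata:
    name: optional-secret-demo                # illustrative
  spec:
    containers:
    - name: creates-volume-test
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
      - name: opt-secret
        mountPath: /etc/secret-volumes/create
    volumes:
    - name: opt-secret
      secret:
        secretName: s-test-opt-create-example # may not exist when the pod starts
        optional: true                        # tolerate a missing Secret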
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:55:07.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 19 02:55:07.240: INFO: Waiting up to 5m0s for pod "pod-7cb15cc6-9395-4db1-ae1a-8042d7a39b78" in namespace "emptydir-4440" to be "success or failure"
Aug 19 02:55:07.324: INFO: Pod "pod-7cb15cc6-9395-4db1-ae1a-8042d7a39b78": Phase="Pending", Reason="", readiness=false. Elapsed: 83.146382ms
Aug 19 02:55:09.780: INFO: Pod "pod-7cb15cc6-9395-4db1-ae1a-8042d7a39b78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.539106849s
Aug 19 02:55:11.786: INFO: Pod "pod-7cb15cc6-9395-4db1-ae1a-8042d7a39b78": Phase="Pending", Reason="", readiness=false. Elapsed: 4.545125704s
Aug 19 02:55:13.793: INFO: Pod "pod-7cb15cc6-9395-4db1-ae1a-8042d7a39b78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.552273318s
STEP: Saw pod success
Aug 19 02:55:13.793: INFO: Pod "pod-7cb15cc6-9395-4db1-ae1a-8042d7a39b78" satisfied condition "success or failure"
Aug 19 02:55:13.799: INFO: Trying to get logs from node iruya-worker2 pod pod-7cb15cc6-9395-4db1-ae1a-8042d7a39b78 container test-container: 
STEP: delete the pod
Aug 19 02:55:14.077: INFO: Waiting for pod pod-7cb15cc6-9395-4db1-ae1a-8042d7a39b78 to disappear
Aug 19 02:55:14.277: INFO: Pod pod-7cb15cc6-9395-4db1-ae1a-8042d7a39b78 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:55:14.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4440" for this suite.
Aug 19 02:55:20.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:55:20.621: INFO: namespace emptydir-4440 deletion completed in 6.285470898s

• [SLOW TEST:13.565 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
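The emptyDir permission tests in this run all follow one pattern: mount an emptyDir, write a file with the mode named in the test title as the named user, and verify what the container observes. A sketch of the non-root/0644/default-medium case; the image, user id, and paths are illustrative:

  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-mode-demo                  # illustrative
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000                         # the "non-root" part of the test name
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "echo data > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}                            # "default" medium; medium: Memory would select tmpfs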
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:55:20.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 19 02:55:21.249: INFO: Waiting up to 5m0s for pod "pod-ac79c671-dbc4-4172-977c-e04c8e0f8f95" in namespace "emptydir-4730" to be "success or failure"
Aug 19 02:55:21.283: INFO: Pod "pod-ac79c671-dbc4-4172-977c-e04c8e0f8f95": Phase="Pending", Reason="", readiness=false. Elapsed: 33.666358ms
Aug 19 02:55:23.349: INFO: Pod "pod-ac79c671-dbc4-4172-977c-e04c8e0f8f95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099702225s
Aug 19 02:55:25.403: INFO: Pod "pod-ac79c671-dbc4-4172-977c-e04c8e0f8f95": Phase="Pending", Reason="", readiness=false. Elapsed: 4.15381395s
Aug 19 02:55:27.411: INFO: Pod "pod-ac79c671-dbc4-4172-977c-e04c8e0f8f95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.161081321s
STEP: Saw pod success
Aug 19 02:55:27.411: INFO: Pod "pod-ac79c671-dbc4-4172-977c-e04c8e0f8f95" satisfied condition "success or failure"
Aug 19 02:55:27.415: INFO: Trying to get logs from node iruya-worker pod pod-ac79c671-dbc4-4172-977c-e04c8e0f8f95 container test-container: 
STEP: delete the pod
Aug 19 02:55:27.819: INFO: Waiting for pod pod-ac79c671-dbc4-4172-977c-e04c8e0f8f95 to disappear
Aug 19 02:55:28.037: INFO: Pod pod-ac79c671-dbc4-4172-977c-e04c8e0f8f95 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:55:28.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4730" for this suite.
Aug 19 02:55:34.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:55:34.185: INFO: namespace emptydir-4730 deletion completed in 6.139978035s

• [SLOW TEST:13.560 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:55:34.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Aug 19 02:55:34.246: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 19 02:55:34.298: INFO: Waiting for terminating namespaces to be deleted...
Aug 19 02:55:34.301: INFO: 
Logging pods the kubelet thinks are on node iruya-worker before test
Aug 19 02:55:34.314: INFO: kube-proxy-5zw8s from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container status recorded)
Aug 19 02:55:34.314: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 19 02:55:34.314: INFO: kindnet-nkf5n from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container status recorded)
Aug 19 02:55:34.314: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 19 02:55:34.314: INFO: 
Logging pods the kubelet thinks are on node iruya-worker2 before test
Aug 19 02:55:34.326: INFO: kindnet-xsdzz from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container status recorded)
Aug 19 02:55:34.326: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 19 02:55:34.326: INFO: kube-proxy-b98qt from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container status recorded)
Aug 19 02:55:34.326: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-af727f18-24ab-430b-9020-1f90109bf4b5 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-af727f18-24ab-430b-9020-1f90109bf4b5 off the node iruya-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-af727f18-24ab-430b-9020-1f90109bf4b5
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:55:42.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3300" for this suite.
Aug 19 02:55:54.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:55:55.057: INFO: namespace sched-pred-3300 deletion completed in 12.306369806s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:20.870 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
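The relaunch step above corresponds to a pod whose nodeSelector names the label just applied to iruya-worker, so the scheduler may only place it there. A sketch; the image is illustrative, while the label key and value are the ones logged above:

  apiVersion: v1
  kind: Pod
  metadata:
    name: with-labels
  spec:
    nodeSelector:
      kubernetes.io/e2e-af727f18-24ab-430b-9020-1f90109bf4b5: "42"
    containers:
    - name: with-labels
      image: k8s.gcr.io/pause:3.1             # illustrative image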
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:55:55.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-8a6da332-0254-406d-b800-6e8f0df26a5b
STEP: Creating a pod to test consume configMaps
Aug 19 02:55:55.662: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-323f7d9a-e5b3-4a76-959c-6ad4d5494256" in namespace "projected-7895" to be "success or failure"
Aug 19 02:55:55.716: INFO: Pod "pod-projected-configmaps-323f7d9a-e5b3-4a76-959c-6ad4d5494256": Phase="Pending", Reason="", readiness=false. Elapsed: 53.512511ms
Aug 19 02:55:57.721: INFO: Pod "pod-projected-configmaps-323f7d9a-e5b3-4a76-959c-6ad4d5494256": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059190524s
Aug 19 02:55:59.727: INFO: Pod "pod-projected-configmaps-323f7d9a-e5b3-4a76-959c-6ad4d5494256": Phase="Running", Reason="", readiness=true. Elapsed: 4.064543707s
Aug 19 02:56:01.731: INFO: Pod "pod-projected-configmaps-323f7d9a-e5b3-4a76-959c-6ad4d5494256": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06859204s
STEP: Saw pod success
Aug 19 02:56:01.731: INFO: Pod "pod-projected-configmaps-323f7d9a-e5b3-4a76-959c-6ad4d5494256" satisfied condition "success or failure"
Aug 19 02:56:01.776: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-323f7d9a-e5b3-4a76-959c-6ad4d5494256 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 19 02:56:01.919: INFO: Waiting for pod pod-projected-configmaps-323f7d9a-e5b3-4a76-959c-6ad4d5494256 to disappear
Aug 19 02:56:01.947: INFO: Pod pod-projected-configmaps-323f7d9a-e5b3-4a76-959c-6ad4d5494256 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:56:01.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7895" for this suite.
Aug 19 02:56:07.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:56:08.092: INFO: namespace projected-7895 deletion completed in 6.139303043s

• [SLOW TEST:13.033 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
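Same idea as the Secret mapping test earlier, but through a projected volume source: the ConfigMap key is remapped to a new path and given an explicit per-item mode. An illustrative sketch:

  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-configmap-demo            # illustrative
  spec:
    restartPolicy: Never
    containers:
    - name: projected-configmap-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"]
      volumeMounts:
      - name: projected-configmap-volume
        mountPath: /etc/projected-configmap-volume
    volumes:
    - name: projected-configmap-volume
      projected:
        sources:
        - configMap:
            name: projected-configmap-test-volume-map-example   # illustrative
            items:
            - key: data-2
              path: path/to/data-2
              mode: 0400                      # the "Item mode" being verified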
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:56:08.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Aug 19 02:56:08.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8931'
Aug 19 02:56:09.924: INFO: stderr: ""
Aug 19 02:56:09.925: INFO: stdout: "pod/pause created\n"
Aug 19 02:56:09.925: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Aug 19 02:56:09.925: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-8931" to be "running and ready"
Aug 19 02:56:09.937: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 10.885337ms
Aug 19 02:56:12.001: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075674062s
Aug 19 02:56:14.181: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.254994593s
Aug 19 02:56:14.181: INFO: Pod "pause" satisfied condition "running and ready"
Aug 19 02:56:14.181: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Aug 19 02:56:14.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-8931'
Aug 19 02:56:21.850: INFO: stderr: ""
Aug 19 02:56:21.850: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Aug 19 02:56:21.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8931'
Aug 19 02:56:22.938: INFO: stderr: ""
Aug 19 02:56:22.938: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          13s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Aug 19 02:56:22.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-8931'
Aug 19 02:56:24.457: INFO: stderr: ""
Aug 19 02:56:24.457: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Aug 19 02:56:24.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8931'
Aug 19 02:56:25.548: INFO: stderr: ""
Aug 19 02:56:25.548: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          16s   \n"
[AfterEach] [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Aug 19 02:56:25.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8931'
Aug 19 02:56:26.700: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 19 02:56:26.700: INFO: stdout: "pod \"pause\" force deleted\n"
Aug 19 02:56:26.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-8931'
Aug 19 02:56:27.798: INFO: stderr: "No resources found.\n"
Aug 19 02:56:27.798: INFO: stdout: ""
Aug 19 02:56:27.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-8931 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 19 02:56:28.895: INFO: stderr: ""
Aug 19 02:56:28.895: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:56:28.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8931" for this suite.
Aug 19 02:56:35.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:56:35.411: INFO: namespace kubectl-8931 deletion completed in 6.50722393s

• [SLOW TEST:27.319 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:56:35.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 19 02:56:35.680: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4ccaafaa-5904-4f42-8b7f-5c32d1e15eda" in namespace "projected-7195" to be "success or failure"
Aug 19 02:56:35.690: INFO: Pod "downwardapi-volume-4ccaafaa-5904-4f42-8b7f-5c32d1e15eda": Phase="Pending", Reason="", readiness=false. Elapsed: 10.120207ms
Aug 19 02:56:37.770: INFO: Pod "downwardapi-volume-4ccaafaa-5904-4f42-8b7f-5c32d1e15eda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08984443s
Aug 19 02:56:39.775: INFO: Pod "downwardapi-volume-4ccaafaa-5904-4f42-8b7f-5c32d1e15eda": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095255526s
Aug 19 02:56:41.782: INFO: Pod "downwardapi-volume-4ccaafaa-5904-4f42-8b7f-5c32d1e15eda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.102050318s
STEP: Saw pod success
Aug 19 02:56:41.782: INFO: Pod "downwardapi-volume-4ccaafaa-5904-4f42-8b7f-5c32d1e15eda" satisfied condition "success or failure"
Aug 19 02:56:42.074: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-4ccaafaa-5904-4f42-8b7f-5c32d1e15eda container client-container: 
STEP: delete the pod
Aug 19 02:56:42.292: INFO: Waiting for pod downwardapi-volume-4ccaafaa-5904-4f42-8b7f-5c32d1e15eda to disappear
Aug 19 02:56:42.379: INFO: Pod downwardapi-volume-4ccaafaa-5904-4f42-8b7f-5c32d1e15eda no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:56:42.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7195" for this suite.
Aug 19 02:56:48.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:56:48.788: INFO: namespace projected-7195 deletion completed in 6.401254965s

• [SLOW TEST:13.374 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
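Here the projected volume's source is the downward API: each item maps a pod field to a file, and the test checks the per-item mode on that file. A sketch with illustrative names:

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-item-mode-demo          # illustrative
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "ls -l /etc/podinfo/name"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: name
              fieldRef:
                fieldPath: metadata.name
              mode: 0400                      # the mode on the item file, per the test title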
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:56:48.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 19 02:56:54.388: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:56:54.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7349" for this suite.
Aug 19 02:57:00.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:57:01.003: INFO: namespace container-runtime-7349 deletion completed in 6.378327229s

• [SLOW TEST:12.211 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
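The "Expected: &{DONE}" line above is the kubelet surfacing whatever the container wrote to its terminationMessagePath in the container status after exit. The non-default-path, non-root variant looks roughly like this; the path and user id are illustrative:

  apiVersion: v1
  kind: Pod
  metadata:
    name: termination-message-demo            # illustrative
  spec:
    restartPolicy: Never
    containers:
    - name: termination-message-container
      image: busybox
      command: ["sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
      terminationMessagePath: /dev/termination-custom-log   # non-default path
      securityContext:
        runAsUser: 1000                       # non-root user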
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:57:01.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 19 02:57:01.189: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d6786f9e-b103-4747-b30b-ccbb7f0de774" in namespace "projected-6007" to be "success or failure"
Aug 19 02:57:01.313: INFO: Pod "downwardapi-volume-d6786f9e-b103-4747-b30b-ccbb7f0de774": Phase="Pending", Reason="", readiness=false. Elapsed: 123.742507ms
Aug 19 02:57:03.655: INFO: Pod "downwardapi-volume-d6786f9e-b103-4747-b30b-ccbb7f0de774": Phase="Pending", Reason="", readiness=false. Elapsed: 2.465645255s
Aug 19 02:57:05.661: INFO: Pod "downwardapi-volume-d6786f9e-b103-4747-b30b-ccbb7f0de774": Phase="Running", Reason="", readiness=true. Elapsed: 4.471236714s
Aug 19 02:57:07.665: INFO: Pod "downwardapi-volume-d6786f9e-b103-4747-b30b-ccbb7f0de774": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.475563484s
STEP: Saw pod success
Aug 19 02:57:07.665: INFO: Pod "downwardapi-volume-d6786f9e-b103-4747-b30b-ccbb7f0de774" satisfied condition "success or failure"
Aug 19 02:57:07.800: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-d6786f9e-b103-4747-b30b-ccbb7f0de774 container client-container: 
STEP: delete the pod
Aug 19 02:57:08.006: INFO: Waiting for pod downwardapi-volume-d6786f9e-b103-4747-b30b-ccbb7f0de774 to disappear
Aug 19 02:57:08.105: INFO: Pod downwardapi-volume-d6786f9e-b103-4747-b30b-ccbb7f0de774 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 02:57:08.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6007" for this suite.
Aug 19 02:57:14.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 02:57:14.244: INFO: namespace projected-6007 deletion completed in 6.127476722s

• [SLOW TEST:13.240 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
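The downward API exposes resource limits through resourceFieldRef; when the container declares no memory limit, the reported value falls back to the node's allocatable memory, which is the assertion here. The cpu-limit test later in this run uses the same mechanism with resource: limits.cpu. A sketch with illustrative names:

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-limits-demo             # illustrative
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
      # no resources.limits.memory set, so the file reports node allocatable memory
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: memory_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.memory
                divisor: 1Mi                  # report the value in mebibytes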
SSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 02:57:14.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-8abb33a1-0e7f-4331-90e5-54d3f1a6d159 in namespace container-probe-6381
Aug 19 02:57:22.466: INFO: Started pod busybox-8abb33a1-0e7f-4331-90e5-54d3f1a6d159 in namespace container-probe-6381
STEP: checking the pod's current state and verifying that restartCount is present
Aug 19 02:57:22.472: INFO: Initial restart count of pod busybox-8abb33a1-0e7f-4331-90e5-54d3f1a6d159 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:01:23.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6381" for this suite.
Aug 19 03:01:29.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:01:30.013: INFO: namespace container-probe-6381 deletion completed in 6.17669983s

• [SLOW TEST:255.768 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
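The pod touches /tmp/health at startup and the exec probe simply cats it, so the probe keeps succeeding and restartCount stays 0 for the whole four-minute observation window logged above. A sketch; the image and timings are illustrative:

  apiVersion: v1
  kind: Pod
  metadata:
    name: busybox-liveness-demo               # illustrative
  spec:
    containers:
    - name: busybox
      image: busybox
      command: ["sh", "-c", "touch /tmp/health; sleep 600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/health"]
        initialDelaySeconds: 5
        periodSeconds: 5
        failureThreshold: 3                   # never reached while /tmp/health exists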
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:01:30.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Aug 19 03:01:30.184: INFO: Waiting up to 5m0s for pod "pod-92380155-85cf-4041-b95d-d14878e8964b" in namespace "emptydir-6566" to be "success or failure"
Aug 19 03:01:30.189: INFO: Pod "pod-92380155-85cf-4041-b95d-d14878e8964b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.540589ms
Aug 19 03:01:32.195: INFO: Pod "pod-92380155-85cf-4041-b95d-d14878e8964b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011234861s
Aug 19 03:01:34.201: INFO: Pod "pod-92380155-85cf-4041-b95d-d14878e8964b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017636578s
STEP: Saw pod success
Aug 19 03:01:34.202: INFO: Pod "pod-92380155-85cf-4041-b95d-d14878e8964b" satisfied condition "success or failure"
Aug 19 03:01:34.206: INFO: Trying to get logs from node iruya-worker pod pod-92380155-85cf-4041-b95d-d14878e8964b container test-container: 
STEP: delete the pod
Aug 19 03:01:34.337: INFO: Waiting for pod pod-92380155-85cf-4041-b95d-d14878e8964b to disappear
Aug 19 03:01:34.387: INFO: Pod pod-92380155-85cf-4041-b95d-d14878e8964b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:01:34.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6566" for this suite.
Aug 19 03:01:40.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:01:40.558: INFO: namespace emptydir-6566 deletion completed in 6.159657746s

• [SLOW TEST:10.543 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:01:40.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 19 03:01:40.738: INFO: Waiting up to 5m0s for pod "downwardapi-volume-53d62621-2c01-40f4-8a98-07ccb6ea2354" in namespace "downward-api-4906" to be "success or failure"
Aug 19 03:01:40.765: INFO: Pod "downwardapi-volume-53d62621-2c01-40f4-8a98-07ccb6ea2354": Phase="Pending", Reason="", readiness=false. Elapsed: 26.680466ms
Aug 19 03:01:42.863: INFO: Pod "downwardapi-volume-53d62621-2c01-40f4-8a98-07ccb6ea2354": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123922672s
Aug 19 03:01:44.868: INFO: Pod "downwardapi-volume-53d62621-2c01-40f4-8a98-07ccb6ea2354": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.129198541s
STEP: Saw pod success
Aug 19 03:01:44.868: INFO: Pod "downwardapi-volume-53d62621-2c01-40f4-8a98-07ccb6ea2354" satisfied condition "success or failure"
Aug 19 03:01:44.872: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-53d62621-2c01-40f4-8a98-07ccb6ea2354 container client-container: 
STEP: delete the pod
Aug 19 03:01:45.174: INFO: Waiting for pod downwardapi-volume-53d62621-2c01-40f4-8a98-07ccb6ea2354 to disappear
Aug 19 03:01:45.240: INFO: Pod downwardapi-volume-53d62621-2c01-40f4-8a98-07ccb6ea2354 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:01:45.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4906" for this suite.
Aug 19 03:01:51.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:01:51.383: INFO: namespace downward-api-4906 deletion completed in 6.130642882s

• [SLOW TEST:10.824 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:01:51.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Aug 19 03:01:51.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Aug 19 03:01:52.557: INFO: stderr: ""
Aug 19 03:01:52.557: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:01:52.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5535" for this suite.
Aug 19 03:01:58.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:01:58.706: INFO: namespace kubectl-5535 deletion completed in 6.137954104s

• [SLOW TEST:7.321 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:01:58.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-fd42b9ce-b623-44c5-ba1e-beae6fd98d25
STEP: Creating a pod to test consume secrets
Aug 19 03:01:58.845: INFO: Waiting up to 5m0s for pod "pod-secrets-a6c173d9-467a-4c6e-864e-ef642c1bd405" in namespace "secrets-5873" to be "success or failure"
Aug 19 03:01:58.850: INFO: Pod "pod-secrets-a6c173d9-467a-4c6e-864e-ef642c1bd405": Phase="Pending", Reason="", readiness=false. Elapsed: 4.99294ms
Aug 19 03:02:00.882: INFO: Pod "pod-secrets-a6c173d9-467a-4c6e-864e-ef642c1bd405": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03670334s
Aug 19 03:02:02.888: INFO: Pod "pod-secrets-a6c173d9-467a-4c6e-864e-ef642c1bd405": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042910705s
STEP: Saw pod success
Aug 19 03:02:02.888: INFO: Pod "pod-secrets-a6c173d9-467a-4c6e-864e-ef642c1bd405" satisfied condition "success or failure"
Aug 19 03:02:02.893: INFO: Trying to get logs from node iruya-worker pod pod-secrets-a6c173d9-467a-4c6e-864e-ef642c1bd405 container secret-volume-test: 
STEP: delete the pod
Aug 19 03:02:02.925: INFO: Waiting for pod pod-secrets-a6c173d9-467a-4c6e-864e-ef642c1bd405 to disappear
Aug 19 03:02:02.964: INFO: Pod pod-secrets-a6c173d9-467a-4c6e-864e-ef642c1bd405 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:02:02.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5873" for this suite.
Aug 19 03:02:09.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:02:09.206: INFO: namespace secrets-5873 deletion completed in 6.231047082s

• [SLOW TEST:10.498 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:02:09.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-b3f879a1-65fc-4db5-93bc-dd78e07cfeaa
STEP: Creating a pod to test consume configMaps
Aug 19 03:02:09.560: INFO: Waiting up to 5m0s for pod "pod-configmaps-744c300b-58d3-4005-aad2-0788dba4305b" in namespace "configmap-4099" to be "success or failure"
Aug 19 03:02:09.641: INFO: Pod "pod-configmaps-744c300b-58d3-4005-aad2-0788dba4305b": Phase="Pending", Reason="", readiness=false. Elapsed: 81.528725ms
Aug 19 03:02:11.649: INFO: Pod "pod-configmaps-744c300b-58d3-4005-aad2-0788dba4305b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088874573s
Aug 19 03:02:13.655: INFO: Pod "pod-configmaps-744c300b-58d3-4005-aad2-0788dba4305b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.095209047s
STEP: Saw pod success
Aug 19 03:02:13.655: INFO: Pod "pod-configmaps-744c300b-58d3-4005-aad2-0788dba4305b" satisfied condition "success or failure"
Aug 19 03:02:13.659: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-744c300b-58d3-4005-aad2-0788dba4305b container configmap-volume-test: 
STEP: delete the pod
Aug 19 03:02:13.841: INFO: Waiting for pod pod-configmaps-744c300b-58d3-4005-aad2-0788dba4305b to disappear
Aug 19 03:02:14.113: INFO: Pod pod-configmaps-744c300b-58d3-4005-aad2-0788dba4305b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:02:14.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4099" for this suite.
Aug 19 03:02:20.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:02:20.331: INFO: namespace configmap-4099 deletion completed in 6.195255312s

• [SLOW TEST:11.122 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
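This is the defaultMode counterpart of the per-item mode tests above: one mode applied to every file projected from the ConfigMap unless an individual item overrides it. Only the volume stanza differs from the earlier sketches; the names are illustrative:

    volumes:
    - name: configmap-volume
      configMap:
        name: configmap-test-volume-example   # illustrative
        defaultMode: 0400                     # applied to every projected file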
SSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:02:20.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:02:20.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3367" for this suite.
Aug 19 03:02:26.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:02:26.565: INFO: namespace services-3367 deletion completed in 6.123102258s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.233 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
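Note that this spec creates no pods of its own; it only inspects the built-in "kubernetes" Service in the default namespace and asserts the API server is exposed over HTTPS (port 443). On a typical cluster that Service looks roughly like the following sketch; the targetPort in particular varies with how the apiserver was started:

apiVersion: v1
kind: Service
metadata:
  name: kubernetes
  namespace: default
spec:
  type: ClusterIP
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 6443    # apiserver secure port; cluster-dependent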
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:02:26.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 19 03:02:43.067: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 19 03:02:43.471: INFO: Pod pod-with-poststart-http-hook still exists
Aug 19 03:02:45.472: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 19 03:02:45.480: INFO: Pod pod-with-poststart-http-hook still exists
Aug 19 03:02:47.472: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 19 03:02:47.478: INFO: Pod pod-with-poststart-http-hook still exists
Aug 19 03:02:49.472: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 19 03:02:49.477: INFO: Pod pod-with-poststart-http-hook still exists
Aug 19 03:02:51.472: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 19 03:02:51.478: INFO: Pod pod-with-poststart-http-hook still exists
Aug 19 03:02:53.472: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 19 03:02:53.479: INFO: Pod pod-with-poststart-http-hook still exists
Aug 19 03:02:55.472: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 19 03:02:55.614: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:02:55.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7113" for this suite.
Aug 19 03:03:20.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:03:20.367: INFO: namespace container-lifecycle-hook-7113 deletion completed in 24.262213833s

• [SLOW TEST:53.797 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
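The pod under test registers a postStart HTTP hook that calls back into the handler pod created in BeforeEach; the long disappear-poll above is just graceful deletion. A sketch of the hook wiring, with a hard-coded handler address standing in for the IP the test discovers at runtime:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: k8s.gcr.io/pause:3.1      # illustrative; the test uses its own images
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart  # the handler records this request
          port: 8080
          host: 10.244.1.10          # handler pod IP; hypothetical value

The kubelet runs the postStart handler right after the container starts, and a failing handler gets the container killed subject to its restart policy; "check poststart hook" verifies the request arrived at the handler.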
SS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:03:20.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 03:03:48.567: INFO: Container started at 2020-08-19 03:03:25 +0000 UTC, pod became ready at 2020-08-19 03:03:46 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:03:48.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9061" for this suite.
Aug 19 03:04:10.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:04:10.906: INFO: namespace container-probe-9061 deletion completed in 22.327556277s

• [SLOW TEST:50.538 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
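The assertion here is purely about timing: the log shows the container started at 03:03:25 but the pod only became Ready at 03:03:46, so the kubelet honored the probe's initial delay and never restarted the container. A minimal readiness probe with such a delay (values illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo
spec:
  containers:
  - name: web
    image: nginx                  # illustrative; the test uses a test-webserver image
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 20     # pod must not report Ready before this elapses
      periodSeconds: 5

Readiness failures only remove the pod from Service endpoints; unlike liveness failures they never restart the container, hence "never restart" in the spec name.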
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:04:10.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 19 03:04:11.095: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3ea3cb8c-7adb-4065-9470-e8f715d49e75" in namespace "projected-7133" to be "success or failure"
Aug 19 03:04:11.124: INFO: Pod "downwardapi-volume-3ea3cb8c-7adb-4065-9470-e8f715d49e75": Phase="Pending", Reason="", readiness=false. Elapsed: 29.408565ms
Aug 19 03:04:13.130: INFO: Pod "downwardapi-volume-3ea3cb8c-7adb-4065-9470-e8f715d49e75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035445233s
Aug 19 03:04:15.136: INFO: Pod "downwardapi-volume-3ea3cb8c-7adb-4065-9470-e8f715d49e75": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04062113s
Aug 19 03:04:17.249: INFO: Pod "downwardapi-volume-3ea3cb8c-7adb-4065-9470-e8f715d49e75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.1542634s
STEP: Saw pod success
Aug 19 03:04:17.249: INFO: Pod "downwardapi-volume-3ea3cb8c-7adb-4065-9470-e8f715d49e75" satisfied condition "success or failure"
Aug 19 03:04:17.350: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-3ea3cb8c-7adb-4065-9470-e8f715d49e75 container client-container: 
STEP: delete the pod
Aug 19 03:04:17.505: INFO: Waiting for pod downwardapi-volume-3ea3cb8c-7adb-4065-9470-e8f715d49e75 to disappear
Aug 19 03:04:17.889: INFO: Pod downwardapi-volume-3ea3cb8c-7adb-4065-9470-e8f715d49e75 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:04:17.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7133" for this suite.
Aug 19 03:04:23.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:04:24.104: INFO: namespace projected-7133 deletion completed in 6.204601541s

• [SLOW TEST:13.196 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
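The downward API volume plugin writes pod and container metadata into files; for this variant the projected volume resolves a resourceFieldRef against the container's own CPU request. A sketch with illustrative names and values:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m            # file contains "250" (millicores)

The memory-request spec later in this run has the same shape with resource: requests.memory and a divisor such as 1Mi.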
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:04:24.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Aug 19 03:04:30.872: INFO: Successfully updated pod "annotationupdatef35a1fa4-89b8-4a81-a685-9b21c4d6f13e"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:04:34.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9419" for this suite.
Aug 19 03:04:59.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:05:00.032: INFO: namespace downward-api-9419 deletion completed in 25.079116995s

• [SLOW TEST:35.926 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
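Unlike the one-shot "success or failure" pods above, this spec keeps a pod running, mutates its annotations through the API (the "Successfully updated pod" line), and waits for the kubelet to rewrite the downward API file in place. The volume wiring, roughly:

apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo     # illustrative
  annotations:
    builder: alice
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations

Label and annotation updates propagate to downward API volume files on the kubelet's sync interval; environment variables populated from the same fieldRef would not be refreshed.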
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:05:00.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-5f8d9260-478a-4d5f-a602-e9170700a9c5
STEP: Creating a pod to test consume configMaps
Aug 19 03:05:01.073: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f63944bf-063d-42dc-8da8-5dd66ca7f952" in namespace "projected-5498" to be "success or failure"
Aug 19 03:05:01.394: INFO: Pod "pod-projected-configmaps-f63944bf-063d-42dc-8da8-5dd66ca7f952": Phase="Pending", Reason="", readiness=false. Elapsed: 320.668347ms
Aug 19 03:05:03.405: INFO: Pod "pod-projected-configmaps-f63944bf-063d-42dc-8da8-5dd66ca7f952": Phase="Pending", Reason="", readiness=false. Elapsed: 2.331256609s
Aug 19 03:05:05.412: INFO: Pod "pod-projected-configmaps-f63944bf-063d-42dc-8da8-5dd66ca7f952": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33859196s
Aug 19 03:05:07.419: INFO: Pod "pod-projected-configmaps-f63944bf-063d-42dc-8da8-5dd66ca7f952": Phase="Running", Reason="", readiness=true. Elapsed: 6.345456052s
Aug 19 03:05:09.436: INFO: Pod "pod-projected-configmaps-f63944bf-063d-42dc-8da8-5dd66ca7f952": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.362064146s
STEP: Saw pod success
Aug 19 03:05:09.436: INFO: Pod "pod-projected-configmaps-f63944bf-063d-42dc-8da8-5dd66ca7f952" satisfied condition "success or failure"
Aug 19 03:05:09.441: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-f63944bf-063d-42dc-8da8-5dd66ca7f952 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 19 03:05:09.464: INFO: Waiting for pod pod-projected-configmaps-f63944bf-063d-42dc-8da8-5dd66ca7f952 to disappear
Aug 19 03:05:09.468: INFO: Pod pod-projected-configmaps-f63944bf-063d-42dc-8da8-5dd66ca7f952 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:05:09.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5498" for this suite.
Aug 19 03:05:15.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:05:15.693: INFO: namespace projected-5498 deletion completed in 6.216267068s

• [SLOW TEST:15.655 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
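This is the same defaultMode check as the plain ConfigMap spec earlier, but through a projected volume, where defaultMode sits on the projected volume itself and the ConfigMap is one of its sources. Only the volumes stanza differs from the earlier sketch:

  volumes:
  - name: projected-configmap-volume
    projected:
      defaultMode: 0400
      sources:
      - configMap:
          name: cm-demo       # illustrative name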
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:05:15.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Aug 19 03:05:15.748: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:05:16.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5367" for this suite.
Aug 19 03:05:22.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:05:23.014: INFO: namespace kubectl-5367 deletion completed in 6.150365676s

• [SLOW TEST:7.320 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:05:23.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 19 03:05:23.611: INFO: Waiting up to 5m0s for pod "pod-e1c45030-8bee-498c-a7b0-6f26996b145e" in namespace "emptydir-3614" to be "success or failure"
Aug 19 03:05:23.658: INFO: Pod "pod-e1c45030-8bee-498c-a7b0-6f26996b145e": Phase="Pending", Reason="", readiness=false. Elapsed: 46.625849ms
Aug 19 03:05:25.717: INFO: Pod "pod-e1c45030-8bee-498c-a7b0-6f26996b145e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105979926s
Aug 19 03:05:27.732: INFO: Pod "pod-e1c45030-8bee-498c-a7b0-6f26996b145e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120923048s
Aug 19 03:05:29.743: INFO: Pod "pod-e1c45030-8bee-498c-a7b0-6f26996b145e": Phase="Running", Reason="", readiness=true. Elapsed: 6.131596753s
Aug 19 03:05:31.749: INFO: Pod "pod-e1c45030-8bee-498c-a7b0-6f26996b145e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.138022461s
STEP: Saw pod success
Aug 19 03:05:31.749: INFO: Pod "pod-e1c45030-8bee-498c-a7b0-6f26996b145e" satisfied condition "success or failure"
Aug 19 03:05:31.753: INFO: Trying to get logs from node iruya-worker2 pod pod-e1c45030-8bee-498c-a7b0-6f26996b145e container test-container: 
STEP: delete the pod
Aug 19 03:05:31.808: INFO: Waiting for pod pod-e1c45030-8bee-498c-a7b0-6f26996b145e to disappear
Aug 19 03:05:31.817: INFO: Pod pod-e1c45030-8bee-498c-a7b0-6f26996b145e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:05:31.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3614" for this suite.
Aug 19 03:05:37.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:05:38.009: INFO: namespace emptydir-3614 deletion completed in 6.182105752s

• [SLOW TEST:14.993 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
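The (root,0644,tmpfs) triple in the spec name encodes: run as root, expect files created with mode 0644, and back the emptyDir with tmpfs. The tmpfs part is just the volume's medium; an illustrative manifest:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hello > /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory        # tmpfs-backed; omit for node-disk-backed emptyDir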
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:05:38.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-d3621437-a44d-47de-b659-fbda58d3e756
STEP: Creating a pod to test consume secrets
Aug 19 03:05:38.139: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-da889213-3f2f-4e35-a5b6-8664ea44169a" in namespace "projected-6922" to be "success or failure"
Aug 19 03:05:38.160: INFO: Pod "pod-projected-secrets-da889213-3f2f-4e35-a5b6-8664ea44169a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.034474ms
Aug 19 03:05:40.166: INFO: Pod "pod-projected-secrets-da889213-3f2f-4e35-a5b6-8664ea44169a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026817112s
Aug 19 03:05:42.181: INFO: Pod "pod-projected-secrets-da889213-3f2f-4e35-a5b6-8664ea44169a": Phase="Running", Reason="", readiness=true. Elapsed: 4.041589401s
Aug 19 03:05:44.189: INFO: Pod "pod-projected-secrets-da889213-3f2f-4e35-a5b6-8664ea44169a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.049568558s
STEP: Saw pod success
Aug 19 03:05:44.189: INFO: Pod "pod-projected-secrets-da889213-3f2f-4e35-a5b6-8664ea44169a" satisfied condition "success or failure"
Aug 19 03:05:44.194: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-da889213-3f2f-4e35-a5b6-8664ea44169a container projected-secret-volume-test: 
STEP: delete the pod
Aug 19 03:05:44.237: INFO: Waiting for pod pod-projected-secrets-da889213-3f2f-4e35-a5b6-8664ea44169a to disappear
Aug 19 03:05:44.248: INFO: Pod pod-projected-secrets-da889213-3f2f-4e35-a5b6-8664ea44169a no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:05:44.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6922" for this suite.
Aug 19 03:05:50.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:05:50.423: INFO: namespace projected-6922 deletion completed in 6.167393384s

• [SLOW TEST:12.410 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
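Here the interesting parts are the pod-level securityContext (a non-root UID plus fsGroup, which group-owns the volume files) combined with a restrictive defaultMode on the projected secret. A sketch with illustrative IDs:

apiVersion: v1
kind: Secret
metadata:
  name: secret-demo
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  securityContext:
    runAsUser: 1000          # non-root
    fsGroup: 1001            # volume files get this GID
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -ln /etc/projected-secret-volume"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0440      # group-readable so the fsGroup user can read it
      sources:
      - secret:
          name: secret-demo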
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:05:50.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 19 03:05:50.536: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bf31f6df-8eab-4671-bd54-663f65547a72" in namespace "projected-6825" to be "success or failure"
Aug 19 03:05:50.542: INFO: Pod "downwardapi-volume-bf31f6df-8eab-4671-bd54-663f65547a72": Phase="Pending", Reason="", readiness=false. Elapsed: 5.799372ms
Aug 19 03:05:52.549: INFO: Pod "downwardapi-volume-bf31f6df-8eab-4671-bd54-663f65547a72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012734815s
Aug 19 03:05:54.556: INFO: Pod "downwardapi-volume-bf31f6df-8eab-4671-bd54-663f65547a72": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019774854s
Aug 19 03:05:56.570: INFO: Pod "downwardapi-volume-bf31f6df-8eab-4671-bd54-663f65547a72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.033059449s
STEP: Saw pod success
Aug 19 03:05:56.570: INFO: Pod "downwardapi-volume-bf31f6df-8eab-4671-bd54-663f65547a72" satisfied condition "success or failure"
Aug 19 03:05:56.773: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-bf31f6df-8eab-4671-bd54-663f65547a72 container client-container: 
STEP: delete the pod
Aug 19 03:05:56.837: INFO: Waiting for pod downwardapi-volume-bf31f6df-8eab-4671-bd54-663f65547a72 to disappear
Aug 19 03:05:56.987: INFO: Pod downwardapi-volume-bf31f6df-8eab-4671-bd54-663f65547a72 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:05:56.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6825" for this suite.
Aug 19 03:06:03.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:06:03.162: INFO: namespace projected-6825 deletion completed in 6.164189776s

• [SLOW TEST:12.738 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:06:03.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 03:06:03.638: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Aug 19 03:06:03.663: INFO: Number of nodes with available pods: 0
Aug 19 03:06:03.663: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Aug 19 03:06:03.795: INFO: Number of nodes with available pods: 0
Aug 19 03:06:03.795: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:06:04.851: INFO: Number of nodes with available pods: 0
Aug 19 03:06:04.851: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:06:05.801: INFO: Number of nodes with available pods: 0
Aug 19 03:06:05.801: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:06:07.119: INFO: Number of nodes with available pods: 0
Aug 19 03:06:07.119: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:06:07.802: INFO: Number of nodes with available pods: 0
Aug 19 03:06:07.803: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:06:08.804: INFO: Number of nodes with available pods: 0
Aug 19 03:06:08.804: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:06:09.936: INFO: Number of nodes with available pods: 0
Aug 19 03:06:09.936: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:06:10.870: INFO: Number of nodes with available pods: 0
Aug 19 03:06:10.870: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:06:11.803: INFO: Number of nodes with available pods: 1
Aug 19 03:06:11.803: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Aug 19 03:06:11.977: INFO: Number of nodes with available pods: 1
Aug 19 03:06:11.977: INFO: Number of running nodes: 0, number of available pods: 1
Aug 19 03:06:13.138: INFO: Number of nodes with available pods: 0
Aug 19 03:06:13.138: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Aug 19 03:06:13.383: INFO: Number of nodes with available pods: 0
Aug 19 03:06:13.383: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:06:14.389: INFO: Number of nodes with available pods: 0
Aug 19 03:06:14.389: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:06:15.391: INFO: Number of nodes with available pods: 0
Aug 19 03:06:15.391: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:06:16.391: INFO: Number of nodes with available pods: 0
Aug 19 03:06:16.391: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:06:17.527: INFO: Number of nodes with available pods: 0
Aug 19 03:06:17.527: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:06:18.401: INFO: Number of nodes with available pods: 0
Aug 19 03:06:18.401: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:06:19.390: INFO: Number of nodes with available pods: 0
Aug 19 03:06:19.390: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:06:20.395: INFO: Number of nodes with available pods: 1
Aug 19 03:06:20.395: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3185, will wait for the garbage collector to delete the pods
Aug 19 03:06:20.465: INFO: Deleting DaemonSet.extensions daemon-set took: 8.565509ms
Aug 19 03:06:20.765: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.792638ms
Aug 19 03:06:25.870: INFO: Number of nodes with available pods: 0
Aug 19 03:06:25.870: INFO: Number of running nodes: 0, number of available pods: 0
Aug 19 03:06:25.874: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3185/daemonsets","resourceVersion":"969610"},"items":null}

Aug 19 03:06:25.975: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3185/pods","resourceVersion":"969610"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:06:26.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3185" for this suite.
Aug 19 03:06:34.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:06:34.623: INFO: namespace daemonsets-3185 deletion completed in 8.370334472s

• [SLOW TEST:31.460 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
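The "complex daemon" choreography above is driven entirely by node labels: the DaemonSet carries a nodeSelector, so relabeling a node from blue to green is what schedules and then unschedules its pod. (The repeated "is running more than one daemon pod" lines while the available count is still 0 are the poll helper's generic mismatch message, not evidence of duplicate pods.) The shape of such a DaemonSet, roughly:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-demo
  updateStrategy:
    type: RollingUpdate      # the test switches to this mid-run
  template:
    metadata:
      labels:
        app: daemon-demo
    spec:
      nodeSelector:
        color: blue          # pods land only on nodes labeled color=blue
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1   # illustrative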
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:06:34.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Aug 19 03:06:35.019: INFO: Waiting up to 5m0s for pod "client-containers-d138adab-2c35-4d69-afab-f245d2e2c503" in namespace "containers-9850" to be "success or failure"
Aug 19 03:06:35.072: INFO: Pod "client-containers-d138adab-2c35-4d69-afab-f245d2e2c503": Phase="Pending", Reason="", readiness=false. Elapsed: 52.28456ms
Aug 19 03:06:37.204: INFO: Pod "client-containers-d138adab-2c35-4d69-afab-f245d2e2c503": Phase="Pending", Reason="", readiness=false. Elapsed: 2.185119301s
Aug 19 03:06:39.353: INFO: Pod "client-containers-d138adab-2c35-4d69-afab-f245d2e2c503": Phase="Pending", Reason="", readiness=false. Elapsed: 4.334013892s
Aug 19 03:06:41.361: INFO: Pod "client-containers-d138adab-2c35-4d69-afab-f245d2e2c503": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.341733359s
STEP: Saw pod success
Aug 19 03:06:41.361: INFO: Pod "client-containers-d138adab-2c35-4d69-afab-f245d2e2c503" satisfied condition "success or failure"
Aug 19 03:06:41.366: INFO: Trying to get logs from node iruya-worker2 pod client-containers-d138adab-2c35-4d69-afab-f245d2e2c503 container test-container: 
STEP: delete the pod
Aug 19 03:06:41.451: INFO: Waiting for pod client-containers-d138adab-2c35-4d69-afab-f245d2e2c503 to disappear
Aug 19 03:06:41.747: INFO: Pod client-containers-d138adab-2c35-4d69-afab-f245d2e2c503 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:06:41.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9850" for this suite.
Aug 19 03:06:47.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:06:47.923: INFO: namespace containers-9850 deletion completed in 6.168454133s

• [SLOW TEST:13.296 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
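"Override the image's default command (docker entrypoint)" maps onto the pod API as follows: command replaces the image's ENTRYPOINT and args replaces its CMD. A minimal illustration:

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["echo", "entrypoint overridden"]   # replaces ENTRYPOINT; add args: to replace CMD as well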
SSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:06:47.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Aug 19 03:06:52.115: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Aug 19 03:07:08.573: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:07:08.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-442" for this suite.
Aug 19 03:07:14.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:07:15.321: INFO: namespace pods-442 deletion completed in 6.727794021s

• [SLOW TEST:27.396 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
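This spec drives deletion through a local kubectl proxy (hence the second "Asynchronously running ... proxy" line) and issues the DELETE with an explicit grace period, then confirms the kubelet observed the termination notice before the deadline. The per-pod default for that deadline lives in the spec:

apiVersion: v1
kind: Pod
metadata:
  name: graceful-demo
spec:
  terminationGracePeriodSeconds: 30   # the default; a DELETE request may override it
  containers:
  - name: sleeper
    image: busybox
    command: ["sleep", "3600"]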
SSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:07:15.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-fc4c1619-024c-4639-812c-b6a573981135 in namespace container-probe-9989
Aug 19 03:07:19.499: INFO: Started pod liveness-fc4c1619-024c-4639-812c-b6a573981135 in namespace container-probe-9989
STEP: checking the pod's current state and verifying that restartCount is present
Aug 19 03:07:19.564: INFO: Initial restart count of pod liveness-fc4c1619-024c-4639-812c-b6a573981135 is 0
Aug 19 03:07:40.108: INFO: Restart count of pod container-probe-9989/liveness-fc4c1619-024c-4639-812c-b6a573981135 is now 1 (20.544558418s elapsed)
Aug 19 03:08:00.178: INFO: Restart count of pod container-probe-9989/liveness-fc4c1619-024c-4639-812c-b6a573981135 is now 2 (40.614497897s elapsed)
Aug 19 03:08:20.246: INFO: Restart count of pod container-probe-9989/liveness-fc4c1619-024c-4639-812c-b6a573981135 is now 3 (1m0.682032952s elapsed)
Aug 19 03:08:42.519: INFO: Restart count of pod container-probe-9989/liveness-fc4c1619-024c-4639-812c-b6a573981135 is now 4 (1m22.955387852s elapsed)
Aug 19 03:09:45.169: INFO: Restart count of pod container-probe-9989/liveness-fc4c1619-024c-4639-812c-b6a573981135 is now 5 (2m25.60539793s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:09:45.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9989" for this suite.
Aug 19 03:09:53.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:09:53.632: INFO: namespace container-probe-9989 deletion completed in 8.273890634s

• [SLOW TEST:158.309 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
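Reading the timestamps above: restarts land roughly every 20s at first and then stretch out (1m22s, 2m25s) because the kubelet backs off exponentially on crash-looping containers; the assertion is only that restartCount never decreases. A liveness probe that forces this pattern (values illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: liveness
    image: busybox
    command: ["sh", "-c", "touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]   # fails once the file is removed
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1

Each liveness failure kills and restarts the container per the pod's restartPolicy, incrementing status.containerStatuses[].restartCount.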
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:09:53.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-6596
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-6596
STEP: Deleting pre-stop pod
Aug 19 03:10:06.895: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:10:06.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-6596" for this suite.
Aug 19 03:10:46.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:10:46.654: INFO: namespace prestop-6596 deletion completed in 38.976079132s

• [SLOW TEST:53.019 seconds]
[k8s.io] [sig-node] PreStop
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
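The JSON blob above is the server pod's self-report: "prestop": 1 means the tester pod's preStop hook fired exactly once when the tester was deleted (the "StillContactingPeers" noise is incidental to the assertion). The hook targets the server pod created first; a sketch with the server address hard-coded, where the real test discovers it at runtime:

apiVersion: v1
kind: Pod
metadata:
  name: tester
spec:
  containers:
  - name: tester
    image: busybox
    command: ["sleep", "600"]
    lifecycle:
      preStop:
        exec:
          command: ["wget", "-qO-", "http://10.244.1.20:8080/prestop"]  # server pod IP:port; hypothetical

preStop handlers run before the container receives SIGTERM and count against terminationGracePeriodSeconds.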
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:10:46.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 19 03:10:58.637: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:10:58.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4740" for this suite.
Aug 19 03:11:05.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:11:05.197: INFO: namespace container-runtime-4740 deletion completed in 6.172953254s

• [SLOW TEST:18.540 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
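The odd-looking line `Expected: &{} to match Container's Termination Message:  --` is the empty-string check succeeding: the container exited 0, and FallbackToLogsOnError only substitutes container logs for the termination message when the container fails, so a succeeding pod that writes nothing to the message path must surface an empty message. In manifest form:

apiVersion: v1
kind: Pod
metadata:
  name: termination-demo
spec:
  restartPolicy: Never
  containers:
  - name: term
    image: busybox
    command: ["sh", "-c", "echo some log line; exit 0"]   # succeeds; writes only to stdout
    terminationMessagePath: /dev/termination-log          # the default path
    terminationMessagePolicy: FallbackToLogsOnError       # logs are consulted only on failure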
SSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:11:05.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-4883/secret-test-1b615bd1-f0e4-451a-87c1-c8fc0eb686c3
STEP: Creating a pod to test consume secrets
Aug 19 03:11:05.341: INFO: Waiting up to 5m0s for pod "pod-configmaps-9ff5410b-af1b-44ed-b365-28a65eca97f3" in namespace "secrets-4883" to be "success or failure"
Aug 19 03:11:05.399: INFO: Pod "pod-configmaps-9ff5410b-af1b-44ed-b365-28a65eca97f3": Phase="Pending", Reason="", readiness=false. Elapsed: 57.811199ms
Aug 19 03:11:07.406: INFO: Pod "pod-configmaps-9ff5410b-af1b-44ed-b365-28a65eca97f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065308199s
Aug 19 03:11:09.447: INFO: Pod "pod-configmaps-9ff5410b-af1b-44ed-b365-28a65eca97f3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106133484s
Aug 19 03:11:11.453: INFO: Pod "pod-configmaps-9ff5410b-af1b-44ed-b365-28a65eca97f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.11231389s
STEP: Saw pod success
Aug 19 03:11:11.453: INFO: Pod "pod-configmaps-9ff5410b-af1b-44ed-b365-28a65eca97f3" satisfied condition "success or failure"
Aug 19 03:11:11.458: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-9ff5410b-af1b-44ed-b365-28a65eca97f3 container env-test: 
STEP: delete the pod
Aug 19 03:11:11.616: INFO: Waiting for pod pod-configmaps-9ff5410b-af1b-44ed-b365-28a65eca97f3 to disappear
Aug 19 03:11:11.651: INFO: Pod pod-configmaps-9ff5410b-af1b-44ed-b365-28a65eca97f3 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:11:11.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4883" for this suite.
Aug 19 03:11:17.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:11:17.972: INFO: namespace secrets-4883 deletion completed in 6.305451846s

• [SLOW TEST:12.773 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
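Despite the pod-configmaps-* name the framework generates here, this spec wires a Secret key into an environment variable and asserts the container sees the value. The wiring, with illustrative names:

apiVersion: v1
kind: Secret
metadata:
  name: secret-demo
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: env-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-demo
          key: data-1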
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:11:17.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Aug 19 03:11:18.188: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5163,SelfLink:/api/v1/namespaces/watch-5163/configmaps/e2e-watch-test-watch-closed,UID:8aa72830-d03a-4cda-9dad-0d14ef098898,ResourceVersion:970390,Generation:0,CreationTimestamp:2020-08-19 03:11:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 19 03:11:18.189: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5163,SelfLink:/api/v1/namespaces/watch-5163/configmaps/e2e-watch-test-watch-closed,UID:8aa72830-d03a-4cda-9dad-0d14ef098898,ResourceVersion:970391,Generation:0,CreationTimestamp:2020-08-19 03:11:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Aug 19 03:11:18.275: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5163,SelfLink:/api/v1/namespaces/watch-5163/configmaps/e2e-watch-test-watch-closed,UID:8aa72830-d03a-4cda-9dad-0d14ef098898,ResourceVersion:970393,Generation:0,CreationTimestamp:2020-08-19 03:11:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 19 03:11:18.277: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5163,SelfLink:/api/v1/namespaces/watch-5163/configmaps/e2e-watch-test-watch-closed,UID:8aa72830-d03a-4cda-9dad-0d14ef098898,ResourceVersion:970394,Generation:0,CreationTimestamp:2020-08-19 03:11:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:11:18.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5163" for this suite.
Aug 19 03:11:24.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:11:24.546: INFO: namespace watch-5163 deletion completed in 6.243076033s

• [SLOW TEST:6.570 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
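
The restart semantics verified above: a watch opened with ListOptions.ResourceVersion set to the last version observed replays every change made after that point, so events that occurred while no watch was open are not lost. A minimal sketch, assuming a reachable kubeconfig and the context-free Watch signature of v1.15-era client-go (newer releases take a context.Context first):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        opts := metav1.ListOptions{LabelSelector: "watch-this-configmap=watch-closed-and-restarted"}

        // First watch: remember the ResourceVersion of the last event seen,
        // then close the watch (the spec closes after two notifications).
        w, err := cs.CoreV1().ConfigMaps("default").Watch(opts)
        if err != nil {
            panic(err)
        }
        var lastRV string
        for ev := range w.ResultChan() {
            if cm, ok := ev.Object.(*corev1.ConfigMap); ok {
                lastRV = cm.ResourceVersion
                w.Stop() // one event is enough for this sketch
            }
        }

        // Second watch: resume from lastRV. The MODIFIED/DELETED events that
        // happened while no watch was open are replayed rather than lost.
        opts.ResourceVersion = lastRV
        w2, err := cs.CoreV1().ConfigMaps("default").Watch(opts)
        if err != nil {
            panic(err)
        }
        for ev := range w2.ResultChan() {
            if cm, ok := ev.Object.(*corev1.ConfigMap); ok {
                fmt.Printf("Got : %s rv=%s data=%v\n", ev.Type, cm.ResourceVersion, cm.Data)
            }
        }
    }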
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:11:24.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Aug 19 03:11:24.683: INFO: Waiting up to 5m0s for pod "var-expansion-6a847640-46ed-44c0-8d21-ffc555f49c92" in namespace "var-expansion-1524" to be "success or failure"
Aug 19 03:11:24.758: INFO: Pod "var-expansion-6a847640-46ed-44c0-8d21-ffc555f49c92": Phase="Pending", Reason="", readiness=false. Elapsed: 75.067202ms
Aug 19 03:11:26.765: INFO: Pod "var-expansion-6a847640-46ed-44c0-8d21-ffc555f49c92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081598429s
Aug 19 03:11:28.771: INFO: Pod "var-expansion-6a847640-46ed-44c0-8d21-ffc555f49c92": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087573823s
Aug 19 03:11:30.777: INFO: Pod "var-expansion-6a847640-46ed-44c0-8d21-ffc555f49c92": Phase="Running", Reason="", readiness=true. Elapsed: 6.093837286s
Aug 19 03:11:32.800: INFO: Pod "var-expansion-6a847640-46ed-44c0-8d21-ffc555f49c92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.116433604s
STEP: Saw pod success
Aug 19 03:11:32.800: INFO: Pod "var-expansion-6a847640-46ed-44c0-8d21-ffc555f49c92" satisfied condition "success or failure"
Aug 19 03:11:32.804: INFO: Trying to get logs from node iruya-worker pod var-expansion-6a847640-46ed-44c0-8d21-ffc555f49c92 container dapi-container: 
STEP: delete the pod
Aug 19 03:11:32.869: INFO: Waiting for pod var-expansion-6a847640-46ed-44c0-8d21-ffc555f49c92 to disappear
Aug 19 03:11:32.877: INFO: Pod var-expansion-6a847640-46ed-44c0-8d21-ffc555f49c92 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:11:32.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1524" for this suite.
Aug 19 03:11:38.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:11:39.071: INFO: namespace var-expansion-1524 deletion completed in 6.186429992s

• [SLOW TEST:14.524 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
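
The mechanism under test: $(VAR) references in a container's command and args are expanded by the kubelet from the container's own environment before the process starts, with no shell involved. A sketch of the relevant PodSpec, imports as in the earlier sketches (image and names illustrative):

    func argsExpansionPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "dapi-container",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"echo"},
                    // Expanded by the kubelet against Env below, not by a
                    // shell: the container receives the literal argument
                    // "test-value".
                    Args: []string{"$(MY_VAR)"},
                    Env:  []corev1.EnvVar{{Name: "MY_VAR", Value: "test-value"}},
                }},
            },
        }
    }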
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:11:39.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 19 03:11:39.182: INFO: Waiting up to 5m0s for pod "pod-07cd5b66-1044-43ae-8753-a0b5e5a0516c" in namespace "emptydir-2166" to be "success or failure"
Aug 19 03:11:39.225: INFO: Pod "pod-07cd5b66-1044-43ae-8753-a0b5e5a0516c": Phase="Pending", Reason="", readiness=false. Elapsed: 41.892328ms
Aug 19 03:11:41.232: INFO: Pod "pod-07cd5b66-1044-43ae-8753-a0b5e5a0516c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048882538s
Aug 19 03:11:43.237: INFO: Pod "pod-07cd5b66-1044-43ae-8753-a0b5e5a0516c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054031663s
Aug 19 03:11:45.244: INFO: Pod "pod-07cd5b66-1044-43ae-8753-a0b5e5a0516c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.060845481s
STEP: Saw pod success
Aug 19 03:11:45.244: INFO: Pod "pod-07cd5b66-1044-43ae-8753-a0b5e5a0516c" satisfied condition "success or failure"
Aug 19 03:11:45.249: INFO: Trying to get logs from node iruya-worker2 pod pod-07cd5b66-1044-43ae-8753-a0b5e5a0516c container test-container: 
STEP: delete the pod
Aug 19 03:11:45.329: INFO: Waiting for pod pod-07cd5b66-1044-43ae-8753-a0b5e5a0516c to disappear
Aug 19 03:11:45.338: INFO: Pod pod-07cd5b66-1044-43ae-8753-a0b5e5a0516c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:11:45.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2166" for this suite.
Aug 19 03:11:51.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:11:51.488: INFO: namespace emptydir-2166 deletion completed in 6.14218383s

• [SLOW TEST:12.416 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
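
The (root,0666,default) tuple in the spec name reads: run as root, expect mode 0666 on the created file, and use the default medium (node disk rather than tmpfs). A sketch of the volume wiring, imports as above, with busybox standing in for the suite's mount-test image:

    func emptyDirPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        // StorageMediumDefault ("") means node storage;
                        // StorageMediumMemory would select tmpfs instead.
                        EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
                    },
                }},
                Containers: []corev1.Container{{
                    Name:  "test-container",
                    Image: "docker.io/library/busybox:1.29",
                    // Create a file, force mode 0666, print the mode back.
                    Command:      []string{"sh", "-c", "touch /test/f && chmod 0666 /test/f && stat -c %a /test/f"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test"}},
                }},
            },
        }
    }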
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:11:51.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-f3142ac9-71c0-4451-8784-606afb8557b6
STEP: Creating a pod to test consume configMaps
Aug 19 03:11:51.898: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0b047f1e-33b4-474f-8503-a2699c1f0a32" in namespace "projected-6133" to be "success or failure"
Aug 19 03:11:51.934: INFO: Pod "pod-projected-configmaps-0b047f1e-33b4-474f-8503-a2699c1f0a32": Phase="Pending", Reason="", readiness=false. Elapsed: 35.438376ms
Aug 19 03:11:53.944: INFO: Pod "pod-projected-configmaps-0b047f1e-33b4-474f-8503-a2699c1f0a32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045853779s
Aug 19 03:11:55.951: INFO: Pod "pod-projected-configmaps-0b047f1e-33b4-474f-8503-a2699c1f0a32": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053053423s
Aug 19 03:11:57.959: INFO: Pod "pod-projected-configmaps-0b047f1e-33b4-474f-8503-a2699c1f0a32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.061038097s
STEP: Saw pod success
Aug 19 03:11:57.960: INFO: Pod "pod-projected-configmaps-0b047f1e-33b4-474f-8503-a2699c1f0a32" satisfied condition "success or failure"
Aug 19 03:11:57.964: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-0b047f1e-33b4-474f-8503-a2699c1f0a32 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 19 03:11:58.022: INFO: Waiting for pod pod-projected-configmaps-0b047f1e-33b4-474f-8503-a2699c1f0a32 to disappear
Aug 19 03:11:58.026: INFO: Pod pod-projected-configmaps-0b047f1e-33b4-474f-8503-a2699c1f0a32 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:11:58.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6133" for this suite.
Aug 19 03:12:04.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:12:04.178: INFO: namespace projected-6133 deletion completed in 6.14068552s

• [SLOW TEST:12.687 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
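
Here the projected volume wraps a single ConfigMap source and the pod runs as a non-root UID, so the mounted file must still be readable at that UID. A sketch of the relevant pieces, imports as above (the UID and names are illustrative):

    func int64Ptr(i int64) *int64 { return &i }

    func projectedConfigMapPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy:   corev1.RestartPolicyNever,
                SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1000)},
                Volumes: []corev1.Volume{{
                    Name: "projected-configmap-volume",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                ConfigMap: &corev1.ConfigMapProjection{
                                    LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
                                },
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:         "projected-configmap-volume-test",
                    Image:        "docker.io/library/busybox:1.29",
                    Command:      []string{"sh", "-c", "cat /etc/projected-configmap-volume/*"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected-configmap-volume"}},
                }},
            },
        }
    }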
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:12:04.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Aug 19 03:12:04.404: INFO: PodSpec: initContainers in spec.initContainers
Aug 19 03:13:02.137: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-9f7a704f-a6f7-4349-a153-aad52c95f426", GenerateName:"", Namespace:"init-container-4493", SelfLink:"/api/v1/namespaces/init-container-4493/pods/pod-init-9f7a704f-a6f7-4349-a153-aad52c95f426", UID:"b423ae6b-97f6-43d0-9757-bfd0d2b594bd", ResourceVersion:"970701", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63733403524, loc:(*time.Location)(0x67985e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"403402967"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-ks58j", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0x7ab8fe0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ks58j", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ks58j", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ks58j", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x8f37428), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x68f1b30), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x8f374b0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x8f374d0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0x8f374d8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0x8f374dc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733403524, loc:(*time.Location)(0x67985e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, 
v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733403524, loc:(*time.Location)(0x67985e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733403524, loc:(*time.Location)(0x67985e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733403524, loc:(*time.Location)(0x67985e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.5", PodIP:"10.244.2.99", StartTime:(*v1.Time)(0x7ab90c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0x7ab90e0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x82dea50)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://aa4cc9a4d2760c76f732bd218f21dbfd154e255ef427755d7d34a79f73a3fdcb"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x8fa7eb0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x8fa7ea0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:13:02.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4493" for this suite.
Aug 19 03:13:24.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:13:24.436: INFO: namespace init-container-4493 deletion completed in 22.238308588s

• [SLOW TEST:80.257 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
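
Condensed from the spec dump above: init containers run strictly in order, and under RestartPolicy Always the kubelet keeps restarting the failing init1 (its RestartCount is already 3 in the dump) while init2 and the app container run1 never start. The shape of that pod, trimmed to the essentials, imports as above:

    func initFailPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyAlways,
                InitContainers: []corev1.Container{
                    // init1 always exits non-zero; the kubelet restarts it
                    // with backoff and never advances to init2.
                    {Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
                    {Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
                },
                Containers: []corev1.Container{
                    // run1 stays Waiting (PodInitializing), as the status
                    // dump above shows.
                    {Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
                },
            },
        }
    }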
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:13:24.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Aug 19 03:13:24.583: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 19 03:13:24.618: INFO: Waiting for terminating namespaces to be deleted...
Aug 19 03:13:24.623: INFO: 
Logging pods the kubelet thinks are on node iruya-worker before test
Aug 19 03:13:24.636: INFO: kindnet-nkf5n from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container status recorded)
Aug 19 03:13:24.637: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 19 03:13:24.637: INFO: kube-proxy-5zw8s from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container status recorded)
Aug 19 03:13:24.637: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 19 03:13:24.637: INFO: 
Logging pods the kubelet thinks are on node iruya-worker2 before test
Aug 19 03:13:24.679: INFO: kindnet-xsdzz from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container status recorded)
Aug 19 03:13:24.679: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 19 03:13:24.679: INFO: kube-proxy-b98qt from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container status recorded)
Aug 19 03:13:24.679: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.162c8cd2c65999d7], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:13:25.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4158" for this suite.
Aug 19 03:13:33.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:13:33.934: INFO: namespace sched-pred-4158 deletion completed in 8.203101873s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:9.494 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
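
The predicate under test: a pod whose nodeSelector matches no node label stays Pending, producing exactly the FailedScheduling event quoted above. A sketch of such a pod, imports as above (the label key/value are deliberately bogus):

    func unschedulablePod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
            Spec: corev1.PodSpec{
                // No node carries this label, so the pod stays Pending with
                // "0/3 nodes are available: 3 node(s) didn't match node selector."
                NodeSelector: map[string]string{"label": "nonempty"},
                Containers: []corev1.Container{{
                    Name:  "restricted",
                    Image: "k8s.gcr.io/pause:3.1",
                }},
            },
        }
    }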
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have a terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:13:33.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have a terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:13:42.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3828" for this suite.
Aug 19 03:13:54.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:13:54.854: INFO: namespace kubelet-test-3828 deletion completed in 12.745846232s

• [SLOW TEST:20.917 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have a terminated reason [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
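
What this spec asserts, roughly: for a container whose command always fails, the kubelet records a terminated state whose Reason is "Error" along with a non-zero exit code. A sketch of reading that status back with the v1.15-era Get signature (cs is a *kubernetes.Clientset as in the watch sketch earlier; the pod name is hypothetical):

    func printTerminatedReason(cs *kubernetes.Clientset) {
        pod, err := cs.CoreV1().Pods("kubelet-test-3828").Get("bin-false-pod", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, st := range pod.Status.ContainerStatuses {
            if t := st.State.Terminated; t != nil {
                // For a command like /bin/false the kubelet reports
                // Reason "Error" with a non-zero ExitCode.
                fmt.Printf("%s terminated: reason=%s exitCode=%d\n", st.Name, t.Reason, t.ExitCode)
            }
        }
    }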
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:13:54.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:14:52.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8763" for this suite.
Aug 19 03:15:00.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:15:00.815: INFO: namespace container-runtime-8763 deletion completed in 8.710555002s

• [SLOW TEST:65.958 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
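
The three container names above encode the restart policy being probed: terminate-cmd-rpa (Always), -rpof (OnFailure), -rpn (Never). The expectations reduce to: Always restarts regardless of exit code, OnFailure restarts only on non-zero exit, and Never runs the container exactly once, leaving the pod Succeeded or Failed. A sketch of the Never case, imports as above:

    func terminateNeverPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "terminate-cmd-rpn-demo"},
            Spec: corev1.PodSpec{
                // Never: the container runs exactly once; exit 0 leaves the
                // pod Succeeded, non-zero leaves it Failed, RestartCount 0.
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "terminate-cmd-rpn",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"sh", "-c", "exit 0"},
                }},
            },
        }
    }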
------------------------------
SSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:15:00.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-6620972d-55ee-4888-b2f2-9b97f14878fd
STEP: Creating configMap with name cm-test-opt-upd-26244dbd-3dec-4b95-bf7e-1edbd5869958
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-6620972d-55ee-4888-b2f2-9b97f14878fd
STEP: Updating configmap cm-test-opt-upd-26244dbd-3dec-4b95-bf7e-1edbd5869958
STEP: Creating configMap with name cm-test-opt-create-c3865982-5823-4a7e-b613-f828d41df3af
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:15:09.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6220" for this suite.
Aug 19 03:15:31.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:15:31.828: INFO: namespace configmap-6220 deletion completed in 22.135046848s

• [SLOW TEST:31.013 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
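
The three ConfigMaps above cover the three optional-volume cases: deleting a source empties its mount, updating a source rewrites the mounted files, and creating a previously absent optional source makes its files appear. The wiring that marks a ConfigMap volume optional, imports as above (the ConfigMap name is illustrative):

    func boolPtr(b bool) *bool { return &b }

    // An optional ConfigMap volume mounts even while its source is absent;
    // the kubelet reconciles the mounted files as the ConfigMap is created,
    // updated, or deleted, which is what "waiting to observe update in
    // volume" polls for.
    var optionalCMVolume = corev1.Volume{
        Name: "cm-volume",
        VolumeSource: corev1.VolumeSource{
            ConfigMap: &corev1.ConfigMapVolumeSource{
                LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-create"},
                Optional:             boolPtr(true),
            },
        },
    }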
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:15:31.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Aug 19 03:15:31.933: INFO: Waiting up to 5m0s for pod "var-expansion-c5d1565e-7b71-4ae7-a778-e5037f4968b3" in namespace "var-expansion-3019" to be "success or failure"
Aug 19 03:15:31.950: INFO: Pod "var-expansion-c5d1565e-7b71-4ae7-a778-e5037f4968b3": Phase="Pending", Reason="", readiness=false. Elapsed: 16.735851ms
Aug 19 03:15:33.961: INFO: Pod "var-expansion-c5d1565e-7b71-4ae7-a778-e5037f4968b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027741946s
Aug 19 03:15:35.966: INFO: Pod "var-expansion-c5d1565e-7b71-4ae7-a778-e5037f4968b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033232823s
STEP: Saw pod success
Aug 19 03:15:35.967: INFO: Pod "var-expansion-c5d1565e-7b71-4ae7-a778-e5037f4968b3" satisfied condition "success or failure"
Aug 19 03:15:35.972: INFO: Trying to get logs from node iruya-worker pod var-expansion-c5d1565e-7b71-4ae7-a778-e5037f4968b3 container dapi-container: 
STEP: delete the pod
Aug 19 03:15:37.035: INFO: Waiting for pod var-expansion-c5d1565e-7b71-4ae7-a778-e5037f4968b3 to disappear
Aug 19 03:15:37.224: INFO: Pod var-expansion-c5d1565e-7b71-4ae7-a778-e5037f4968b3 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:15:37.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3019" for this suite.
Aug 19 03:15:43.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:15:43.628: INFO: namespace var-expansion-3019 deletion completed in 6.392323448s

• [SLOW TEST:11.795 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
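
The composition rule being tested: a later entry in a container's env list may reference an earlier one as $(NAME), and the kubelet substitutes the resolved value. A sketch of such a container, imports as above (values illustrative):

    var composedEnvContainer = corev1.Container{
        Name:    "dapi-container",
        Image:   "docker.io/library/busybox:1.29",
        Command: []string{"sh", "-c", "env"},
        Env: []corev1.EnvVar{
            {Name: "FOO", Value: "foo-value"},
            // $(FOO) resolves against earlier entries in this list, so the
            // container sees BAR=foo-value;;foo-value.
            {Name: "BAR", Value: "$(FOO);;$(FOO)"},
        },
    }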
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services are included in cluster-info  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:15:43.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services are included in cluster-info  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Aug 19 03:15:43.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Aug 19 03:15:48.035: INFO: stderr: ""
Aug 19 03:15:48.035: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:35471\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:35471/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:15:48.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-945" for this suite.
Aug 19 03:15:54.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:15:54.163: INFO: namespace kubectl-945 deletion completed in 6.119072382s

• [SLOW TEST:10.533 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services are included in cluster-info  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:15:54.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 19 03:15:54.255: INFO: Waiting up to 5m0s for pod "downwardapi-volume-85261c93-f668-4ac4-b702-b46e8dd3e4b3" in namespace "downward-api-1207" to be "success or failure"
Aug 19 03:15:54.282: INFO: Pod "downwardapi-volume-85261c93-f668-4ac4-b702-b46e8dd3e4b3": Phase="Pending", Reason="", readiness=false. Elapsed: 26.759073ms
Aug 19 03:15:56.287: INFO: Pod "downwardapi-volume-85261c93-f668-4ac4-b702-b46e8dd3e4b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032113113s
Aug 19 03:15:58.294: INFO: Pod "downwardapi-volume-85261c93-f668-4ac4-b702-b46e8dd3e4b3": Phase="Running", Reason="", readiness=true. Elapsed: 4.038996896s
Aug 19 03:16:00.299: INFO: Pod "downwardapi-volume-85261c93-f668-4ac4-b702-b46e8dd3e4b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.044281045s
STEP: Saw pod success
Aug 19 03:16:00.299: INFO: Pod "downwardapi-volume-85261c93-f668-4ac4-b702-b46e8dd3e4b3" satisfied condition "success or failure"
Aug 19 03:16:00.304: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-85261c93-f668-4ac4-b702-b46e8dd3e4b3 container client-container: 
STEP: delete the pod
Aug 19 03:16:00.338: INFO: Waiting for pod downwardapi-volume-85261c93-f668-4ac4-b702-b46e8dd3e4b3 to disappear
Aug 19 03:16:00.499: INFO: Pod downwardapi-volume-85261c93-f668-4ac4-b702-b46e8dd3e4b3 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:16:00.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1207" for this suite.
Aug 19 03:16:06.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:16:06.732: INFO: namespace downward-api-1207 deletion completed in 6.224043957s

• [SLOW TEST:12.569 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
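
A downward API volume exposes the container's own CPU request as a file; "provide container's cpu request" means the test reads that file back. A sketch of the volume item, imports as above plus k8s.io/apimachinery/pkg/api/resource; the divisor controls the unit and is an assumed 1m here:

    import "k8s.io/apimachinery/pkg/api/resource"

    var podinfoVolume = corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            DownwardAPI: &corev1.DownwardAPIVolumeSource{
                Items: []corev1.DownwardAPIVolumeFile{{
                    Path: "cpu_request",
                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                        ContainerName: "client-container",
                        Resource:      "requests.cpu",
                        // With a 1m divisor, a 250m request is written to
                        // the file as the string "250".
                        Divisor: resource.MustParse("1m"),
                    },
                }},
            },
        },
    }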
------------------------------
SSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:16:06.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-9283, will wait for the garbage collector to delete the pods
Aug 19 03:16:12.876: INFO: Deleting Job.batch foo took: 5.295319ms
Aug 19 03:16:13.176: INFO: Terminating Job.batch foo pods took: 300.615593ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:16:46.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9283" for this suite.
Aug 19 03:16:52.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:16:52.716: INFO: namespace job-9283 deletion completed in 6.125340578s

• [SLOW TEST:45.982 seconds]
[sig-apps] Job
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
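
"Will wait for the garbage collector to delete the pods" corresponds, roughly, to deleting the Job with a background propagation policy: the Job object is removed first and the GC then reaps the pods that carry its ownerReference. A sketch with the v1.15-era Delete signature (cs as in the earlier sketches):

    func deleteJobAndLetGCReap(cs *kubernetes.Clientset) {
        policy := metav1.DeletePropagationBackground
        // The Job "foo" is deleted immediately; its pods carry an
        // ownerReference to the Job, so the garbage collector terminates
        // them afterwards, which the spec then waits out.
        err := cs.BatchV1().Jobs("job-9283").Delete("foo", &metav1.DeleteOptions{
            PropagationPolicy: &policy,
        })
        if err != nil {
            panic(err)
        }
    }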
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:16:52.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1612
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-1612
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-1612
Aug 19 03:16:52.840: INFO: Found 0 stateful pods, waiting for 1
Aug 19 03:17:02.847: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Aug 19 03:17:02.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 19 03:17:04.220: INFO: stderr: "I0819 03:17:04.096357    1901 log.go:172] (0x268eb60) (0x268ed20) Create stream\nI0819 03:17:04.099751    1901 log.go:172] (0x268eb60) (0x268ed20) Stream added, broadcasting: 1\nI0819 03:17:04.112640    1901 log.go:172] (0x268eb60) Reply frame received for 1\nI0819 03:17:04.113151    1901 log.go:172] (0x268eb60) (0x268fb20) Create stream\nI0819 03:17:04.113211    1901 log.go:172] (0x268eb60) (0x268fb20) Stream added, broadcasting: 3\nI0819 03:17:04.114790    1901 log.go:172] (0x268eb60) Reply frame received for 3\nI0819 03:17:04.114990    1901 log.go:172] (0x268eb60) (0x24c0620) Create stream\nI0819 03:17:04.115052    1901 log.go:172] (0x268eb60) (0x24c0620) Stream added, broadcasting: 5\nI0819 03:17:04.116296    1901 log.go:172] (0x268eb60) Reply frame received for 5\nI0819 03:17:04.171835    1901 log.go:172] (0x268eb60) Data frame received for 5\nI0819 03:17:04.172181    1901 log.go:172] (0x24c0620) (5) Data frame handling\nI0819 03:17:04.172991    1901 log.go:172] (0x24c0620) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0819 03:17:04.201067    1901 log.go:172] (0x268eb60) Data frame received for 3\nI0819 03:17:04.201251    1901 log.go:172] (0x268fb20) (3) Data frame handling\nI0819 03:17:04.201440    1901 log.go:172] (0x268fb20) (3) Data frame sent\nI0819 03:17:04.201613    1901 log.go:172] (0x268eb60) Data frame received for 3\nI0819 03:17:04.201947    1901 log.go:172] (0x268eb60) Data frame received for 5\nI0819 03:17:04.202154    1901 log.go:172] (0x24c0620) (5) Data frame handling\nI0819 03:17:04.202896    1901 log.go:172] (0x268fb20) (3) Data frame handling\nI0819 03:17:04.203950    1901 log.go:172] (0x268eb60) Data frame received for 1\nI0819 03:17:04.204067    1901 log.go:172] (0x268ed20) (1) Data frame handling\nI0819 03:17:04.204186    1901 log.go:172] (0x268ed20) (1) Data frame sent\nI0819 03:17:04.205021    1901 log.go:172] (0x268eb60) (0x268ed20) Stream removed, broadcasting: 1\nI0819 03:17:04.207891    1901 log.go:172] (0x268eb60) Go away received\nI0819 03:17:04.209594    1901 log.go:172] (0x268eb60) (0x268ed20) Stream removed, broadcasting: 1\nI0819 03:17:04.209859    1901 log.go:172] (0x268eb60) (0x268fb20) Stream removed, broadcasting: 3\nI0819 03:17:04.210088    1901 log.go:172] (0x268eb60) (0x24c0620) Stream removed, broadcasting: 5\n"
Aug 19 03:17:04.220: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 19 03:17:04.221: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 19 03:17:04.226: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 19 03:17:14.231: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 19 03:17:14.231: INFO: Waiting for statefulset status.replicas updated to 0
Aug 19 03:17:14.259: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 19 03:17:14.261: INFO: ss-0  iruya-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:16:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:16:52 +0000 UTC  }]
Aug 19 03:17:14.262: INFO: ss-1                 Pending         []
Aug 19 03:17:14.262: INFO: 
Aug 19 03:17:14.262: INFO: StatefulSet ss has not reached scale 3, at 2
Aug 19 03:17:15.307: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.981722482s
Aug 19 03:17:16.395: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.936460544s
Aug 19 03:17:17.402: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.84883228s
Aug 19 03:17:18.407: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.841196005s
Aug 19 03:17:19.446: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.83614702s
Aug 19 03:17:20.453: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.797663446s
Aug 19 03:17:21.461: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.790613401s
Aug 19 03:17:22.735: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.782864045s
Aug 19 03:17:23.742: INFO: Verifying statefulset ss doesn't scale past 3 for another 508.438629ms
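
The ten-second countdown above is a negative check: while a replica is unhealthy, the set must never overshoot its target of 3. A rough shell equivalent of what the framework's Go polling loop asserts (an illustration, not the framework's actual code):

  # Poll once per second and fail if the StatefulSet ever reports more
  # than 3 replicas.
  for i in $(seq 1 10); do
    replicas=$(kubectl -n statefulset-1612 get statefulset ss -o jsonpath='{.status.replicas}')
    [ "$replicas" -le 3 ] || { echo "scaled past 3: $replicas"; exit 1; }
    sleep 1
  done
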
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace statefulset-1612
Aug 19 03:17:25.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 03:17:26.503: INFO: stderr: "I0819 03:17:26.407835    1925 log.go:172] (0x29ddf10) (0x29ddf80) Create stream\nI0819 03:17:26.410850    1925 log.go:172] (0x29ddf10) (0x29ddf80) Stream added, broadcasting: 1\nI0819 03:17:26.426596    1925 log.go:172] (0x29ddf10) Reply frame received for 1\nI0819 03:17:26.427021    1925 log.go:172] (0x29ddf10) (0x24a40e0) Create stream\nI0819 03:17:26.427080    1925 log.go:172] (0x29ddf10) (0x24a40e0) Stream added, broadcasting: 3\nI0819 03:17:26.428182    1925 log.go:172] (0x29ddf10) Reply frame received for 3\nI0819 03:17:26.428398    1925 log.go:172] (0x29ddf10) (0x2689b90) Create stream\nI0819 03:17:26.428473    1925 log.go:172] (0x29ddf10) (0x2689b90) Stream added, broadcasting: 5\nI0819 03:17:26.429471    1925 log.go:172] (0x29ddf10) Reply frame received for 5\nI0819 03:17:26.488841    1925 log.go:172] (0x29ddf10) Data frame received for 3\nI0819 03:17:26.489130    1925 log.go:172] (0x29ddf10) Data frame received for 5\nI0819 03:17:26.489451    1925 log.go:172] (0x2689b90) (5) Data frame handling\nI0819 03:17:26.489572    1925 log.go:172] (0x24a40e0) (3) Data frame handling\nI0819 03:17:26.489875    1925 log.go:172] (0x29ddf10) Data frame received for 1\nI0819 03:17:26.490088    1925 log.go:172] (0x29ddf80) (1) Data frame handling\nI0819 03:17:26.490390    1925 log.go:172] (0x29ddf80) (1) Data frame sent\nI0819 03:17:26.490556    1925 log.go:172] (0x24a40e0) (3) Data frame sent\nI0819 03:17:26.490681    1925 log.go:172] (0x2689b90) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0819 03:17:26.491151    1925 log.go:172] (0x29ddf10) Data frame received for 5\nI0819 03:17:26.491275    1925 log.go:172] (0x2689b90) (5) Data frame handling\nI0819 03:17:26.491492    1925 log.go:172] (0x29ddf10) Data frame received for 3\nI0819 03:17:26.491681    1925 log.go:172] (0x29ddf10) (0x29ddf80) Stream removed, broadcasting: 1\nI0819 03:17:26.492551    1925 log.go:172] (0x24a40e0) (3) Data frame handling\nI0819 03:17:26.494349    1925 log.go:172] (0x29ddf10) Go away received\nI0819 03:17:26.496854    1925 log.go:172] (0x29ddf10) (0x29ddf80) Stream removed, broadcasting: 1\nI0819 03:17:26.497183    1925 log.go:172] (0x29ddf10) (0x24a40e0) Stream removed, broadcasting: 3\nI0819 03:17:26.497346    1925 log.go:172] (0x29ddf10) (0x2689b90) Stream removed, broadcasting: 5\n"
Aug 19 03:17:26.504: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 19 03:17:26.504: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 19 03:17:26.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 03:17:27.850: INFO: stderr: "I0819 03:17:27.748195    1946 log.go:172] (0x292f260) (0x292f2d0) Create stream\nI0819 03:17:27.749949    1946 log.go:172] (0x292f260) (0x292f2d0) Stream added, broadcasting: 1\nI0819 03:17:27.765778    1946 log.go:172] (0x292f260) Reply frame received for 1\nI0819 03:17:27.766275    1946 log.go:172] (0x292f260) (0x26d6070) Create stream\nI0819 03:17:27.766364    1946 log.go:172] (0x292f260) (0x26d6070) Stream added, broadcasting: 3\nI0819 03:17:27.767578    1946 log.go:172] (0x292f260) Reply frame received for 3\nI0819 03:17:27.767894    1946 log.go:172] (0x292f260) (0x24ac930) Create stream\nI0819 03:17:27.767975    1946 log.go:172] (0x292f260) (0x24ac930) Stream added, broadcasting: 5\nI0819 03:17:27.769202    1946 log.go:172] (0x292f260) Reply frame received for 5\nI0819 03:17:27.834342    1946 log.go:172] (0x292f260) Data frame received for 3\nI0819 03:17:27.834641    1946 log.go:172] (0x292f260) Data frame received for 1\nI0819 03:17:27.834743    1946 log.go:172] (0x292f2d0) (1) Data frame handling\nI0819 03:17:27.834916    1946 log.go:172] (0x292f260) Data frame received for 5\nI0819 03:17:27.835115    1946 log.go:172] (0x24ac930) (5) Data frame handling\nI0819 03:17:27.835257    1946 log.go:172] (0x26d6070) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0819 03:17:27.837104    1946 log.go:172] (0x292f2d0) (1) Data frame sent\nI0819 03:17:27.837439    1946 log.go:172] (0x26d6070) (3) Data frame sent\nI0819 03:17:27.837565    1946 log.go:172] (0x292f260) Data frame received for 3\nI0819 03:17:27.837669    1946 log.go:172] (0x26d6070) (3) Data frame handling\nI0819 03:17:27.838123    1946 log.go:172] (0x24ac930) (5) Data frame sent\nI0819 03:17:27.838331    1946 log.go:172] (0x292f260) (0x292f2d0) Stream removed, broadcasting: 1\nI0819 03:17:27.838720    1946 log.go:172] (0x292f260) Data frame received for 5\nI0819 03:17:27.839197    1946 log.go:172] (0x24ac930) (5) Data frame handling\nI0819 03:17:27.839564    1946 log.go:172] (0x292f260) Go away received\nI0819 03:17:27.842084    1946 log.go:172] (0x292f260) (0x292f2d0) Stream removed, broadcasting: 1\nI0819 03:17:27.842254    1946 log.go:172] (0x292f260) (0x26d6070) Stream removed, broadcasting: 3\nI0819 03:17:27.842397    1946 log.go:172] (0x292f260) (0x24ac930) Stream removed, broadcasting: 5\n"
Aug 19 03:17:27.851: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 19 03:17:27.851: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 19 03:17:27.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 03:17:29.246: INFO: stderr: "I0819 03:17:29.110525    1970 log.go:172] (0x2a04a10) (0x2a04a80) Create stream\nI0819 03:17:29.114218    1970 log.go:172] (0x2a04a10) (0x2a04a80) Stream added, broadcasting: 1\nI0819 03:17:29.134601    1970 log.go:172] (0x2a04a10) Reply frame received for 1\nI0819 03:17:29.135088    1970 log.go:172] (0x2a04a10) (0x2b6a000) Create stream\nI0819 03:17:29.135160    1970 log.go:172] (0x2a04a10) (0x2b6a000) Stream added, broadcasting: 3\nI0819 03:17:29.136240    1970 log.go:172] (0x2a04a10) Reply frame received for 3\nI0819 03:17:29.136454    1970 log.go:172] (0x2a04a10) (0x2b6a070) Create stream\nI0819 03:17:29.136508    1970 log.go:172] (0x2a04a10) (0x2b6a070) Stream added, broadcasting: 5\nI0819 03:17:29.137545    1970 log.go:172] (0x2a04a10) Reply frame received for 5\nI0819 03:17:29.226900    1970 log.go:172] (0x2a04a10) Data frame received for 3\nI0819 03:17:29.227327    1970 log.go:172] (0x2a04a10) Data frame received for 5\nI0819 03:17:29.227679    1970 log.go:172] (0x2a04a10) Data frame received for 1\nI0819 03:17:29.227832    1970 log.go:172] (0x2b6a070) (5) Data frame handling\nI0819 03:17:29.228053    1970 log.go:172] (0x2b6a000) (3) Data frame handling\nI0819 03:17:29.228433    1970 log.go:172] (0x2a04a80) (1) Data frame handling\nI0819 03:17:29.229468    1970 log.go:172] (0x2b6a000) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0819 03:17:29.229919    1970 log.go:172] (0x2a04a10) Data frame received for 3\nI0819 03:17:29.230566    1970 log.go:172] (0x2b6a000) (3) Data frame handling\nI0819 03:17:29.230706    1970 log.go:172] (0x2b6a070) (5) Data frame sent\nI0819 03:17:29.230923    1970 log.go:172] (0x2a04a10) Data frame received for 5\nI0819 03:17:29.231119    1970 log.go:172] (0x2b6a070) (5) Data frame handling\nI0819 03:17:29.233280    1970 log.go:172] (0x2a04a80) (1) Data frame sent\nI0819 03:17:29.234143    1970 log.go:172] (0x2a04a10) (0x2a04a80) Stream removed, broadcasting: 1\nI0819 03:17:29.234512    1970 log.go:172] (0x2a04a10) Go away received\nI0819 03:17:29.236945    1970 log.go:172] (0x2a04a10) (0x2a04a80) Stream removed, broadcasting: 1\nI0819 03:17:29.237159    1970 log.go:172] (0x2a04a10) (0x2b6a000) Stream removed, broadcasting: 3\nI0819 03:17:29.237323    1970 log.go:172] (0x2a04a10) (0x2b6a070) Stream removed, broadcasting: 5\n"
Aug 19 03:17:29.247: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 19 03:17:29.247: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

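Moving index.html back restores readiness on each replica. On ss-1 and ss-2 the stderr above shows mv failing with "can't rename '/tmp/index.html': No such file or directory" — those pods were just created by the scale-up and never had the file stashed — and the "|| true" is what lets the step succeed anyway. The per-pod restore, condensed:

  # Restore the probe target on every replica; tolerate pods that never
  # had index.html moved to /tmp in the first place.
  for pod in ss-0 ss-1 ss-2; do
    kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 "$pod" -- \
      /bin/sh -x -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
  done
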
Aug 19 03:17:29.254: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 19 03:17:29.254: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 19 03:17:29.254: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with an unhealthy stateful pod
Aug 19 03:17:29.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 19 03:17:30.612: INFO: stderr: "I0819 03:17:30.511206    1992 log.go:172] (0x2694150) (0x26943f0) Create stream\nI0819 03:17:30.513465    1992 log.go:172] (0x2694150) (0x26943f0) Stream added, broadcasting: 1\nI0819 03:17:30.521970    1992 log.go:172] (0x2694150) Reply frame received for 1\nI0819 03:17:30.522693    1992 log.go:172] (0x2694150) (0x26944d0) Create stream\nI0819 03:17:30.522780    1992 log.go:172] (0x2694150) (0x26944d0) Stream added, broadcasting: 3\nI0819 03:17:30.524222    1992 log.go:172] (0x2694150) Reply frame received for 3\nI0819 03:17:30.524480    1992 log.go:172] (0x2694150) (0x2ade000) Create stream\nI0819 03:17:30.524552    1992 log.go:172] (0x2694150) (0x2ade000) Stream added, broadcasting: 5\nI0819 03:17:30.525675    1992 log.go:172] (0x2694150) Reply frame received for 5\nI0819 03:17:30.595324    1992 log.go:172] (0x2694150) Data frame received for 5\nI0819 03:17:30.595494    1992 log.go:172] (0x2694150) Data frame received for 3\nI0819 03:17:30.595721    1992 log.go:172] (0x2694150) Data frame received for 1\nI0819 03:17:30.595937    1992 log.go:172] (0x26943f0) (1) Data frame handling\nI0819 03:17:30.596066    1992 log.go:172] (0x26944d0) (3) Data frame handling\nI0819 03:17:30.596303    1992 log.go:172] (0x2ade000) (5) Data frame handling\nI0819 03:17:30.597074    1992 log.go:172] (0x2ade000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0819 03:17:30.597530    1992 log.go:172] (0x26944d0) (3) Data frame sent\nI0819 03:17:30.597782    1992 log.go:172] (0x2694150) Data frame received for 5\nI0819 03:17:30.597929    1992 log.go:172] (0x2ade000) (5) Data frame handling\nI0819 03:17:30.598173    1992 log.go:172] (0x2694150) Data frame received for 3\nI0819 03:17:30.598284    1992 log.go:172] (0x26944d0) (3) Data frame handling\nI0819 03:17:30.598572    1992 log.go:172] (0x26943f0) (1) Data frame sent\nI0819 03:17:30.601121    1992 log.go:172] (0x2694150) (0x26943f0) Stream removed, broadcasting: 1\nI0819 03:17:30.601472    1992 log.go:172] (0x2694150) Go away received\nI0819 03:17:30.604361    1992 log.go:172] (0x2694150) (0x26943f0) Stream removed, broadcasting: 1\nI0819 03:17:30.604705    1992 log.go:172] (0x2694150) (0x26944d0) Stream removed, broadcasting: 3\nI0819 03:17:30.604971    1992 log.go:172] (0x2694150) (0x2ade000) Stream removed, broadcasting: 5\n"
Aug 19 03:17:30.613: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 19 03:17:30.613: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 19 03:17:30.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 19 03:17:32.413: INFO: stderr: "I0819 03:17:31.902964    2014 log.go:172] (0x295db90) (0x295dc00) Create stream\nI0819 03:17:31.905683    2014 log.go:172] (0x295db90) (0x295dc00) Stream added, broadcasting: 1\nI0819 03:17:31.921295    2014 log.go:172] (0x295db90) Reply frame received for 1\nI0819 03:17:31.921937    2014 log.go:172] (0x295db90) (0x2644150) Create stream\nI0819 03:17:31.922014    2014 log.go:172] (0x295db90) (0x2644150) Stream added, broadcasting: 3\nI0819 03:17:31.923326    2014 log.go:172] (0x295db90) Reply frame received for 3\nI0819 03:17:31.923583    2014 log.go:172] (0x295db90) (0x2676f50) Create stream\nI0819 03:17:31.923645    2014 log.go:172] (0x295db90) (0x2676f50) Stream added, broadcasting: 5\nI0819 03:17:31.924646    2014 log.go:172] (0x295db90) Reply frame received for 5\nI0819 03:17:31.995161    2014 log.go:172] (0x295db90) Data frame received for 5\nI0819 03:17:31.995463    2014 log.go:172] (0x2676f50) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0819 03:17:31.996487    2014 log.go:172] (0x2676f50) (5) Data frame sent\nI0819 03:17:32.395241    2014 log.go:172] (0x295db90) Data frame received for 5\nI0819 03:17:32.395432    2014 log.go:172] (0x2676f50) (5) Data frame handling\nI0819 03:17:32.395607    2014 log.go:172] (0x295db90) Data frame received for 3\nI0819 03:17:32.395794    2014 log.go:172] (0x2644150) (3) Data frame handling\nI0819 03:17:32.395972    2014 log.go:172] (0x2644150) (3) Data frame sent\nI0819 03:17:32.396168    2014 log.go:172] (0x295db90) Data frame received for 3\nI0819 03:17:32.396332    2014 log.go:172] (0x2644150) (3) Data frame handling\nI0819 03:17:32.397037    2014 log.go:172] (0x295db90) Data frame received for 1\nI0819 03:17:32.397238    2014 log.go:172] (0x295dc00) (1) Data frame handling\nI0819 03:17:32.397476    2014 log.go:172] (0x295dc00) (1) Data frame sent\nI0819 03:17:32.398771    2014 log.go:172] (0x295db90) (0x295dc00) Stream removed, broadcasting: 1\nI0819 03:17:32.400884    2014 log.go:172] (0x295db90) Go away received\nI0819 03:17:32.402955    2014 log.go:172] (0x295db90) (0x295dc00) Stream removed, broadcasting: 1\nI0819 03:17:32.403271    2014 log.go:172] (0x295db90) (0x2644150) Stream removed, broadcasting: 3\nI0819 03:17:32.403450    2014 log.go:172] (0x295db90) (0x2676f50) Stream removed, broadcasting: 5\n"
Aug 19 03:17:32.413: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 19 03:17:32.413: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 19 03:17:32.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 19 03:17:34.049: INFO: stderr: "I0819 03:17:33.682036    2037 log.go:172] (0x2a61490) (0x2a61500) Create stream\nI0819 03:17:33.685129    2037 log.go:172] (0x2a61490) (0x2a61500) Stream added, broadcasting: 1\nI0819 03:17:33.704514    2037 log.go:172] (0x2a61490) Reply frame received for 1\nI0819 03:17:33.705104    2037 log.go:172] (0x2a61490) (0x284d0a0) Create stream\nI0819 03:17:33.705175    2037 log.go:172] (0x2a61490) (0x284d0a0) Stream added, broadcasting: 3\nI0819 03:17:33.706596    2037 log.go:172] (0x2a61490) Reply frame received for 3\nI0819 03:17:33.706822    2037 log.go:172] (0x2a61490) (0x26a0230) Create stream\nI0819 03:17:33.706885    2037 log.go:172] (0x2a61490) (0x26a0230) Stream added, broadcasting: 5\nI0819 03:17:33.708017    2037 log.go:172] (0x2a61490) Reply frame received for 5\nI0819 03:17:33.769871    2037 log.go:172] (0x2a61490) Data frame received for 5\nI0819 03:17:33.770159    2037 log.go:172] (0x26a0230) (5) Data frame handling\nI0819 03:17:33.770756    2037 log.go:172] (0x26a0230) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0819 03:17:34.023293    2037 log.go:172] (0x2a61490) Data frame received for 3\nI0819 03:17:34.023497    2037 log.go:172] (0x284d0a0) (3) Data frame handling\nI0819 03:17:34.023606    2037 log.go:172] (0x2a61490) Data frame received for 5\nI0819 03:17:34.023766    2037 log.go:172] (0x26a0230) (5) Data frame handling\nI0819 03:17:34.023846    2037 log.go:172] (0x284d0a0) (3) Data frame sent\nI0819 03:17:34.023931    2037 log.go:172] (0x2a61490) Data frame received for 3\nI0819 03:17:34.024029    2037 log.go:172] (0x284d0a0) (3) Data frame handling\nI0819 03:17:34.025024    2037 log.go:172] (0x2a61490) Data frame received for 1\nI0819 03:17:34.025111    2037 log.go:172] (0x2a61500) (1) Data frame handling\nI0819 03:17:34.025201    2037 log.go:172] (0x2a61500) (1) Data frame sent\nI0819 03:17:34.025732    2037 log.go:172] (0x2a61490) (0x2a61500) Stream removed, broadcasting: 1\nI0819 03:17:34.027503    2037 log.go:172] (0x2a61490) Go away received\nI0819 03:17:34.029563    2037 log.go:172] (0x2a61490) (0x2a61500) Stream removed, broadcasting: 1\nI0819 03:17:34.029936    2037 log.go:172] (0x2a61490) (0x284d0a0) Stream removed, broadcasting: 3\nI0819 03:17:34.030115    2037 log.go:172] (0x2a61490) (0x26a0230) Stream removed, broadcasting: 5\n"
Aug 19 03:17:34.054: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 19 03:17:34.054: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

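At this point readiness has been broken on all three replicas at once, which sets up the test's real assertion (see the summary below: "Burst scaling should run to completion even with unhealthy pods"): with burst, i.e. Parallel, pod management the controller keeps scaling down even though no pod is Ready. One way to confirm the precondition by hand (a sketch; the framework checks this through the Go client):

  # Print each replica's Ready condition; all three should report False.
  for pod in ss-0 ss-1 ss-2; do
    kubectl -n statefulset-1612 get pod "$pod" \
      -o 'jsonpath={.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
  done
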
Aug 19 03:17:34.054: INFO: Waiting for statefulset status.replicas updated to 0
Aug 19 03:17:34.058: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Aug 19 03:17:44.070: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 19 03:17:44.070: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 19 03:17:44.070: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 19 03:17:44.087: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 19 03:17:44.087: INFO: ss-0  iruya-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:16:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:16:52 +0000 UTC  }]
Aug 19 03:17:44.087: INFO: ss-1  iruya-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  }]
Aug 19 03:17:44.088: INFO: ss-2  iruya-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  }]
Aug 19 03:17:44.088: INFO: 
Aug 19 03:17:44.088: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 19 03:17:45.094: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 19 03:17:45.094: INFO: ss-0  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:16:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:16:52 +0000 UTC  }]
Aug 19 03:17:45.094: INFO: ss-1  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  }]
Aug 19 03:17:45.094: INFO: ss-2  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  }]
Aug 19 03:17:45.095: INFO: 
Aug 19 03:17:45.095: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 19 03:17:46.359: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 19 03:17:46.359: INFO: ss-0  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:16:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:16:52 +0000 UTC  }]
Aug 19 03:17:46.359: INFO: ss-1  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  }]
Aug 19 03:17:46.360: INFO: ss-2  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  }]
Aug 19 03:17:46.360: INFO: 
Aug 19 03:17:46.360: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 19 03:17:47.365: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 19 03:17:47.366: INFO: ss-0  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:16:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:16:52 +0000 UTC  }]
Aug 19 03:17:47.366: INFO: ss-1  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  }]
Aug 19 03:17:47.366: INFO: ss-2  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  }]
Aug 19 03:17:47.366: INFO: 
Aug 19 03:17:47.366: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 19 03:17:48.374: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 19 03:17:48.374: INFO: ss-0  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:16:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:16:52 +0000 UTC  }]
Aug 19 03:17:48.375: INFO: ss-1  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  }]
Aug 19 03:17:48.375: INFO: ss-2  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  }]
Aug 19 03:17:48.376: INFO: 
Aug 19 03:17:48.376: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 19 03:17:49.392: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 19 03:17:49.393: INFO: ss-0  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:16:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:16:52 +0000 UTC  }]
Aug 19 03:17:49.393: INFO: ss-1  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  }]
Aug 19 03:17:49.394: INFO: ss-2  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  }]
Aug 19 03:17:49.394: INFO: 
Aug 19 03:17:49.394: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 19 03:17:50.402: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 19 03:17:50.402: INFO: ss-0  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:16:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:16:52 +0000 UTC  }]
Aug 19 03:17:50.403: INFO: ss-1  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  }]
Aug 19 03:17:50.404: INFO: ss-2  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  }]
Aug 19 03:17:50.404: INFO: 
Aug 19 03:17:50.404: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 19 03:17:51.412: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 19 03:17:51.412: INFO: ss-0  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:16:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:16:52 +0000 UTC  }]
Aug 19 03:17:51.413: INFO: ss-1  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  }]
Aug 19 03:17:51.413: INFO: ss-2  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  }]
Aug 19 03:17:51.414: INFO: 
Aug 19 03:17:51.414: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 19 03:17:52.421: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 19 03:17:52.421: INFO: ss-0  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:16:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:16:52 +0000 UTC  }]
Aug 19 03:17:52.422: INFO: ss-1  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  }]
Aug 19 03:17:52.422: INFO: ss-2  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  }]
Aug 19 03:17:52.423: INFO: 
Aug 19 03:17:52.423: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 19 03:17:53.430: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 19 03:17:53.430: INFO: ss-0  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:16:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:16:52 +0000 UTC  }]
Aug 19 03:17:53.431: INFO: ss-2  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:17:14 +0000 UTC  }]
Aug 19 03:17:53.431: INFO: 
Aug 19 03:17:53.431: INFO: StatefulSet ss has not reached scale 0, at 2
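
While waiting for status.replicas to reach 0, the framework dumps this pod table once per second. The GRACE column appears once a pod carries a deletion timestamp (here the default 30-second grace period), and pods drop out of the table as they finish terminating — ss-1 is already gone in the 03:17:53 listing. The same wait can be expressed with kubectl (a sketch; the framework polls through the Go client):

  # Block until the three replicas are actually deleted.
  kubectl -n statefulset-1612 wait --for=delete pod/ss-0 pod/ss-1 pod/ss-2 --timeout=120s
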
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-1612
Aug 19 03:17:54.799: INFO: Scaling statefulset ss to 0
Aug 19 03:17:54.839: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug 19 03:17:54.842: INFO: Deleting all statefulset in ns statefulset-1612
Aug 19 03:17:54.845: INFO: Scaling statefulset ss to 0
Aug 19 03:17:54.854: INFO: Waiting for statefulset status.replicas updated to 0
Aug 19 03:17:54.857: INFO: Deleting statefulset ss
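
The teardown above follows the suite's usual StatefulSet cleanup pattern: scale every StatefulSet in the namespace to 0, wait for status.replicas to confirm it, then delete the object. Expressed with kubectl (an illustration of the same sequence):

  kubectl -n statefulset-1612 scale statefulset ss --replicas=0
  kubectl -n statefulset-1612 delete statefulset ss
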
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:17:54.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1612" for this suite.
Aug 19 03:18:04.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:18:05.062: INFO: namespace statefulset-1612 deletion completed in 10.15976003s

• [SLOW TEST:72.344 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:18:05.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-afa76966-8d04-4ac5-babe-e09844a612b4
STEP: Creating a pod to test consume secrets
Aug 19 03:18:05.256: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-654ad7ae-25e3-4c34-8a17-1cb8633154c2" in namespace "projected-4381" to be "success or failure"
Aug 19 03:18:05.272: INFO: Pod "pod-projected-secrets-654ad7ae-25e3-4c34-8a17-1cb8633154c2": Phase="Pending", Reason="", readiness=false. Elapsed: 16.723792ms
Aug 19 03:18:07.278: INFO: Pod "pod-projected-secrets-654ad7ae-25e3-4c34-8a17-1cb8633154c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022273471s
Aug 19 03:18:09.370: INFO: Pod "pod-projected-secrets-654ad7ae-25e3-4c34-8a17-1cb8633154c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113935509s
Aug 19 03:18:11.376: INFO: Pod "pod-projected-secrets-654ad7ae-25e3-4c34-8a17-1cb8633154c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.120432243s
STEP: Saw pod success
Aug 19 03:18:11.376: INFO: Pod "pod-projected-secrets-654ad7ae-25e3-4c34-8a17-1cb8633154c2" satisfied condition "success or failure"
Aug 19 03:18:11.387: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-654ad7ae-25e3-4c34-8a17-1cb8633154c2 container projected-secret-volume-test: 
STEP: delete the pod
Aug 19 03:18:11.419: INFO: Waiting for pod pod-projected-secrets-654ad7ae-25e3-4c34-8a17-1cb8633154c2 to disappear
Aug 19 03:18:11.427: INFO: Pod pod-projected-secrets-654ad7ae-25e3-4c34-8a17-1cb8633154c2 no longer exists
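
The pod in this test runs to completion rather than serving traffic: "success or failure" is satisfied when its phase reaches Succeeded (the Pending→Succeeded progression above), meaning the test container read the mapped secret file and exited 0. A hand-written equivalent of the setup — the key name, mapped path, and image are assumptions for illustration, not the suite's exact spec:

  kubectl -n projected-4381 apply -f - <<'EOF'
  apiVersion: v1
  kind: Secret
  metadata:
    name: projected-secret-test-map-afa76966-8d04-4ac5-babe-e09844a612b4
  stringData:
    data-1: value-1                       # assumed key/value
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-secrets-example   # assumed name
  spec:
    restartPolicy: Never
    containers:
    - name: projected-secret-volume-test
      image: busybox                      # assumed image
      command: ["cat", "/etc/projected-secret-volume/new-path-data-1"]
      volumeMounts:
      - name: projected-secret-volume
        mountPath: /etc/projected-secret-volume
    volumes:
    - name: projected-secret-volume
      projected:
        sources:
        - secret:
            name: projected-secret-test-map-afa76966-8d04-4ac5-babe-e09844a612b4
            items:
            - key: data-1
              path: new-path-data-1       # the "mapping" under test
  EOF
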
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:18:11.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4381" for this suite.
Aug 19 03:18:17.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:18:17.597: INFO: namespace projected-4381 deletion completed in 6.158631516s

• [SLOW TEST:12.533 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:18:17.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Aug 19 03:18:17.679: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2332" to be "success or failure"
Aug 19 03:18:17.700: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 20.620074ms
Aug 19 03:18:19.717: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037590976s
Aug 19 03:18:21.722: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04230326s
Aug 19 03:18:23.861: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.181783095s
STEP: Saw pod success
Aug 19 03:18:23.861: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Aug 19 03:18:23.865: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Aug 19 03:18:24.459: INFO: Waiting for pod pod-host-path-test to disappear
Aug 19 03:18:24.468: INFO: Pod pod-host-path-test no longer exists
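
hostPath volumes expose a path from the node's filesystem, and this test asserts the mount shows up inside the container with the expected mode. A minimal way to eyeball the same thing — the path, image, and ls check are assumptions; the conformance test uses its own mounttest image:

  kubectl -n hostpath-2332 apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-host-path-mode-example
  spec:
    restartPolicy: Never
    containers:
    - name: test-container-1
      image: busybox
      command: ["ls", "-ld", "/test-volume"]   # prints the volume's mode
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      hostPath:
        path: /tmp/host-path-test
        type: DirectoryOrCreate
  EOF
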
[AfterEach] [sig-storage] HostPath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:18:24.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-2332" for this suite.
Aug 19 03:18:34.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:18:34.923: INFO: namespace hostpath-2332 deletion completed in 10.448241366s

• [SLOW TEST:17.325 seconds]
[sig-storage] HostPath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:18:34.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-602.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-602.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-602.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-602.svc.cluster.local; sleep 1; done

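An ExternalName service has no cluster IP; the cluster DNS answers queries for its name with a CNAME to spec.externalName, which is exactly what the dig loops above capture once per second into the results files. The later probe output shows the service initially points at foo.example.com; created by hand that would be:

  kubectl -n dns-602 create service externalname dns-test-service-3 \
    --external-name foo.example.com
  # From a pod with dig available (the wheezy/jessie probe images above):
  dig +short dns-test-service-3.dns-602.svc.cluster.local CNAME
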
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 19 03:18:46.822: INFO: DNS probes using dns-test-6ed7555b-6d82-4ef6-ae71-1e4d3dad9f47 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-602.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-602.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-602.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-602.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 19 03:18:58.223: INFO: File wheezy_udp@dns-test-service-3.dns-602.svc.cluster.local from pod  dns-602/dns-test-d21a0a50-faf3-437d-8d70-1f186be321a6 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 19 03:18:58.230: INFO: File jessie_udp@dns-test-service-3.dns-602.svc.cluster.local from pod  dns-602/dns-test-d21a0a50-faf3-437d-8d70-1f186be321a6 contains '' instead of 'bar.example.com.'
Aug 19 03:18:58.230: INFO: Lookups using dns-602/dns-test-d21a0a50-faf3-437d-8d70-1f186be321a6 failed for: [wheezy_udp@dns-test-service-3.dns-602.svc.cluster.local jessie_udp@dns-test-service-3.dns-602.svc.cluster.local]

Aug 19 03:19:03.259: INFO: File wheezy_udp@dns-test-service-3.dns-602.svc.cluster.local from pod  dns-602/dns-test-d21a0a50-faf3-437d-8d70-1f186be321a6 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 19 03:19:03.264: INFO: File jessie_udp@dns-test-service-3.dns-602.svc.cluster.local from pod  dns-602/dns-test-d21a0a50-faf3-437d-8d70-1f186be321a6 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 19 03:19:03.264: INFO: Lookups using dns-602/dns-test-d21a0a50-faf3-437d-8d70-1f186be321a6 failed for: [wheezy_udp@dns-test-service-3.dns-602.svc.cluster.local jessie_udp@dns-test-service-3.dns-602.svc.cluster.local]

Aug 19 03:19:08.238: INFO: File wheezy_udp@dns-test-service-3.dns-602.svc.cluster.local from pod  dns-602/dns-test-d21a0a50-faf3-437d-8d70-1f186be321a6 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 19 03:19:08.242: INFO: File jessie_udp@dns-test-service-3.dns-602.svc.cluster.local from pod  dns-602/dns-test-d21a0a50-faf3-437d-8d70-1f186be321a6 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 19 03:19:08.242: INFO: Lookups using dns-602/dns-test-d21a0a50-faf3-437d-8d70-1f186be321a6 failed for: [wheezy_udp@dns-test-service-3.dns-602.svc.cluster.local jessie_udp@dns-test-service-3.dns-602.svc.cluster.local]

Aug 19 03:19:13.237: INFO: File wheezy_udp@dns-test-service-3.dns-602.svc.cluster.local from pod  dns-602/dns-test-d21a0a50-faf3-437d-8d70-1f186be321a6 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 19 03:19:13.241: INFO: File jessie_udp@dns-test-service-3.dns-602.svc.cluster.local from pod  dns-602/dns-test-d21a0a50-faf3-437d-8d70-1f186be321a6 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 19 03:19:13.241: INFO: Lookups using dns-602/dns-test-d21a0a50-faf3-437d-8d70-1f186be321a6 failed for: [wheezy_udp@dns-test-service-3.dns-602.svc.cluster.local jessie_udp@dns-test-service-3.dns-602.svc.cluster.local]

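The failures above are expected transients: for roughly twenty seconds after the externalName change the probe files still hold the old answer, foo.example.com., because the dig loop captures whatever the cluster DNS currently returns; the framework simply re-reads the files every five seconds until both show bar.example.com. The rename itself, expressed as a patch (a sketch; the test updates the service through the Go client):

  kubectl -n dns-602 patch service dns-test-service-3 --type=merge \
    -p '{"spec":{"externalName":"bar.example.com"}}'
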
Aug 19 03:19:18.242: INFO: DNS probes using dns-test-d21a0a50-faf3-437d-8d70-1f186be321a6 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-602.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-602.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-602.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-602.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 19 03:19:31.077: INFO: DNS probes using dns-test-a2b22c13-22ab-4e82-a90a-5dce27d00e40 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:19:31.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-602" for this suite.
Aug 19 03:19:37.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:19:37.306: INFO: namespace dns-602 deletion completed in 6.165378627s

• [SLOW TEST:62.382 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:19:37.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Aug 19 03:19:37.414: INFO: Waiting up to 5m0s for pod "client-containers-374bdc6d-8200-400a-a71f-9b6a50f6192b" in namespace "containers-8213" to be "success or failure"
Aug 19 03:19:37.446: INFO: Pod "client-containers-374bdc6d-8200-400a-a71f-9b6a50f6192b": Phase="Pending", Reason="", readiness=false. Elapsed: 30.985467ms
Aug 19 03:19:39.472: INFO: Pod "client-containers-374bdc6d-8200-400a-a71f-9b6a50f6192b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057497774s
Aug 19 03:19:41.479: INFO: Pod "client-containers-374bdc6d-8200-400a-a71f-9b6a50f6192b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06464724s
STEP: Saw pod success
Aug 19 03:19:41.480: INFO: Pod "client-containers-374bdc6d-8200-400a-a71f-9b6a50f6192b" satisfied condition "success or failure"
Aug 19 03:19:41.545: INFO: Trying to get logs from node iruya-worker2 pod client-containers-374bdc6d-8200-400a-a71f-9b6a50f6192b container test-container: 
STEP: delete the pod
Aug 19 03:19:41.611: INFO: Waiting for pod client-containers-374bdc6d-8200-400a-a71f-9b6a50f6192b to disappear
Aug 19 03:19:41.626: INFO: Pod client-containers-374bdc6d-8200-400a-a71f-9b6a50f6192b no longer exists
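
Overriding an image's default arguments (its docker CMD) maps to the args field of the container spec, while command (not the subject of this test) would replace the ENTRYPOINT. A minimal self-contained sketch — the image, stand-in entrypoint, and argument values are assumptions:

  kubectl -n containers-8213 apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: client-containers-args-example
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["echo"]                   # stands in for an image ENTRYPOINT
      args: ["overridden", "arguments"]   # replaces the image's CMD
  EOF
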
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:19:41.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8213" for this suite.
Aug 19 03:19:47.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:19:47.829: INFO: namespace containers-8213 deletion completed in 6.195992397s

• [SLOW TEST:10.518 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:19:47.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Aug 19 03:19:47.910: INFO: Waiting up to 5m0s for pod "pod-39827665-61ff-4243-bb40-f04a50423c52" in namespace "emptydir-1335" to be "success or failure"
Aug 19 03:19:47.920: INFO: Pod "pod-39827665-61ff-4243-bb40-f04a50423c52": Phase="Pending", Reason="", readiness=false. Elapsed: 9.765708ms
Aug 19 03:19:50.150: INFO: Pod "pod-39827665-61ff-4243-bb40-f04a50423c52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.240139035s
Aug 19 03:19:52.158: INFO: Pod "pod-39827665-61ff-4243-bb40-f04a50423c52": Phase="Running", Reason="", readiness=true. Elapsed: 4.248034706s
Aug 19 03:19:54.166: INFO: Pod "pod-39827665-61ff-4243-bb40-f04a50423c52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.256056277s
STEP: Saw pod success
Aug 19 03:19:54.166: INFO: Pod "pod-39827665-61ff-4243-bb40-f04a50423c52" satisfied condition "success or failure"
Aug 19 03:19:54.173: INFO: Trying to get logs from node iruya-worker pod pod-39827665-61ff-4243-bb40-f04a50423c52 container test-container: 
STEP: delete the pod
Aug 19 03:19:54.239: INFO: Waiting for pod pod-39827665-61ff-4243-bb40-f04a50423c52 to disappear
Aug 19 03:19:54.285: INFO: Pod pod-39827665-61ff-4243-bb40-f04a50423c52 no longer exists
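
"Volume on tmpfs" means an emptyDir with medium: Memory, which the kubelet backs with a tmpfs mount; the test then checks the mount's filesystem type and the directory's mode. A quick in-pod check of both — the image and shell probe are assumptions; the conformance test uses its own mounttest image:

  kubectl -n emptydir-1335 apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-emptydir-tmpfs-example
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "mount | grep ' /test-volume ' && ls -ld /test-volume"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir:
        medium: Memory        # tmpfs-backed
  EOF
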
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:19:54.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1335" for this suite.
Aug 19 03:20:00.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:20:00.493: INFO: namespace emptydir-1335 deletion completed in 6.194369662s

• [SLOW TEST:12.662 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:20:00.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 03:20:00.627: INFO: Creating deployment "nginx-deployment"
Aug 19 03:20:00.633: INFO: Waiting for observed generation 1
Aug 19 03:20:02.701: INFO: Waiting for all required pods to come up
Aug 19 03:20:02.712: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Aug 19 03:20:14.728: INFO: Waiting for deployment "nginx-deployment" to complete
Aug 19 03:20:14.740: INFO: Updating deployment "nginx-deployment" with a non-existent image
Aug 19 03:20:14.749: INFO: Updating deployment nginx-deployment
Aug 19 03:20:14.750: INFO: Waiting for observed generation 2
Aug 19 03:20:16.787: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Aug 19 03:20:16.792: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Aug 19 03:20:16.796: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Aug 19 03:20:16.810: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Aug 19 03:20:16.811: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Aug 19 03:20:16.814: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Aug 19 03:20:16.821: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Aug 19 03:20:16.822: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Aug 19 03:20:16.830: INFO: Updating deployment nginx-deployment
Aug 19 03:20:16.830: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Aug 19 03:20:17.004: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Aug 19 03:20:17.138: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
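(Editor's note) The .spec.replicas figures above follow from proportional-scaling arithmetic: with maxSurge: 3 (see the Deployment dump below), scaling from 10 to 30 allows at most 30 + 3 = 33 total replicas. At scale-up time the old ReplicaSet held 8 replicas and the stalled new one 5 (13 total), so the remaining 33 - 13 = 20 replicas are distributed in proportion to current size: the old set gains about 20 * 8/13 = 12 (8 -> 20) and the new set about 20 * 5/13 = 8 (5 -> 13) — exactly the values verified at 03:20:17. A minimal hand-written Deployment equivalent to the one under test (field values transcribed from the dumps below; only the manifest shape is assumed):

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nginx-deployment
    namespace: deployment-2235         # generated namespace from this run
  spec:
    replicas: 30                       # scaled up from the initial 10
    selector:
      matchLabels:
        name: nginx
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxSurge: 3                    # gives the 30 + 3 = 33 replica ceiling
        maxUnavailable: 2
    template:
      metadata:
        labels:
          name: nginx
      spec:
        containers:
        - name: nginx
          image: nginx:404             # deliberately unresolvable tag, so the rollout stalls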
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Aug 19 03:20:19.525: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-2235,SelfLink:/apis/apps/v1/namespaces/deployment-2235/deployments/nginx-deployment,UID:4e33d3e1-d849-464b-b7c0-89a8a9224939,ResourceVersion:972440,Generation:3,CreationTimestamp:2020-08-19 03:20:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-08-19 03:20:16 +0000 UTC 2020-08-19 03:20:16 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-08-19 03:20:17 +0000 UTC 2020-08-19 03:20:00 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}
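(Editor's note) The single-line Go dump above is hard to scan; its Status portion, hand-transcribed into YAML (abridged), reads:

  status:
    observedGeneration: 3
    replicas: 33                       # 30 desired + maxSurge 3
    updatedReplicas: 13                # pods from the new (nginx:404) ReplicaSet
    readyReplicas: 8
    availableReplicas: 8               # only the old ReplicaSet's pods are Ready
    unavailableReplicas: 25
    conditions:
    - type: Available
      status: "False"
      reason: MinimumReplicasUnavailable
    - type: Progressing
      status: "True"
      reason: ReplicaSetUpdated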

Aug 19 03:20:19.590: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-2235,SelfLink:/apis/apps/v1/namespaces/deployment-2235/replicasets/nginx-deployment-55fb7cb77f,UID:aacada36-835a-4cd0-9d0b-27d39f6477c0,ResourceVersion:972438,Generation:3,CreationTimestamp:2020-08-19 03:20:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 4e33d3e1-d849-464b-b7c0-89a8a9224939 0x8eeaf57 0x8eeaf58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 19 03:20:19.590: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Aug 19 03:20:19.591: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-2235,SelfLink:/apis/apps/v1/namespaces/deployment-2235/replicasets/nginx-deployment-7b8c6f4498,UID:d3a14e4c-459a-4e0e-a121-f1bc1f079aca,ResourceVersion:972421,Generation:3,CreationTimestamp:2020-08-19 03:20:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 4e33d3e1-d849-464b-b7c0-89a8a9224939 0x8eeb027 0x8eeb028}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Aug 19 03:20:19.611: INFO: Pod "nginx-deployment-55fb7cb77f-4qljz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4qljz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2235,SelfLink:/api/v1/namespaces/deployment-2235/pods/nginx-deployment-55fb7cb77f-4qljz,UID:3ec520ad-86b1-4cc1-b653-f28bc8684677,ResourceVersion:972338,Generation:0,CreationTimestamp:2020-08-19 03:20:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f aacada36-835a-4cd0-9d0b-27d39f6477c0 0x8ae0297 0x8ae0298}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vh9h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vh9h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vh9h8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8ae0310} {node.kubernetes.io/unreachable Exists  NoExecute 0x8ae0330}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:14 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-19 03:20:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 03:20:19.612: INFO: Pod "nginx-deployment-55fb7cb77f-9ltkn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9ltkn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2235,SelfLink:/api/v1/namespaces/deployment-2235/pods/nginx-deployment-55fb7cb77f-9ltkn,UID:31d8e7c1-96d1-4470-aa4d-aecb132fd70a,ResourceVersion:972468,Generation:0,CreationTimestamp:2020-08-19 03:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f aacada36-835a-4cd0-9d0b-27d39f6477c0 0x8ae0400 0x8ae0401}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vh9h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vh9h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vh9h8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8ae0480} {node.kubernetes.io/unreachable Exists  NoExecute 0x8ae04a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-19 03:20:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 03:20:19.614: INFO: Pod "nginx-deployment-55fb7cb77f-b2q7d" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-b2q7d,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2235,SelfLink:/api/v1/namespaces/deployment-2235/pods/nginx-deployment-55fb7cb77f-b2q7d,UID:61e3a3ef-da95-4d1e-8d5b-c092385efb39,ResourceVersion:972497,Generation:0,CreationTimestamp:2020-08-19 03:20:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f aacada36-835a-4cd0-9d0b-27d39f6477c0 0x8ae0570 0x8ae0571}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vh9h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vh9h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vh9h8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8ae05f0} {node.kubernetes.io/unreachable Exists  NoExecute 0x8ae0610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:15 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.114,StartTime:2020-08-19 03:20:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 03:20:19.616: INFO: Pod "nginx-deployment-55fb7cb77f-cllzn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cllzn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2235,SelfLink:/api/v1/namespaces/deployment-2235/pods/nginx-deployment-55fb7cb77f-cllzn,UID:c4b78b33-c7a3-4374-a85c-f141f0e2d1c2,ResourceVersion:972492,Generation:0,CreationTimestamp:2020-08-19 03:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f aacada36-835a-4cd0-9d0b-27d39f6477c0 0x8ae0700 0x8ae0701}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vh9h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vh9h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vh9h8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8ae0780} {node.kubernetes.io/unreachable Exists  NoExecute 0x8ae07a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-19 03:20:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 03:20:19.617: INFO: Pod "nginx-deployment-55fb7cb77f-fk82n" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fk82n,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2235,SelfLink:/api/v1/namespaces/deployment-2235/pods/nginx-deployment-55fb7cb77f-fk82n,UID:3d942d5a-51a4-4882-9ddb-022b3f03820b,ResourceVersion:972354,Generation:0,CreationTimestamp:2020-08-19 03:20:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f aacada36-835a-4cd0-9d0b-27d39f6477c0 0x8ae0870 0x8ae0871}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vh9h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vh9h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vh9h8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8ae08f0} {node.kubernetes.io/unreachable Exists  NoExecute 0x8ae0910}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:14 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-19 03:20:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 03:20:19.618: INFO: Pod "nginx-deployment-55fb7cb77f-j7k9z" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-j7k9z,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2235,SelfLink:/api/v1/namespaces/deployment-2235/pods/nginx-deployment-55fb7cb77f-j7k9z,UID:0dd210bd-7ccf-4646-b336-e2cece584143,ResourceVersion:972494,Generation:0,CreationTimestamp:2020-08-19 03:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f aacada36-835a-4cd0-9d0b-27d39f6477c0 0x8ae09e0 0x8ae09e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vh9h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vh9h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vh9h8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8ae0a60} {node.kubernetes.io/unreachable Exists  NoExecute 0x8ae0a80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-19 03:20:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 03:20:19.619: INFO: Pod "nginx-deployment-55fb7cb77f-jwfsd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jwfsd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2235,SelfLink:/api/v1/namespaces/deployment-2235/pods/nginx-deployment-55fb7cb77f-jwfsd,UID:4edee5e6-5ae0-443e-9810-e26e2e9a51ad,ResourceVersion:972490,Generation:0,CreationTimestamp:2020-08-19 03:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f aacada36-835a-4cd0-9d0b-27d39f6477c0 0x8ae0b50 0x8ae0b51}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vh9h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vh9h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vh9h8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8ae0be0} {node.kubernetes.io/unreachable Exists  NoExecute 0x8ae0c00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-19 03:20:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 03:20:19.620: INFO: Pod "nginx-deployment-55fb7cb77f-lt4h5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lt4h5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2235,SelfLink:/api/v1/namespaces/deployment-2235/pods/nginx-deployment-55fb7cb77f-lt4h5,UID:df6ac245-8e7a-4b84-9921-535c3bf664ae,ResourceVersion:972471,Generation:0,CreationTimestamp:2020-08-19 03:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f aacada36-835a-4cd0-9d0b-27d39f6477c0 0x8ae0cd0 0x8ae0cd1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vh9h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vh9h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vh9h8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8ae0d50} {node.kubernetes.io/unreachable Exists  NoExecute 0x8ae0d70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-19 03:20:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 03:20:19.622: INFO: Pod "nginx-deployment-55fb7cb77f-lzcdb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lzcdb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2235,SelfLink:/api/v1/namespaces/deployment-2235/pods/nginx-deployment-55fb7cb77f-lzcdb,UID:a5ef4139-c74c-406a-b0d9-9e89af098dcc,ResourceVersion:972458,Generation:0,CreationTimestamp:2020-08-19 03:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f aacada36-835a-4cd0-9d0b-27d39f6477c0 0x8ae0e50 0x8ae0e51}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vh9h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vh9h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vh9h8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8ae0ed0} {node.kubernetes.io/unreachable Exists  NoExecute 0x8ae0ef0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-19 03:20:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 03:20:19.622: INFO: Pod "nginx-deployment-55fb7cb77f-pd9kk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-pd9kk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2235,SelfLink:/api/v1/namespaces/deployment-2235/pods/nginx-deployment-55fb7cb77f-pd9kk,UID:2a7f37f7-f88f-4bd9-9b36-ed1638cbdb47,ResourceVersion:972454,Generation:0,CreationTimestamp:2020-08-19 03:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f aacada36-835a-4cd0-9d0b-27d39f6477c0 0x8ae0fc0 0x8ae0fc1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vh9h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vh9h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vh9h8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8ae1040} {node.kubernetes.io/unreachable Exists  NoExecute 0x8ae1060}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-19 03:20:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 03:20:19.624: INFO: Pod "nginx-deployment-55fb7cb77f-t8qlh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-t8qlh,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2235,SelfLink:/api/v1/namespaces/deployment-2235/pods/nginx-deployment-55fb7cb77f-t8qlh,UID:65578c89-e97d-4fb8-9a1f-eae89842732d,ResourceVersion:972357,Generation:0,CreationTimestamp:2020-08-19 03:20:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f aacada36-835a-4cd0-9d0b-27d39f6477c0 0x8ae1130 0x8ae1131}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vh9h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vh9h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vh9h8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8ae11f0} {node.kubernetes.io/unreachable Exists  NoExecute 0x8ae1260}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:15 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-19 03:20:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 03:20:19.625: INFO: Pod "nginx-deployment-55fb7cb77f-wt8ms" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wt8ms,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2235,SelfLink:/api/v1/namespaces/deployment-2235/pods/nginx-deployment-55fb7cb77f-wt8ms,UID:6d21487e-bf4e-4217-9cc3-4ac3dbb22681,ResourceVersion:972434,Generation:0,CreationTimestamp:2020-08-19 03:20:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f aacada36-835a-4cd0-9d0b-27d39f6477c0 0x8ae1470 0x8ae1471}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vh9h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vh9h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vh9h8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8ae1500} {node.kubernetes.io/unreachable Exists  NoExecute 0x8ae1520}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-19 03:20:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 03:20:19.626: INFO: Pod "nginx-deployment-55fb7cb77f-xrngw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xrngw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2235,SelfLink:/api/v1/namespaces/deployment-2235/pods/nginx-deployment-55fb7cb77f-xrngw,UID:35eec60c-df74-4887-a510-23930fd6e514,ResourceVersion:972344,Generation:0,CreationTimestamp:2020-08-19 03:20:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f aacada36-835a-4cd0-9d0b-27d39f6477c0 0x8ae1630 0x8ae1631}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vh9h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vh9h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vh9h8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8ae16d0} {node.kubernetes.io/unreachable Exists  NoExecute 0x8ae1700}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:14 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-19 03:20:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 03:20:19.627: INFO: Pod "nginx-deployment-7b8c6f4498-57bvf" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-57bvf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2235,SelfLink:/api/v1/namespaces/deployment-2235/pods/nginx-deployment-7b8c6f4498-57bvf,UID:675b1f7c-0b02-4d1d-bab5-21c8010998c8,ResourceVersion:972284,Generation:0,CreationTimestamp:2020-08-19 03:20:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d3a14e4c-459a-4e0e-a121-f1bc1f079aca 0x8ae1950 0x8ae1951}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vh9h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vh9h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vh9h8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8ae19c0} {node.kubernetes.io/unreachable Exists  NoExecute 0x8ae19e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:00 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:00 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.112,StartTime:2020-08-19 03:20:00 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-19 03:20:11 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://62a5565ce0ebe5f0f83e3d36ea565c20ac7e8be971d12fa29699f52f63b07fa7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 03:20:19.628: INFO: Pod "nginx-deployment-7b8c6f4498-68zbj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-68zbj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2235,SelfLink:/api/v1/namespaces/deployment-2235/pods/nginx-deployment-7b8c6f4498-68zbj,UID:a8e546db-74fe-46ae-bf73-e95e0a231c3a,ResourceVersion:972244,Generation:0,CreationTimestamp:2020-08-19 03:20:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d3a14e4c-459a-4e0e-a121-f1bc1f079aca 0x8ae1be0 0x8ae1be1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vh9h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vh9h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vh9h8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8ae1ce0} {node.kubernetes.io/unreachable Exists  NoExecute 0x8ae1d80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:00 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:00 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.109,StartTime:2020-08-19 03:20:00 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-19 03:20:05 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://6f30e1c97d471d3bfacbc5afae9be0aa4b3b7b490cea7fb3b13862a57afb10e6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 03:20:19.629: INFO: Pod "nginx-deployment-7b8c6f4498-6tspj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6tspj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2235,SelfLink:/api/v1/namespaces/deployment-2235/pods/nginx-deployment-7b8c6f4498-6tspj,UID:7a17546d-3acf-4e50-931d-758cdcc155f0,ResourceVersion:972483,Generation:0,CreationTimestamp:2020-08-19 03:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d3a14e4c-459a-4e0e-a121-f1bc1f079aca 0x8ae1e80 0x8ae1e81}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vh9h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vh9h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vh9h8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8ae1ef0} {node.kubernetes.io/unreachable Exists  NoExecute 0x8ae1f10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-19 03:20:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 03:20:19.630: INFO: Pod "nginx-deployment-7b8c6f4498-7kgr6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7kgr6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2235,SelfLink:/api/v1/namespaces/deployment-2235/pods/nginx-deployment-7b8c6f4498-7kgr6,UID:d1970e6d-b93e-4bca-8936-a97a92738aa6,ResourceVersion:972460,Generation:0,CreationTimestamp:2020-08-19 03:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d3a14e4c-459a-4e0e-a121-f1bc1f079aca 0x8f36050 0x8f36051}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vh9h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vh9h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vh9h8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8f360c0} {node.kubernetes.io/unreachable Exists  NoExecute 0x8f360e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-19 03:20:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 03:20:19.631: INFO: Pod "nginx-deployment-7b8c6f4498-98fgc" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-98fgc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2235,SelfLink:/api/v1/namespaces/deployment-2235/pods/nginx-deployment-7b8c6f4498-98fgc,UID:f15e24cf-16aa-4550-bdf2-2f18718e6d24,ResourceVersion:972274,Generation:0,CreationTimestamp:2020-08-19 03:20:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d3a14e4c-459a-4e0e-a121-f1bc1f079aca 0x8f361a0 0x8f361a1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vh9h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vh9h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vh9h8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8f36210} {node.kubernetes.io/unreachable Exists  NoExecute 0x8f36230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:00 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:00 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.38,StartTime:2020-08-19 03:20:00 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-19 03:20:11 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://1edcd1bd31e1dda7ab22a98a1d37cde240b661c2bc3519d91a4a565a47f72da1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 03:20:19.632: INFO: Pod "nginx-deployment-7b8c6f4498-9m8n5" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9m8n5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2235,SelfLink:/api/v1/namespaces/deployment-2235/pods/nginx-deployment-7b8c6f4498-9m8n5,UID:e863b91d-bebc-4704-9908-ba52910f3782,ResourceVersion:972292,Generation:0,CreationTimestamp:2020-08-19 03:20:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d3a14e4c-459a-4e0e-a121-f1bc1f079aca 0x8f36300 0x8f36301}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vh9h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vh9h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vh9h8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8f36370} {node.kubernetes.io/unreachable Exists  NoExecute 0x8f36390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:00 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:00 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.39,StartTime:2020-08-19 03:20:00 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-19 03:20:11 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://8e71bf9fde719580c29056f98aec639fb7570a689d7536fbd7537930f1267130}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 03:20:19.633: INFO: Pod "nginx-deployment-7b8c6f4498-9ngnf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9ngnf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2235,SelfLink:/api/v1/namespaces/deployment-2235/pods/nginx-deployment-7b8c6f4498-9ngnf,UID:83260f27-b03c-4585-957b-d4f5662d7a39,ResourceVersion:972446,Generation:0,CreationTimestamp:2020-08-19 03:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d3a14e4c-459a-4e0e-a121-f1bc1f079aca 0x8f36460 0x8f36461}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vh9h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vh9h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vh9h8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8f364d0} {node.kubernetes.io/unreachable Exists  NoExecute 0x8f364f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-19 03:20:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 03:20:19.634: INFO: Pod "nginx-deployment-7b8c6f4498-cx6k8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cx6k8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2235,SelfLink:/api/v1/namespaces/deployment-2235/pods/nginx-deployment-7b8c6f4498-cx6k8,UID:8877f1dc-3ad4-48e9-a81d-9064bcfecad8,ResourceVersion:972442,Generation:0,CreationTimestamp:2020-08-19 03:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d3a14e4c-459a-4e0e-a121-f1bc1f079aca 0x8f365b0 0x8f365b1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vh9h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vh9h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vh9h8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8f36620} {node.kubernetes.io/unreachable Exists  NoExecute 0x8f36640}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-19 03:20:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 03:20:19.635: INFO: Pod "nginx-deployment-7b8c6f4498-dfnpz" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dfnpz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2235,SelfLink:/api/v1/namespaces/deployment-2235/pods/nginx-deployment-7b8c6f4498-dfnpz,UID:185968bc-f73c-4c17-b75c-ad76d0ad6c60,ResourceVersion:972252,Generation:0,CreationTimestamp:2020-08-19 03:20:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d3a14e4c-459a-4e0e-a121-f1bc1f079aca 0x8f36700 0x8f36701}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vh9h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vh9h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vh9h8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8f36770} {node.kubernetes.io/unreachable Exists  NoExecute 0x8f36790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:00 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:07 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:00 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.37,StartTime:2020-08-19 03:20:00 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-19 03:20:06 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://9392d155f249e2ebfd7316e85c8425170081d2581ede5d6f44b479494db9f39e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 03:20:19.636: INFO: Pod "nginx-deployment-7b8c6f4498-fs8mv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fs8mv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2235,SelfLink:/api/v1/namespaces/deployment-2235/pods/nginx-deployment-7b8c6f4498-fs8mv,UID:9fba58e6-824f-4151-a2ef-126758489a5c,ResourceVersion:972429,Generation:0,CreationTimestamp:2020-08-19 03:20:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d3a14e4c-459a-4e0e-a121-f1bc1f079aca 0x8f36860 0x8f36861}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vh9h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vh9h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vh9h8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8f368d0} {node.kubernetes.io/unreachable Exists  NoExecute 0x8f368f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-19 03:20:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 03:20:19.637: INFO: Pod "nginx-deployment-7b8c6f4498-g556n" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-g556n,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2235,SelfLink:/api/v1/namespaces/deployment-2235/pods/nginx-deployment-7b8c6f4498-g556n,UID:1165a143-679d-4295-a932-b6aaa03b6cc2,ResourceVersion:972449,Generation:0,CreationTimestamp:2020-08-19 03:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d3a14e4c-459a-4e0e-a121-f1bc1f079aca 0x8f369b0 0x8f369b1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vh9h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vh9h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vh9h8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8f36a20} {node.kubernetes.io/unreachable Exists  NoExecute 0x8f36a40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-19 03:20:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 03:20:19.638: INFO: Pod "nginx-deployment-7b8c6f4498-l7x8z" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-l7x8z,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2235,SelfLink:/api/v1/namespaces/deployment-2235/pods/nginx-deployment-7b8c6f4498-l7x8z,UID:5f5e0bc4-ac39-4f1b-9fc9-d6b6b8eb16e3,ResourceVersion:972488,Generation:0,CreationTimestamp:2020-08-19 03:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d3a14e4c-459a-4e0e-a121-f1bc1f079aca 0x8f36b00 0x8f36b01}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vh9h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vh9h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vh9h8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8f36b70} {node.kubernetes.io/unreachable Exists  NoExecute 0x8f36b90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-19 03:20:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 03:20:19.639: INFO: Pod "nginx-deployment-7b8c6f4498-lqx29" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lqx29,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2235,SelfLink:/api/v1/namespaces/deployment-2235/pods/nginx-deployment-7b8c6f4498-lqx29,UID:051bbe5a-656b-4dab-8b73-068066757930,ResourceVersion:972263,Generation:0,CreationTimestamp:2020-08-19 03:20:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d3a14e4c-459a-4e0e-a121-f1bc1f079aca 0x8f36c50 0x8f36c51}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vh9h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vh9h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vh9h8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8f36cc0} {node.kubernetes.io/unreachable Exists  NoExecute 0x8f36ce0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:00 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:10 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:10 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:00 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.110,StartTime:2020-08-19 03:20:00 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-19 03:20:08 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://535e8d230e5a54b270ae3e7d3c45acc0f47be4c29ca702558fc8cac0cb532451}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 03:20:19.640: INFO: Pod "nginx-deployment-7b8c6f4498-nf5rt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nf5rt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2235,SelfLink:/api/v1/namespaces/deployment-2235/pods/nginx-deployment-7b8c6f4498-nf5rt,UID:1eb04861-d31b-4697-8e67-2f647399bd21,ResourceVersion:972464,Generation:0,CreationTimestamp:2020-08-19 03:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d3a14e4c-459a-4e0e-a121-f1bc1f079aca 0x8f36db0 0x8f36db1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vh9h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vh9h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vh9h8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8f36e20} {node.kubernetes.io/unreachable Exists  NoExecute 0x8f36e40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-19 03:20:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 03:20:19.642: INFO: Pod "nginx-deployment-7b8c6f4498-qdnmw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qdnmw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2235,SelfLink:/api/v1/namespaces/deployment-2235/pods/nginx-deployment-7b8c6f4498-qdnmw,UID:c3c0e3fc-134b-4759-85ac-b62f7330de48,ResourceVersion:972437,Generation:0,CreationTimestamp:2020-08-19 03:20:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d3a14e4c-459a-4e0e-a121-f1bc1f079aca 0x8f36f00 0x8f36f01}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vh9h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vh9h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vh9h8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8f36f70} {node.kubernetes.io/unreachable Exists  NoExecute 0x8f36f90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-19 03:20:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 03:20:19.643: INFO: Pod "nginx-deployment-7b8c6f4498-qzfv2" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qzfv2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2235,SelfLink:/api/v1/namespaces/deployment-2235/pods/nginx-deployment-7b8c6f4498-qzfv2,UID:b73a3ea4-e2fc-4315-b80c-68d3929c7787,ResourceVersion:972295,Generation:0,CreationTimestamp:2020-08-19 03:20:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d3a14e4c-459a-4e0e-a121-f1bc1f079aca 0x8f37060 0x8f37061}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vh9h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vh9h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vh9h8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8f370d0} {node.kubernetes.io/unreachable Exists  NoExecute 0x8f370f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:00 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:00 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.40,StartTime:2020-08-19 03:20:00 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-19 03:20:11 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://d60ea3f38e4b1acb3d05a1f53e5ab9388e0d3866601c005f0856648329704d23}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 03:20:19.644: INFO: Pod "nginx-deployment-7b8c6f4498-sv2hr" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sv2hr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2235,SelfLink:/api/v1/namespaces/deployment-2235/pods/nginx-deployment-7b8c6f4498-sv2hr,UID:365afb1f-5a57-43ce-a516-b76adc1a8cc1,ResourceVersion:972281,Generation:0,CreationTimestamp:2020-08-19 03:20:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d3a14e4c-459a-4e0e-a121-f1bc1f079aca 0x8f371c0 0x8f371c1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vh9h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vh9h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vh9h8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8f37230} {node.kubernetes.io/unreachable Exists  NoExecute 0x8f37250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:00 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:00 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.111,StartTime:2020-08-19 03:20:00 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-19 03:20:11 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://af4ec9e5ce2f89b20a3af5af0ac96b59b6352329cf614a5b894cefc0d29a729b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 03:20:19.645: INFO: Pod "nginx-deployment-7b8c6f4498-t29vb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-t29vb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2235,SelfLink:/api/v1/namespaces/deployment-2235/pods/nginx-deployment-7b8c6f4498-t29vb,UID:b21604d8-ada6-48bb-9cec-6e67adc3791e,ResourceVersion:972425,Generation:0,CreationTimestamp:2020-08-19 03:20:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d3a14e4c-459a-4e0e-a121-f1bc1f079aca 0x8f37320 0x8f37321}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vh9h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vh9h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vh9h8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8f37390} {node.kubernetes.io/unreachable Exists  NoExecute 0x8f373b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:16 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-19 03:20:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 03:20:19.646: INFO: Pod "nginx-deployment-7b8c6f4498-v2t9x" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-v2t9x,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2235,SelfLink:/api/v1/namespaces/deployment-2235/pods/nginx-deployment-7b8c6f4498-v2t9x,UID:4ec7c102-71d4-4386-8084-47d1a1e16a9e,ResourceVersion:972484,Generation:0,CreationTimestamp:2020-08-19 03:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d3a14e4c-459a-4e0e-a121-f1bc1f079aca 0x8f37470 0x8f37471}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vh9h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vh9h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vh9h8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8f374e0} {node.kubernetes.io/unreachable Exists  NoExecute 0x8f37500}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-19 03:20:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 03:20:19.647: INFO: Pod "nginx-deployment-7b8c6f4498-zmvdk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zmvdk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2235,SelfLink:/api/v1/namespaces/deployment-2235/pods/nginx-deployment-7b8c6f4498-zmvdk,UID:ba2daa80-dc01-4a6e-95e5-db0be2aa556e,ResourceVersion:972466,Generation:0,CreationTimestamp:2020-08-19 03:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d3a14e4c-459a-4e0e-a121-f1bc1f079aca 0x8f375c0 0x8f375c1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vh9h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vh9h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vh9h8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8f37630} {node.kubernetes.io/unreachable Exists  NoExecute 0x8f37650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 03:20:17 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-19 03:20:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
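The "is available" / "is not available" labels in the dumps above track the pod Ready condition (Deployment availability also factors in minReadySeconds, whose setting this log does not show). Below is a minimal sketch of the same classification, written against the client-go release contemporary with this v1.15 suite; newer client-go versions add a context.Context first argument to List. The kubeconfig path, namespace, and label selector are the ones from this run.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True, the core
// signal behind the "is available" lines above.
func isReady(pod corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// pod-template-hash ties each pod to the nginx-deployment-7b8c6f4498 ReplicaSet.
	pods, err := clientset.CoreV1().Pods("deployment-2235").List(metav1.ListOptions{
		LabelSelector: "name=nginx,pod-template-hash=7b8c6f4498",
	})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		state := "available"
		if !isReady(pod) {
			state = "not available"
		}
		fmt.Printf("Pod %q is %s (phase %s)\n", pod.Name, state, pod.Status.Phase)
	}
}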
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:20:19.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2235" for this suite.
Aug 19 03:20:44.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:20:44.472: INFO: namespace deployment-2235 deletion completed in 24.496094271s

• [SLOW TEST:43.973 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
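For context, "proportional scaling" means that when a Deployment is scaled while a rollout is in progress, the deployment controller splits the replica delta across the old and new ReplicaSets in proportion to their current sizes, within the rolling-update budget. A minimal hand-run sketch of the same behaviour (the nginx:1.14-alpine image and name=nginx label match this run; the replica counts, the 1.15-alpine tag, and the maxSurge/maxUnavailable values are illustrative):

# Deployment with an explicit rolling-update budget.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 10
  strategy:
    rollingUpdate:
      maxSurge: 3        # at most 3 pods above the desired count
      maxUnavailable: 2  # at most 2 pods below it
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF

# Start a rollout, then scale while it is in flight; the extra replicas
# are distributed across both ReplicaSets in proportion to their sizes.
kubectl set image deployment/nginx-deployment nginx=docker.io/library/nginx:1.15-alpine
kubectl scale deployment/nginx-deployment --replicas=30
kubectl get rs -l name=nginx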
------------------------------
SSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:20:44.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6578.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6578.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6578.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6578.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6578.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6578.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 19 03:20:54.886: INFO: DNS probes using dns-6578/dns-test-cba6367d-6830-49bc-9504-ab66925460fe succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:20:54.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6578" for this suite.
Aug 19 03:21:00.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:21:01.126: INFO: namespace dns-6578 deletion completed in 6.19034979s

• [SLOW TEST:16.653 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
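The probe pods above run getent/dig in a loop and write OK marker files under /results, which the framework then collects. The two lookups the test asserts on can be reproduced by hand against the probe pod from this run (a sketch; add -c <container> if the pod's default container lacks getent):

# /etc/hosts entries injected by the kubelet:
kubectl exec dns-test-cba6367d-6830-49bc-9504-ab66925460fe --namespace=dns-6578 -- cat /etc/hosts

# Short-name and FQDN lookups:
kubectl exec dns-test-cba6367d-6830-49bc-9504-ab66925460fe --namespace=dns-6578 -- getent hosts dns-querier-1
kubectl exec dns-test-cba6367d-6830-49bc-9504-ab66925460fe --namespace=dns-6578 -- getent hosts dns-querier-1.dns-test-service.dns-6578.svc.cluster.local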
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:21:01.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Aug 19 03:21:01.211: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Aug 19 03:21:01.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6784'
Aug 19 03:21:02.755: INFO: stderr: ""
Aug 19 03:21:02.755: INFO: stdout: "service/redis-slave created\n"
Aug 19 03:21:02.757: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Aug 19 03:21:02.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6784'
Aug 19 03:21:04.267: INFO: stderr: ""
Aug 19 03:21:04.267: INFO: stdout: "service/redis-master created\n"
Aug 19 03:21:04.269: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Aug 19 03:21:04.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6784'
Aug 19 03:21:05.841: INFO: stderr: ""
Aug 19 03:21:05.841: INFO: stdout: "service/frontend created\n"
Aug 19 03:21:05.843: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a DNS service, use
          # environment variables to find the service host info instead:
          # comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Aug 19 03:21:05.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6784'
Aug 19 03:21:07.382: INFO: stderr: ""
Aug 19 03:21:07.382: INFO: stdout: "deployment.apps/frontend created\n"
Aug 19 03:21:07.384: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 19 03:21:07.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6784'
Aug 19 03:21:09.006: INFO: stderr: ""
Aug 19 03:21:09.006: INFO: stdout: "deployment.apps/redis-master created\n"
Aug 19 03:21:09.007: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a DNS service, use an
          # environment variable to find the master service's host instead:
          # comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 6379

Aug 19 03:21:09.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6784'
Aug 19 03:21:11.241: INFO: stderr: ""
Aug 19 03:21:11.241: INFO: stdout: "deployment.apps/redis-slave created\n"
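Each of the six manifests above is piped to kubectl create on stdin. For reference, the same pattern by hand, using the frontend Service exactly as logged (the 8080 local port for the smoke test is arbitrary):

kubectl create -f - --namespace=kubectl-6784 <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
EOF

# Once the frontend pods are Running, port-forward and poke the app:
kubectl get pods -l app=guestbook,tier=frontend --namespace=kubectl-6784
kubectl port-forward service/frontend 8080:80 --namespace=kubectl-6784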
STEP: validating guestbook app
Aug 19 03:21:11.241: INFO: Waiting for all frontend pods to be Running.
Aug 19 03:21:16.293: INFO: Waiting for frontend to serve content.
Aug 19 03:21:17.364: INFO: Trying to add a new entry to the guestbook.
Aug 19 03:21:17.422: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Aug 19 03:21:17.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6784'
Aug 19 03:21:18.630: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 19 03:21:18.630: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Aug 19 03:21:18.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6784'
Aug 19 03:21:19.819: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 19 03:21:19.820: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 19 03:21:19.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6784'
Aug 19 03:21:21.253: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 19 03:21:21.254: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 19 03:21:21.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6784'
Aug 19 03:21:22.403: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 19 03:21:22.404: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 19 03:21:22.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6784'
Aug 19 03:21:23.982: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 19 03:21:23.982: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 19 03:21:23.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6784'
Aug 19 03:21:25.146: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 19 03:21:25.146: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:21:25.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6784" for this suite.
Aug 19 03:22:05.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:22:05.516: INFO: namespace kubectl-6784 deletion completed in 40.360963525s

• [SLOW TEST:64.388 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:22:05.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6020.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6020.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6020.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6020.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6020.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6020.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6020.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6020.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6020.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6020.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6020.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 146.229.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.229.146_udp@PTR;check="$$(dig +tcp +noall +answer +search 146.229.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.229.146_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6020.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6020.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6020.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6020.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6020.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6020.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6020.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6020.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6020.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6020.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6020.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 146.229.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.229.146_udp@PTR;check="$$(dig +tcp +noall +answer +search 146.229.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.229.146_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 19 03:22:13.851: INFO: Unable to read wheezy_udp@dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:13.856: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:13.865: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:13.870: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:13.899: INFO: Unable to read jessie_udp@dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:13.904: INFO: Unable to read jessie_tcp@dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:13.909: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:13.913: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:13.957: INFO: Lookups using dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2 failed for: [wheezy_udp@dns-test-service.dns-6020.svc.cluster.local wheezy_tcp@dns-test-service.dns-6020.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local jessie_udp@dns-test-service.dns-6020.svc.cluster.local jessie_tcp@dns-test-service.dns-6020.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local]

Aug 19 03:22:18.965: INFO: Unable to read wheezy_udp@dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:18.970: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:18.975: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:18.979: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:19.005: INFO: Unable to read jessie_udp@dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:19.009: INFO: Unable to read jessie_tcp@dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:19.012: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:19.016: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:19.042: INFO: Lookups using dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2 failed for: [wheezy_udp@dns-test-service.dns-6020.svc.cluster.local wheezy_tcp@dns-test-service.dns-6020.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local jessie_udp@dns-test-service.dns-6020.svc.cluster.local jessie_tcp@dns-test-service.dns-6020.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local]

Aug 19 03:22:23.965: INFO: Unable to read wheezy_udp@dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:23.970: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:23.975: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:23.980: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:24.029: INFO: Unable to read jessie_udp@dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:24.033: INFO: Unable to read jessie_tcp@dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:24.038: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:24.042: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:24.072: INFO: Lookups using dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2 failed for: [wheezy_udp@dns-test-service.dns-6020.svc.cluster.local wheezy_tcp@dns-test-service.dns-6020.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local jessie_udp@dns-test-service.dns-6020.svc.cluster.local jessie_tcp@dns-test-service.dns-6020.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local]

Aug 19 03:22:28.965: INFO: Unable to read wheezy_udp@dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:28.972: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:28.977: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:28.981: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:29.011: INFO: Unable to read jessie_udp@dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:29.016: INFO: Unable to read jessie_tcp@dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:29.020: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:29.024: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:29.051: INFO: Lookups using dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2 failed for: [wheezy_udp@dns-test-service.dns-6020.svc.cluster.local wheezy_tcp@dns-test-service.dns-6020.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local jessie_udp@dns-test-service.dns-6020.svc.cluster.local jessie_tcp@dns-test-service.dns-6020.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local]

Aug 19 03:22:33.964: INFO: Unable to read wheezy_udp@dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:33.969: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:33.973: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:33.978: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:34.089: INFO: Unable to read jessie_udp@dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:34.093: INFO: Unable to read jessie_tcp@dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:34.097: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:34.101: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:34.124: INFO: Lookups using dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2 failed for: [wheezy_udp@dns-test-service.dns-6020.svc.cluster.local wheezy_tcp@dns-test-service.dns-6020.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local jessie_udp@dns-test-service.dns-6020.svc.cluster.local jessie_tcp@dns-test-service.dns-6020.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local]

Aug 19 03:22:38.963: INFO: Unable to read wheezy_udp@dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:38.967: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:38.970: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:38.974: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:38.996: INFO: Unable to read jessie_udp@dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:38.999: INFO: Unable to read jessie_tcp@dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:39.002: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:39.005: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local from pod dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2: the server could not find the requested resource (get pods dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2)
Aug 19 03:22:39.026: INFO: Lookups using dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2 failed for: [wheezy_udp@dns-test-service.dns-6020.svc.cluster.local wheezy_tcp@dns-test-service.dns-6020.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local jessie_udp@dns-test-service.dns-6020.svc.cluster.local jessie_tcp@dns-test-service.dns-6020.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6020.svc.cluster.local]

Aug 19 03:22:44.107: INFO: DNS probes using dns-6020/dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:22:44.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6020" for this suite.
Aug 19 03:22:50.859: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:22:50.971: INFO: namespace dns-6020 deletion completed in 6.128879271s

• [SLOW TEST:45.453 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
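A note on the repeated "Unable to read ... the server could not find the requested resource" lines above: the framework fetches the probe pod's /results marker files through the API server's pod proxy, so these are most likely 404s for files the prober has not written yet, not cluster-DNS failures; all lookups settle by 03:22:44. One of the underlying queries, runnable by hand from the probe pod (add -c <container> to pick a dig-capable container if needed):

kubectl exec dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2 --namespace=dns-6020 -- \
  dig +notcp +noall +answer +search dns-test-service.dns-6020.svc.cluster.local A
kubectl exec dns-test-70f77029-c5b4-45c5-aa7d-f53b38ec43b2 --namespace=dns-6020 -- \
  dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6020.svc.cluster.local SRV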
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:22:50.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 19 03:22:59.094: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 19 03:22:59.101: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 19 03:23:01.102: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 19 03:23:01.114: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 19 03:23:03.102: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 19 03:23:03.110: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 19 03:23:05.102: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 19 03:23:05.110: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 19 03:23:07.102: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 19 03:23:07.110: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 19 03:23:09.102: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 19 03:23:09.110: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 19 03:23:11.102: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 19 03:23:11.110: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 19 03:23:13.102: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 19 03:23:13.108: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 19 03:23:15.102: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 19 03:23:15.110: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 19 03:23:17.102: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 19 03:23:17.110: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 19 03:23:19.102: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 19 03:23:19.109: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 19 03:23:21.102: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 19 03:23:21.110: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 19 03:23:23.102: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 19 03:23:23.110: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 19 03:23:25.102: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 19 03:23:25.110: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:23:25.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9111" for this suite.
Aug 19 03:23:47.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:23:47.257: INFO: namespace container-lifecycle-hook-9111 deletion completed in 22.138923881s

• [SLOW TEST:56.283 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
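The pod under test pairs a postStart exec hook with a helper pod that receives the hook traffic. A minimal standalone pod with a postStart exec hook looks like the sketch below; the image and hook command are illustrative, not taken from this run:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      postStart:
        exec:
          # Runs in the container right after it starts; the container
          # does not reach Running until the handler returns.
          command: ["/bin/sh", "-c", "echo started > /tmp/poststart"]
EOF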
------------------------------
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:23:47.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-667c9ca6-0b65-4430-b5ab-93c8de30687a
STEP: Creating a pod to test consume secrets
Aug 19 03:23:47.492: INFO: Waiting up to 5m0s for pod "pod-secrets-23ec0c23-bf52-47f8-9752-cdb302812a58" in namespace "secrets-8263" to be "success or failure"
Aug 19 03:23:47.501: INFO: Pod "pod-secrets-23ec0c23-bf52-47f8-9752-cdb302812a58": Phase="Pending", Reason="", readiness=false. Elapsed: 9.33051ms
Aug 19 03:23:49.523: INFO: Pod "pod-secrets-23ec0c23-bf52-47f8-9752-cdb302812a58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031367858s
Aug 19 03:23:51.530: INFO: Pod "pod-secrets-23ec0c23-bf52-47f8-9752-cdb302812a58": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03749331s
Aug 19 03:23:53.537: INFO: Pod "pod-secrets-23ec0c23-bf52-47f8-9752-cdb302812a58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.045242972s
STEP: Saw pod success
Aug 19 03:23:53.538: INFO: Pod "pod-secrets-23ec0c23-bf52-47f8-9752-cdb302812a58" satisfied condition "success or failure"
Aug 19 03:23:53.541: INFO: Trying to get logs from node iruya-worker pod pod-secrets-23ec0c23-bf52-47f8-9752-cdb302812a58 container secret-volume-test: 
STEP: delete the pod
Aug 19 03:23:53.735: INFO: Waiting for pod pod-secrets-23ec0c23-bf52-47f8-9752-cdb302812a58 to disappear
Aug 19 03:23:53.814: INFO: Pod pod-secrets-23ec0c23-bf52-47f8-9752-cdb302812a58 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:23:53.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8263" for this suite.
Aug 19 03:23:59.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:24:00.047: INFO: namespace secrets-8263 deletion completed in 6.223612974s
STEP: Destroying namespace "secret-namespace-9917" for this suite.
Aug 19 03:24:06.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:24:06.197: INFO: namespace secret-namespace-9917 deletion completed in 6.149882448s

• [SLOW TEST:18.940 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
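The point of this test is that a Secret volume resolves its secretName in the pod's own namespace, even when another namespace (secret-namespace-9917 here) holds a Secret with the same name. A minimal sketch of the same wiring; the secret name, key, and image are illustrative:

kubectl create secret generic secret-test --from-literal=data-1=value-1 --namespace=secrets-8263
kubectl create -f - --namespace=secrets-8263 <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test   # looked up in the pod's own namespace
EOF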
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:24:06.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Aug 19 03:24:06.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3421'
Aug 19 03:24:07.740: INFO: stderr: ""
Aug 19 03:24:07.741: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in pods matching name=update-demo to come up.
Aug 19 03:24:07.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3421'
Aug 19 03:24:08.870: INFO: stderr: ""
Aug 19 03:24:08.870: INFO: stdout: "update-demo-nautilus-5gww2 update-demo-nautilus-vffgm "
Aug 19 03:24:08.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5gww2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3421'
Aug 19 03:24:10.151: INFO: stderr: ""
Aug 19 03:24:10.151: INFO: stdout: ""
Aug 19 03:24:10.151: INFO: update-demo-nautilus-5gww2 is created but not running
Aug 19 03:24:15.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3421'
Aug 19 03:24:16.237: INFO: stderr: ""
Aug 19 03:24:16.237: INFO: stdout: "update-demo-nautilus-5gww2 update-demo-nautilus-vffgm "
Aug 19 03:24:16.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5gww2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3421'
Aug 19 03:24:17.357: INFO: stderr: ""
Aug 19 03:24:17.357: INFO: stdout: "true"
Aug 19 03:24:17.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5gww2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3421'
Aug 19 03:24:18.457: INFO: stderr: ""
Aug 19 03:24:18.457: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 19 03:24:18.457: INFO: validating pod update-demo-nautilus-5gww2
Aug 19 03:24:18.463: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 19 03:24:18.464: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 19 03:24:18.464: INFO: update-demo-nautilus-5gww2 is verified up and running
Aug 19 03:24:18.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vffgm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3421'
Aug 19 03:24:19.512: INFO: stderr: ""
Aug 19 03:24:19.513: INFO: stdout: "true"
Aug 19 03:24:19.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vffgm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3421'
Aug 19 03:24:20.587: INFO: stderr: ""
Aug 19 03:24:20.587: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 19 03:24:20.587: INFO: validating pod update-demo-nautilus-vffgm
Aug 19 03:24:20.593: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 19 03:24:20.593: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 19 03:24:20.593: INFO: update-demo-nautilus-vffgm is verified up and running
STEP: scaling down the replication controller
Aug 19 03:24:20.600: INFO: scanned /root for discovery docs: 
Aug 19 03:24:20.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-3421'
Aug 19 03:24:22.867: INFO: stderr: ""
Aug 19 03:24:22.867: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in pods matching name=update-demo to come up.
Aug 19 03:24:22.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3421'
Aug 19 03:24:24.017: INFO: stderr: ""
Aug 19 03:24:24.017: INFO: stdout: "update-demo-nautilus-5gww2 update-demo-nautilus-vffgm "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 19 03:24:29.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3421'
Aug 19 03:24:30.184: INFO: stderr: ""
Aug 19 03:24:30.184: INFO: stdout: "update-demo-nautilus-5gww2 update-demo-nautilus-vffgm "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 19 03:24:35.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3421'
Aug 19 03:24:36.302: INFO: stderr: ""
Aug 19 03:24:36.302: INFO: stdout: "update-demo-nautilus-vffgm "
Aug 19 03:24:36.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vffgm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3421'
Aug 19 03:24:37.365: INFO: stderr: ""
Aug 19 03:24:37.365: INFO: stdout: "true"
Aug 19 03:24:37.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vffgm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3421'
Aug 19 03:24:38.517: INFO: stderr: ""
Aug 19 03:24:38.517: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 19 03:24:38.517: INFO: validating pod update-demo-nautilus-vffgm
Aug 19 03:24:38.521: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 19 03:24:38.521: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 19 03:24:38.522: INFO: update-demo-nautilus-vffgm is verified up and running
STEP: scaling up the replication controller
Aug 19 03:24:38.529: INFO: scanned /root for discovery docs: 
Aug 19 03:24:38.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-3421'
Aug 19 03:24:39.744: INFO: stderr: ""
Aug 19 03:24:39.745: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in pods matching name=update-demo to come up.
Aug 19 03:24:39.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3421'
Aug 19 03:24:40.865: INFO: stderr: ""
Aug 19 03:24:40.865: INFO: stdout: "update-demo-nautilus-vffgm update-demo-nautilus-zm8wf "
Aug 19 03:24:40.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vffgm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3421'
Aug 19 03:24:42.010: INFO: stderr: ""
Aug 19 03:24:42.010: INFO: stdout: "true"
Aug 19 03:24:42.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vffgm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3421'
Aug 19 03:24:43.092: INFO: stderr: ""
Aug 19 03:24:43.092: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 19 03:24:43.092: INFO: validating pod update-demo-nautilus-vffgm
Aug 19 03:24:43.097: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 19 03:24:43.097: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 19 03:24:43.097: INFO: update-demo-nautilus-vffgm is verified up and running
Aug 19 03:24:43.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zm8wf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3421'
Aug 19 03:24:44.193: INFO: stderr: ""
Aug 19 03:24:44.193: INFO: stdout: "true"
Aug 19 03:24:44.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zm8wf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3421'
Aug 19 03:24:45.286: INFO: stderr: ""
Aug 19 03:24:45.286: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 19 03:24:45.287: INFO: validating pod update-demo-nautilus-zm8wf
Aug 19 03:24:45.292: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 19 03:24:45.293: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 19 03:24:45.293: INFO: update-demo-nautilus-zm8wf is verified up and running
STEP: using delete to clean up resources
Aug 19 03:24:45.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3421'
Aug 19 03:24:46.346: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 19 03:24:46.347: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 19 03:24:46.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3421'
Aug 19 03:24:47.533: INFO: stderr: "No resources found.\n"
Aug 19 03:24:47.533: INFO: stdout: ""
Aug 19 03:24:47.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3421 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 19 03:24:48.746: INFO: stderr: ""
Aug 19 03:24:48.746: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:24:48.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3421" for this suite.
Aug 19 03:24:57.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:24:57.185: INFO: namespace kubectl-3421 deletion completed in 8.42906464s

• [SLOW TEST:50.987 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
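
The scale-up exercised above reduces to two kubectl invocations: one to resize the replication controller, one to list the pods its selector matches. Both are taken from the commands logged in this spec, with the --kubeconfig flag dropped:

kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-3421
kubectl get pods -l name=update-demo --namespace=kubectl-3421 \
  -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
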
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:24:57.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 19 03:24:57.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-6104'
Aug 19 03:24:58.478: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 19 03:24:58.478: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Aug 19 03:24:58.935: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-jsc2n]
Aug 19 03:24:58.935: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-jsc2n" in namespace "kubectl-6104" to be "running and ready"
Aug 19 03:24:59.032: INFO: Pod "e2e-test-nginx-rc-jsc2n": Phase="Pending", Reason="", readiness=false. Elapsed: 96.443575ms
Aug 19 03:25:01.050: INFO: Pod "e2e-test-nginx-rc-jsc2n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114763906s
Aug 19 03:25:03.056: INFO: Pod "e2e-test-nginx-rc-jsc2n": Phase="Running", Reason="", readiness=true. Elapsed: 4.120658531s
Aug 19 03:25:03.056: INFO: Pod "e2e-test-nginx-rc-jsc2n" satisfied condition "running and ready"
Aug 19 03:25:03.056: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-jsc2n]
Aug 19 03:25:03.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-6104'
Aug 19 03:25:04.282: INFO: stderr: ""
Aug 19 03:25:04.282: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Aug 19 03:25:04.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-6104'
Aug 19 03:25:05.681: INFO: stderr: ""
Aug 19 03:25:05.681: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:25:05.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6104" for this suite.
Aug 19 03:25:17.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:25:18.004: INFO: namespace kubectl-6104 deletion completed in 12.312735679s

• [SLOW TEST:20.816 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
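
The stderr warning above is worth acting on: the run/v1 generator that produces a replication controller is deprecated. A sketch of the call this spec makes, next to the replacements the warning itself suggests ("my-pod" and "my-nginx" are placeholder names):

# Deprecated form used by this spec (creates a replication controller):
kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine \
  --generator=run/v1 --namespace=kubectl-6104
# Replacements per the warning (a bare pod, or a Deployment via kubectl create):
kubectl run my-pod --image=docker.io/library/nginx:1.14-alpine --generator=run-pod/v1
kubectl create deployment my-nginx --image=docker.io/library/nginx:1.14-alpine
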
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:25:18.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 19 03:25:18.122: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0f6e9d2e-66e1-4b96-badf-043e09ac0c74" in namespace "downward-api-971" to be "success or failure"
Aug 19 03:25:18.133: INFO: Pod "downwardapi-volume-0f6e9d2e-66e1-4b96-badf-043e09ac0c74": Phase="Pending", Reason="", readiness=false. Elapsed: 10.302279ms
Aug 19 03:25:20.139: INFO: Pod "downwardapi-volume-0f6e9d2e-66e1-4b96-badf-043e09ac0c74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016939956s
Aug 19 03:25:22.147: INFO: Pod "downwardapi-volume-0f6e9d2e-66e1-4b96-badf-043e09ac0c74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024179271s
STEP: Saw pod success
Aug 19 03:25:22.147: INFO: Pod "downwardapi-volume-0f6e9d2e-66e1-4b96-badf-043e09ac0c74" satisfied condition "success or failure"
Aug 19 03:25:22.152: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-0f6e9d2e-66e1-4b96-badf-043e09ac0c74 container client-container: 
STEP: delete the pod
Aug 19 03:25:22.208: INFO: Waiting for pod downwardapi-volume-0f6e9d2e-66e1-4b96-badf-043e09ac0c74 to disappear
Aug 19 03:25:22.211: INFO: Pod downwardapi-volume-0f6e9d2e-66e1-4b96-badf-043e09ac0c74 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:25:22.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-971" for this suite.
Aug 19 03:25:28.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:25:28.360: INFO: namespace downward-api-971 deletion completed in 6.1389432s

• [SLOW TEST:10.355 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
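
The volume plugin under test projects the container's own memory limit into a file. The manifest the spec uses is not shown in the log; a minimal sketch of the same idea, with placeholder names and a busybox image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-limits-demo          # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                    # any image with /bin/sh works
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
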
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:25:28.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:25:35.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1618" for this suite.
Aug 19 03:26:15.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:26:16.140: INFO: namespace kubelet-test-1618 deletion completed in 40.173725284s

• [SLOW TEST:47.776 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
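
The log carries no step detail for this spec, but the behavior under test is the container-level readOnlyRootFilesystem flag. A minimal sketch with placeholder names, assuming busybox; the write is expected to fail:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readonly-rootfs-demo          # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo test > /file || echo root filesystem is read-only"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
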
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:26:16.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Aug 19 03:26:16.410: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:26:33.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1933" for this suite.
Aug 19 03:26:39.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:26:39.892: INFO: namespace pods-1933 deletion completed in 6.166827602s

• [SLOW TEST:23.750 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
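
The watch-driven flow above can be approximated by hand: stream the pod list in one terminal, then create and gracefully delete a pod in another ("watched-pod" is a placeholder name):

kubectl get pods --namespace=pods-1933 -w        # terminal 1: observe create/terminate transitions
kubectl run watched-pod --image=busybox --restart=Never --namespace=pods-1933 -- sleep 3600
kubectl delete pod watched-pod --grace-period=30 --namespace=pods-1933
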
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:26:39.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-f371e156-9e27-4ef7-b0db-e34521266769
STEP: Creating a pod to test consume configMaps
Aug 19 03:26:40.141: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0634540e-fc35-43d6-aef0-b0d331e0cfbe" in namespace "projected-6137" to be "success or failure"
Aug 19 03:26:40.176: INFO: Pod "pod-projected-configmaps-0634540e-fc35-43d6-aef0-b0d331e0cfbe": Phase="Pending", Reason="", readiness=false. Elapsed: 34.891667ms
Aug 19 03:26:42.208: INFO: Pod "pod-projected-configmaps-0634540e-fc35-43d6-aef0-b0d331e0cfbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067259566s
Aug 19 03:26:44.605: INFO: Pod "pod-projected-configmaps-0634540e-fc35-43d6-aef0-b0d331e0cfbe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.463723641s
Aug 19 03:26:46.611: INFO: Pod "pod-projected-configmaps-0634540e-fc35-43d6-aef0-b0d331e0cfbe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.470199932s
STEP: Saw pod success
Aug 19 03:26:46.611: INFO: Pod "pod-projected-configmaps-0634540e-fc35-43d6-aef0-b0d331e0cfbe" satisfied condition "success or failure"
Aug 19 03:26:46.639: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-0634540e-fc35-43d6-aef0-b0d331e0cfbe container projected-configmap-volume-test: 
STEP: delete the pod
Aug 19 03:26:46.834: INFO: Waiting for pod pod-projected-configmaps-0634540e-fc35-43d6-aef0-b0d331e0cfbe to disappear
Aug 19 03:26:46.889: INFO: Pod pod-projected-configmaps-0634540e-fc35-43d6-aef0-b0d331e0cfbe no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:26:46.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6137" for this suite.
Aug 19 03:26:53.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:26:53.630: INFO: namespace projected-6137 deletion completed in 6.730652535s

• [SLOW TEST:13.736 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
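
A projected volume merges several sources into a single mount, which is what the spec above consumes. A minimal sketch with placeholder configMap/secret names; the same sources list also covers the projected-secret spec later in this run:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo                # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "ls -lR /projected"]
    volumeMounts:
    - name: all-in-one
      mountPath: /projected
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: my-config             # placeholder; must exist in the namespace
      - secret:
          name: my-secret             # placeholder; must exist in the namespace
EOF
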
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:26:53.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 19 03:27:00.877: INFO: Successfully updated pod "pod-update-activedeadlineseconds-f9ee8ad3-3b44-4141-a937-f504d6c9f32c"
Aug 19 03:27:00.877: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-f9ee8ad3-3b44-4141-a937-f504d6c9f32c" in namespace "pods-2085" to be "terminated due to deadline exceeded"
Aug 19 03:27:01.317: INFO: Pod "pod-update-activedeadlineseconds-f9ee8ad3-3b44-4141-a937-f504d6c9f32c": Phase="Running", Reason="", readiness=true. Elapsed: 439.537118ms
Aug 19 03:27:03.324: INFO: Pod "pod-update-activedeadlineseconds-f9ee8ad3-3b44-4141-a937-f504d6c9f32c": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.446425905s
Aug 19 03:27:03.325: INFO: Pod "pod-update-activedeadlineseconds-f9ee8ad3-3b44-4141-a937-f504d6c9f32c" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:27:03.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2085" for this suite.
Aug 19 03:27:09.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:27:09.715: INFO: namespace pods-2085 deletion completed in 6.376820868s

• [SLOW TEST:16.084 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
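
spec.activeDeadlineSeconds is one of the few pod-spec fields that may be changed on a live pod, which is what the update step above relies on; once the deadline elapses, the kubelet fails the pod with reason DeadlineExceeded, matching the Phase="Failed" transition logged. A sketch of the patch (the deadline value here is a guess; the log does not show the one used):

kubectl patch pod pod-update-activedeadlineseconds-f9ee8ad3-3b44-4141-a937-f504d6c9f32c \
  --namespace=pods-2085 -p '{"spec":{"activeDeadlineSeconds":5}}'
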
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:27:09.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-0c98de9d-0543-430b-b8be-0a73b886aa7f
STEP: Creating a pod to test consume secrets
Aug 19 03:27:10.278: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9a2067c5-8bac-444a-ba56-5651cfa5d345" in namespace "projected-11" to be "success or failure"
Aug 19 03:27:10.364: INFO: Pod "pod-projected-secrets-9a2067c5-8bac-444a-ba56-5651cfa5d345": Phase="Pending", Reason="", readiness=false. Elapsed: 84.944974ms
Aug 19 03:27:12.371: INFO: Pod "pod-projected-secrets-9a2067c5-8bac-444a-ba56-5651cfa5d345": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092437794s
Aug 19 03:27:14.378: INFO: Pod "pod-projected-secrets-9a2067c5-8bac-444a-ba56-5651cfa5d345": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09911499s
Aug 19 03:27:16.516: INFO: Pod "pod-projected-secrets-9a2067c5-8bac-444a-ba56-5651cfa5d345": Phase="Running", Reason="", readiness=true. Elapsed: 6.237468026s
Aug 19 03:27:18.522: INFO: Pod "pod-projected-secrets-9a2067c5-8bac-444a-ba56-5651cfa5d345": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.24358703s
STEP: Saw pod success
Aug 19 03:27:18.523: INFO: Pod "pod-projected-secrets-9a2067c5-8bac-444a-ba56-5651cfa5d345" satisfied condition "success or failure"
Aug 19 03:27:18.527: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-9a2067c5-8bac-444a-ba56-5651cfa5d345 container projected-secret-volume-test: 
STEP: delete the pod
Aug 19 03:27:18.666: INFO: Waiting for pod pod-projected-secrets-9a2067c5-8bac-444a-ba56-5651cfa5d345 to disappear
Aug 19 03:27:18.705: INFO: Pod pod-projected-secrets-9a2067c5-8bac-444a-ba56-5651cfa5d345 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:27:18.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-11" for this suite.
Aug 19 03:27:26.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:27:27.031: INFO: namespace projected-11 deletion completed in 8.31532004s

• [SLOW TEST:17.314 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:27:27.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 19 03:27:27.371: INFO: Waiting up to 5m0s for pod "pod-4ad94abc-e796-4807-992c-dd841c4341f8" in namespace "emptydir-1603" to be "success or failure"
Aug 19 03:27:27.396: INFO: Pod "pod-4ad94abc-e796-4807-992c-dd841c4341f8": Phase="Pending", Reason="", readiness=false. Elapsed: 25.229477ms
Aug 19 03:27:29.695: INFO: Pod "pod-4ad94abc-e796-4807-992c-dd841c4341f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.323952655s
Aug 19 03:27:31.701: INFO: Pod "pod-4ad94abc-e796-4807-992c-dd841c4341f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329708866s
Aug 19 03:27:33.708: INFO: Pod "pod-4ad94abc-e796-4807-992c-dd841c4341f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.336352958s
Aug 19 03:27:35.714: INFO: Pod "pod-4ad94abc-e796-4807-992c-dd841c4341f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.342711181s
STEP: Saw pod success
Aug 19 03:27:35.714: INFO: Pod "pod-4ad94abc-e796-4807-992c-dd841c4341f8" satisfied condition "success or failure"
Aug 19 03:27:35.737: INFO: Trying to get logs from node iruya-worker2 pod pod-4ad94abc-e796-4807-992c-dd841c4341f8 container test-container: 
STEP: delete the pod
Aug 19 03:27:35.783: INFO: Waiting for pod pod-4ad94abc-e796-4807-992c-dd841c4341f8 to disappear
Aug 19 03:27:35.807: INFO: Pod pod-4ad94abc-e796-4807-992c-dd841c4341f8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:27:35.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1603" for this suite.
Aug 19 03:27:41.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:27:42.074: INFO: namespace emptydir-1603 deletion completed in 6.21707115s

• [SLOW TEST:15.041 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
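
Here "tmpfs" means an emptyDir with medium: Memory, exercised by a non-root user. A minimal sketch with placeholder names and UID; the (non-root,0777,tmpfs) spec later in this run differs only in the file mode:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo           # placeholder name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                   # non-root UID, placeholder value
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hi > /mnt/f && chmod 0644 /mnt/f && ls -l /mnt/f && mount | grep ' /mnt '"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                  # backs the volume with tmpfs
EOF
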
SSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:27:42.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 19 03:27:49.740: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:27:49.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7232" for this suite.
Aug 19 03:27:56.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:27:56.251: INFO: namespace container-runtime-7232 deletion completed in 6.255639975s

• [SLOW TEST:14.174 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
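
The "Expected: &{OK}" line reflects the container writing OK to its termination-message file before exiting. A sketch of the two container fields involved, with a placeholder pod name:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo      # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log      # the default path
    terminationMessagePolicy: FallbackToLogsOnError   # logs are used only if the file is empty and the container failed
EOF
kubectl get pod termination-message-demo \
  -o go-template='{{range .status.containerStatuses}}{{.state.terminated.message}}{{end}}'
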
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:27:56.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-69zv
STEP: Creating a pod to test atomic-volume-subpath
Aug 19 03:27:56.429: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-69zv" in namespace "subpath-6300" to be "success or failure"
Aug 19 03:27:56.440: INFO: Pod "pod-subpath-test-configmap-69zv": Phase="Pending", Reason="", readiness=false. Elapsed: 11.472549ms
Aug 19 03:27:58.504: INFO: Pod "pod-subpath-test-configmap-69zv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075589766s
Aug 19 03:28:00.511: INFO: Pod "pod-subpath-test-configmap-69zv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082162799s
Aug 19 03:28:02.520: INFO: Pod "pod-subpath-test-configmap-69zv": Phase="Running", Reason="", readiness=true. Elapsed: 6.091039472s
Aug 19 03:28:04.527: INFO: Pod "pod-subpath-test-configmap-69zv": Phase="Running", Reason="", readiness=true. Elapsed: 8.098134474s
Aug 19 03:28:06.539: INFO: Pod "pod-subpath-test-configmap-69zv": Phase="Running", Reason="", readiness=true. Elapsed: 10.110376583s
Aug 19 03:28:08.683: INFO: Pod "pod-subpath-test-configmap-69zv": Phase="Running", Reason="", readiness=true. Elapsed: 12.253714366s
Aug 19 03:28:10.689: INFO: Pod "pod-subpath-test-configmap-69zv": Phase="Running", Reason="", readiness=true. Elapsed: 14.260077575s
Aug 19 03:28:12.696: INFO: Pod "pod-subpath-test-configmap-69zv": Phase="Running", Reason="", readiness=true. Elapsed: 16.266822876s
Aug 19 03:28:14.828: INFO: Pod "pod-subpath-test-configmap-69zv": Phase="Running", Reason="", readiness=true. Elapsed: 18.399095498s
Aug 19 03:28:16.869: INFO: Pod "pod-subpath-test-configmap-69zv": Phase="Running", Reason="", readiness=true. Elapsed: 20.440633341s
Aug 19 03:28:19.082: INFO: Pod "pod-subpath-test-configmap-69zv": Phase="Running", Reason="", readiness=true. Elapsed: 22.653511316s
Aug 19 03:28:21.089: INFO: Pod "pod-subpath-test-configmap-69zv": Phase="Running", Reason="", readiness=true. Elapsed: 24.66063793s
Aug 19 03:28:23.096: INFO: Pod "pod-subpath-test-configmap-69zv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.666716571s
STEP: Saw pod success
Aug 19 03:28:23.096: INFO: Pod "pod-subpath-test-configmap-69zv" satisfied condition "success or failure"
Aug 19 03:28:23.099: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-69zv container test-container-subpath-configmap-69zv: 
STEP: delete the pod
Aug 19 03:28:23.138: INFO: Waiting for pod pod-subpath-test-configmap-69zv to disappear
Aug 19 03:28:23.154: INFO: Pod pod-subpath-test-configmap-69zv no longer exists
STEP: Deleting pod pod-subpath-test-configmap-69zv
Aug 19 03:28:23.155: INFO: Deleting pod "pod-subpath-test-configmap-69zv" in namespace "subpath-6300"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:28:23.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6300" for this suite.
Aug 19 03:28:29.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:28:29.854: INFO: namespace subpath-6300 deletion completed in 6.688202512s

• [SLOW TEST:33.602 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
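
The long Running stretch above is the atomic-writer check: the pod keeps serving a configMap key mounted via subPath while the test verifies its content. A minimal sketch of a subPath mount with placeholder names; note that a configMap consumed through subPath does not receive later updates:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo                  # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "cat /etc/config/key"]
    volumeMounts:
    - name: config
      mountPath: /etc/config/key
      subPath: mykey                  # mounts the single key, not the whole volume
  volumes:
  - name: config
    configMap:
      name: my-config                 # placeholder; needs a key named "mykey"
EOF
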
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:28:29.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-8935/configmap-test-66f7bed8-b6cd-43a1-9229-3821f51e0ef0
STEP: Creating a pod to test consume configMaps
Aug 19 03:28:30.496: INFO: Waiting up to 5m0s for pod "pod-configmaps-b638ea16-4454-4929-8a06-ede63b882a3c" in namespace "configmap-8935" to be "success or failure"
Aug 19 03:28:30.570: INFO: Pod "pod-configmaps-b638ea16-4454-4929-8a06-ede63b882a3c": Phase="Pending", Reason="", readiness=false. Elapsed: 73.818223ms
Aug 19 03:28:32.575: INFO: Pod "pod-configmaps-b638ea16-4454-4929-8a06-ede63b882a3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078659948s
Aug 19 03:28:34.582: INFO: Pod "pod-configmaps-b638ea16-4454-4929-8a06-ede63b882a3c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085053501s
Aug 19 03:28:36.895: INFO: Pod "pod-configmaps-b638ea16-4454-4929-8a06-ede63b882a3c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.398316935s
Aug 19 03:28:38.903: INFO: Pod "pod-configmaps-b638ea16-4454-4929-8a06-ede63b882a3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.406022845s
STEP: Saw pod success
Aug 19 03:28:38.903: INFO: Pod "pod-configmaps-b638ea16-4454-4929-8a06-ede63b882a3c" satisfied condition "success or failure"
Aug 19 03:28:38.929: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-b638ea16-4454-4929-8a06-ede63b882a3c container env-test: 
STEP: delete the pod
Aug 19 03:28:38.964: INFO: Waiting for pod pod-configmaps-b638ea16-4454-4929-8a06-ede63b882a3c to disappear
Aug 19 03:28:38.973: INFO: Pod pod-configmaps-b638ea16-4454-4929-8a06-ede63b882a3c no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:28:38.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8935" for this suite.
Aug 19 03:28:45.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:28:45.213: INFO: namespace configmap-8935 deletion completed in 6.230308107s

• [SLOW TEST:15.357 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
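
Consuming a configMap key as an environment variable takes a single valueFrom stanza. A sketch with placeholder names echoing the configmap-test naming above:

kubectl create configmap configmap-test --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo            # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo CONFIG_DATA_1=$CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
EOF
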
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:28:45.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-4989097c-e552-4755-988b-6ec825e17eb2
STEP: Creating a pod to test consume secrets
Aug 19 03:28:45.342: INFO: Waiting up to 5m0s for pod "pod-secrets-6c699951-d537-43ec-9954-e645350b42d4" in namespace "secrets-6393" to be "success or failure"
Aug 19 03:28:45.391: INFO: Pod "pod-secrets-6c699951-d537-43ec-9954-e645350b42d4": Phase="Pending", Reason="", readiness=false. Elapsed: 48.664369ms
Aug 19 03:28:47.511: INFO: Pod "pod-secrets-6c699951-d537-43ec-9954-e645350b42d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168573384s
Aug 19 03:28:49.570: INFO: Pod "pod-secrets-6c699951-d537-43ec-9954-e645350b42d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.228335713s
Aug 19 03:28:51.578: INFO: Pod "pod-secrets-6c699951-d537-43ec-9954-e645350b42d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.235668922s
STEP: Saw pod success
Aug 19 03:28:51.578: INFO: Pod "pod-secrets-6c699951-d537-43ec-9954-e645350b42d4" satisfied condition "success or failure"
Aug 19 03:28:51.585: INFO: Trying to get logs from node iruya-worker pod pod-secrets-6c699951-d537-43ec-9954-e645350b42d4 container secret-volume-test: 
STEP: delete the pod
Aug 19 03:28:51.617: INFO: Waiting for pod pod-secrets-6c699951-d537-43ec-9954-e645350b42d4 to disappear
Aug 19 03:28:51.632: INFO: Pod pod-secrets-6c699951-d537-43ec-9954-e645350b42d4 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:28:51.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6393" for this suite.
Aug 19 03:28:59.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:29:00.219: INFO: namespace secrets-6393 deletion completed in 8.57491528s

• [SLOW TEST:15.004 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
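
The plain secret-volume variant of the same pattern, sketched with a placeholder secret name; defaultMode sets the mode bits on the projected files:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo            # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: my-secret           # placeholder; must exist in the namespace
      defaultMode: 0444               # octal mode for the projected keys
EOF
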
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:29:00.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Aug 19 03:29:00.323: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Aug 19 03:29:07.635: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Aug 19 03:29:10.680: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733404547, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733404547, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733404548, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733404547, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 19 03:29:12.687: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733404547, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733404547, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733404548, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733404547, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 19 03:29:14.687: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733404547, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733404547, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733404548, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733404547, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 19 03:29:17.339: INFO: Waited 633.231674ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:29:18.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-6373" for this suite.
Aug 19 03:29:26.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:29:26.965: INFO: namespace aggregator-6373 deletion completed in 8.172215081s

• [SLOW TEST:26.744 seconds]
[sig-api-machinery] Aggregator
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
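
Registering an extension apiserver like the sample above comes down to an APIService object pointing at a Service in front of the deployment. A sketch with placeholder names loosely modeled on the sample-apiserver; a real setup would pin spec.caBundle instead of skipping TLS verification:

kubectl apply -f - <<'EOF'
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.k8s.io        # must be <version>.<group>
spec:
  group: wardle.k8s.io                # placeholder group
  version: v1alpha1
  service:
    name: sample-api                  # placeholder Service fronting the extension apiserver
    namespace: aggregator-6373
  insecureSkipTLSVerify: true         # demo only
  groupPriorityMinimum: 2000
  versionPriority: 200
EOF
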
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:29:26.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 19 03:29:27.051: INFO: Waiting up to 5m0s for pod "pod-8c15d08c-0520-40b9-bc59-7483dedd3af6" in namespace "emptydir-9413" to be "success or failure"
Aug 19 03:29:27.099: INFO: Pod "pod-8c15d08c-0520-40b9-bc59-7483dedd3af6": Phase="Pending", Reason="", readiness=false. Elapsed: 46.750701ms
Aug 19 03:29:29.236: INFO: Pod "pod-8c15d08c-0520-40b9-bc59-7483dedd3af6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.184388151s
Aug 19 03:29:31.243: INFO: Pod "pod-8c15d08c-0520-40b9-bc59-7483dedd3af6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.191338809s
Aug 19 03:29:33.250: INFO: Pod "pod-8c15d08c-0520-40b9-bc59-7483dedd3af6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.198518249s
STEP: Saw pod success
Aug 19 03:29:33.251: INFO: Pod "pod-8c15d08c-0520-40b9-bc59-7483dedd3af6" satisfied condition "success or failure"
Aug 19 03:29:33.259: INFO: Trying to get logs from node iruya-worker2 pod pod-8c15d08c-0520-40b9-bc59-7483dedd3af6 container test-container: 
STEP: delete the pod
Aug 19 03:29:33.291: INFO: Waiting for pod pod-8c15d08c-0520-40b9-bc59-7483dedd3af6 to disappear
Aug 19 03:29:33.595: INFO: Pod pod-8c15d08c-0520-40b9-bc59-7483dedd3af6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:29:33.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9413" for this suite.
Aug 19 03:29:39.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:29:40.051: INFO: namespace emptydir-9413 deletion completed in 6.442605541s

• [SLOW TEST:13.085 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:29:40.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug 19 03:29:40.361: INFO: Waiting up to 5m0s for pod "downward-api-af4ab993-c422-4d3e-a1be-fe8f3e6eeb79" in namespace "downward-api-9581" to be "success or failure"
Aug 19 03:29:40.476: INFO: Pod "downward-api-af4ab993-c422-4d3e-a1be-fe8f3e6eeb79": Phase="Pending", Reason="", readiness=false. Elapsed: 114.42465ms
Aug 19 03:29:42.608: INFO: Pod "downward-api-af4ab993-c422-4d3e-a1be-fe8f3e6eeb79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.246868097s
Aug 19 03:29:44.614: INFO: Pod "downward-api-af4ab993-c422-4d3e-a1be-fe8f3e6eeb79": Phase="Running", Reason="", readiness=true. Elapsed: 4.253097555s
Aug 19 03:29:46.622: INFO: Pod "downward-api-af4ab993-c422-4d3e-a1be-fe8f3e6eeb79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.260590635s
STEP: Saw pod success
Aug 19 03:29:46.622: INFO: Pod "downward-api-af4ab993-c422-4d3e-a1be-fe8f3e6eeb79" satisfied condition "success or failure"
Aug 19 03:29:46.627: INFO: Trying to get logs from node iruya-worker pod downward-api-af4ab993-c422-4d3e-a1be-fe8f3e6eeb79 container dapi-container: 
STEP: delete the pod
Aug 19 03:29:46.668: INFO: Waiting for pod downward-api-af4ab993-c422-4d3e-a1be-fe8f3e6eeb79 to disappear
Aug 19 03:29:46.675: INFO: Pod downward-api-af4ab993-c422-4d3e-a1be-fe8f3e6eeb79 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:29:46.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9581" for this suite.
Aug 19 03:29:52.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:29:52.814: INFO: namespace downward-api-9581 deletion completed in 6.128661098s

• [SLOW TEST:12.762 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
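
Exposing the pod UID as an environment variable is a one-field downward-API reference. A minimal sketch with placeholder names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-uid-demo             # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
EOF
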
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:29:52.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-2803
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Aug 19 03:29:52.903: INFO: Found 0 stateful pods, waiting for 3
Aug 19 03:30:02.910: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 19 03:30:02.910: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 19 03:30:02.910: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 19 03:30:12.910: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 19 03:30:12.910: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 19 03:30:12.910: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Aug 19 03:30:12.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2803 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 19 03:30:18.726: INFO: stderr: "I0819 03:30:18.591902    2899 log.go:172] (0x28dc000) (0x28dc0e0) Create stream\nI0819 03:30:18.594058    2899 log.go:172] (0x28dc000) (0x28dc0e0) Stream added, broadcasting: 1\nI0819 03:30:18.604033    2899 log.go:172] (0x28dc000) Reply frame received for 1\nI0819 03:30:18.605038    2899 log.go:172] (0x28dc000) (0x28dc1c0) Create stream\nI0819 03:30:18.605152    2899 log.go:172] (0x28dc000) (0x28dc1c0) Stream added, broadcasting: 3\nI0819 03:30:18.607011    2899 log.go:172] (0x28dc000) Reply frame received for 3\nI0819 03:30:18.607225    2899 log.go:172] (0x28dc000) (0x28dc230) Create stream\nI0819 03:30:18.607278    2899 log.go:172] (0x28dc000) (0x28dc230) Stream added, broadcasting: 5\nI0819 03:30:18.608357    2899 log.go:172] (0x28dc000) Reply frame received for 5\nI0819 03:30:18.668794    2899 log.go:172] (0x28dc000) Data frame received for 5\nI0819 03:30:18.668938    2899 log.go:172] (0x28dc230) (5) Data frame handling\nI0819 03:30:18.669193    2899 log.go:172] (0x28dc230) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0819 03:30:18.706391    2899 log.go:172] (0x28dc000) Data frame received for 3\nI0819 03:30:18.706553    2899 log.go:172] (0x28dc000) Data frame received for 5\nI0819 03:30:18.706722    2899 log.go:172] (0x28dc230) (5) Data frame handling\nI0819 03:30:18.706986    2899 log.go:172] (0x28dc1c0) (3) Data frame handling\nI0819 03:30:18.707202    2899 log.go:172] (0x28dc1c0) (3) Data frame sent\nI0819 03:30:18.707385    2899 log.go:172] (0x28dc000) Data frame received for 3\nI0819 03:30:18.707532    2899 log.go:172] (0x28dc1c0) (3) Data frame handling\nI0819 03:30:18.707970    2899 log.go:172] (0x28dc000) Data frame received for 1\nI0819 03:30:18.708143    2899 log.go:172] (0x28dc0e0) (1) Data frame handling\nI0819 03:30:18.708355    2899 log.go:172] (0x28dc0e0) (1) Data frame sent\nI0819 03:30:18.709595    2899 log.go:172] (0x28dc000) (0x28dc0e0) Stream removed, broadcasting: 1\nI0819 03:30:18.711640    2899 log.go:172] (0x28dc000) Go away received\nI0819 03:30:18.714591    2899 log.go:172] (0x28dc000) (0x28dc0e0) Stream removed, broadcasting: 1\nI0819 03:30:18.715020    2899 log.go:172] (0x28dc000) (0x28dc1c0) Stream removed, broadcasting: 3\nI0819 03:30:18.715311    2899 log.go:172] (0x28dc000) (0x28dc230) Stream removed, broadcasting: 5\n"
Aug 19 03:30:18.727: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 19 03:30:18.728: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Aug 19 03:30:29.083: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Aug 19 03:30:39.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2803 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 03:30:40.501: INFO: stderr: "I0819 03:30:40.383603    2929 log.go:172] (0x2ae7810) (0x2ae7880) Create stream\nI0819 03:30:40.385970    2929 log.go:172] (0x2ae7810) (0x2ae7880) Stream added, broadcasting: 1\nI0819 03:30:40.396523    2929 log.go:172] (0x2ae7810) Reply frame received for 1\nI0819 03:30:40.397066    2929 log.go:172] (0x2ae7810) (0x2ae78f0) Create stream\nI0819 03:30:40.397128    2929 log.go:172] (0x2ae7810) (0x2ae78f0) Stream added, broadcasting: 3\nI0819 03:30:40.398547    2929 log.go:172] (0x2ae7810) Reply frame received for 3\nI0819 03:30:40.398940    2929 log.go:172] (0x2ae7810) (0x29d6000) Create stream\nI0819 03:30:40.399040    2929 log.go:172] (0x2ae7810) (0x29d6000) Stream added, broadcasting: 5\nI0819 03:30:40.400719    2929 log.go:172] (0x2ae7810) Reply frame received for 5\nI0819 03:30:40.486436    2929 log.go:172] (0x2ae7810) Data frame received for 5\nI0819 03:30:40.486602    2929 log.go:172] (0x29d6000) (5) Data frame handling\nI0819 03:30:40.486784    2929 log.go:172] (0x2ae7810) Data frame received for 3\nI0819 03:30:40.486964    2929 log.go:172] (0x2ae78f0) (3) Data frame handling\nI0819 03:30:40.487102    2929 log.go:172] (0x29d6000) (5) Data frame sent\nI0819 03:30:40.487258    2929 log.go:172] (0x2ae78f0) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0819 03:30:40.487976    2929 log.go:172] (0x2ae7810) Data frame received for 3\nI0819 03:30:40.488089    2929 log.go:172] (0x2ae7810) Data frame received for 1\nI0819 03:30:40.488350    2929 log.go:172] (0x2ae7880) (1) Data frame handling\nI0819 03:30:40.488437    2929 log.go:172] (0x2ae7880) (1) Data frame sent\nI0819 03:30:40.489396    2929 log.go:172] (0x2ae7810) Data frame received for 5\nI0819 03:30:40.489559    2929 log.go:172] (0x29d6000) (5) Data frame handling\nI0819 03:30:40.490385    2929 log.go:172] (0x2ae78f0) (3) Data frame handling\nI0819 03:30:40.490708    2929 log.go:172] (0x2ae7810) (0x2ae7880) Stream removed, broadcasting: 1\nI0819 03:30:40.491265    2929 log.go:172] (0x2ae7810) Go away received\nI0819 03:30:40.493110    2929 log.go:172] (0x2ae7810) (0x2ae7880) Stream removed, broadcasting: 1\nI0819 03:30:40.493260    2929 log.go:172] (0x2ae7810) (0x2ae78f0) Stream removed, broadcasting: 3\nI0819 03:30:40.493379    2929 log.go:172] (0x2ae7810) (0x29d6000) Stream removed, broadcasting: 5\n"
Aug 19 03:30:40.502: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 19 03:30:40.502: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 19 03:30:50.530: INFO: Waiting for StatefulSet statefulset-2803/ss2 to complete update
Aug 19 03:30:50.530: INFO: Waiting for Pod statefulset-2803/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 19 03:30:50.530: INFO: Waiting for Pod statefulset-2803/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 19 03:30:50.530: INFO: Waiting for Pod statefulset-2803/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 19 03:31:00.541: INFO: Waiting for StatefulSet statefulset-2803/ss2 to complete update
Aug 19 03:31:00.541: INFO: Waiting for Pod statefulset-2803/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 19 03:31:00.541: INFO: Waiting for Pod statefulset-2803/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 19 03:31:11.310: INFO: Waiting for StatefulSet statefulset-2803/ss2 to complete update
Aug 19 03:31:11.310: INFO: Waiting for Pod statefulset-2803/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Rolling back to a previous revision
Aug 19 03:31:20.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2803 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 19 03:31:21.942: INFO: stderr: "I0819 03:31:21.784375    2948 log.go:172] (0x2b7e8c0) (0x2b7e930) Create stream\nI0819 03:31:21.786634    2948 log.go:172] (0x2b7e8c0) (0x2b7e930) Stream added, broadcasting: 1\nI0819 03:31:21.803957    2948 log.go:172] (0x2b7e8c0) Reply frame received for 1\nI0819 03:31:21.804406    2948 log.go:172] (0x2b7e8c0) (0x2414070) Create stream\nI0819 03:31:21.804475    2948 log.go:172] (0x2b7e8c0) (0x2414070) Stream added, broadcasting: 3\nI0819 03:31:21.806160    2948 log.go:172] (0x2b7e8c0) Reply frame received for 3\nI0819 03:31:21.806661    2948 log.go:172] (0x2b7e8c0) (0x241a0e0) Create stream\nI0819 03:31:21.806805    2948 log.go:172] (0x2b7e8c0) (0x241a0e0) Stream added, broadcasting: 5\nI0819 03:31:21.808408    2948 log.go:172] (0x2b7e8c0) Reply frame received for 5\nI0819 03:31:21.887536    2948 log.go:172] (0x2b7e8c0) Data frame received for 5\nI0819 03:31:21.887941    2948 log.go:172] (0x241a0e0) (5) Data frame handling\nI0819 03:31:21.888589    2948 log.go:172] (0x241a0e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0819 03:31:21.926613    2948 log.go:172] (0x2b7e8c0) Data frame received for 3\nI0819 03:31:21.926802    2948 log.go:172] (0x2414070) (3) Data frame handling\nI0819 03:31:21.927016    2948 log.go:172] (0x2b7e8c0) Data frame received for 5\nI0819 03:31:21.927266    2948 log.go:172] (0x241a0e0) (5) Data frame handling\nI0819 03:31:21.927754    2948 log.go:172] (0x2414070) (3) Data frame sent\nI0819 03:31:21.927947    2948 log.go:172] (0x2b7e8c0) Data frame received for 3\nI0819 03:31:21.928136    2948 log.go:172] (0x2414070) (3) Data frame handling\nI0819 03:31:21.928338    2948 log.go:172] (0x2b7e8c0) Data frame received for 1\nI0819 03:31:21.928535    2948 log.go:172] (0x2b7e930) (1) Data frame handling\nI0819 03:31:21.928842    2948 log.go:172] (0x2b7e930) (1) Data frame sent\nI0819 03:31:21.929889    2948 log.go:172] (0x2b7e8c0) (0x2b7e930) Stream removed, broadcasting: 1\nI0819 03:31:21.932856    2948 log.go:172] (0x2b7e8c0) Go away received\nI0819 03:31:21.934654    2948 log.go:172] (0x2b7e8c0) (0x2b7e930) Stream removed, broadcasting: 1\nI0819 03:31:21.935193    2948 log.go:172] (0x2b7e8c0) (0x2414070) Stream removed, broadcasting: 3\nI0819 03:31:21.935490    2948 log.go:172] (0x2b7e8c0) (0x241a0e0) Stream removed, broadcasting: 5\n"
Aug 19 03:31:21.943: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 19 03:31:21.943: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 19 03:31:31.987: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Aug 19 03:31:42.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2803 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 03:31:43.893: INFO: stderr: "I0819 03:31:43.801810    2970 log.go:172] (0x266aee0) (0x266b420) Create stream\nI0819 03:31:43.805166    2970 log.go:172] (0x266aee0) (0x266b420) Stream added, broadcasting: 1\nI0819 03:31:43.814948    2970 log.go:172] (0x266aee0) Reply frame received for 1\nI0819 03:31:43.815441    2970 log.go:172] (0x266aee0) (0x281a700) Create stream\nI0819 03:31:43.815510    2970 log.go:172] (0x266aee0) (0x281a700) Stream added, broadcasting: 3\nI0819 03:31:43.816682    2970 log.go:172] (0x266aee0) Reply frame received for 3\nI0819 03:31:43.817029    2970 log.go:172] (0x266aee0) (0x266bea0) Create stream\nI0819 03:31:43.817103    2970 log.go:172] (0x266aee0) (0x266bea0) Stream added, broadcasting: 5\nI0819 03:31:43.818236    2970 log.go:172] (0x266aee0) Reply frame received for 5\nI0819 03:31:43.875948    2970 log.go:172] (0x266aee0) Data frame received for 5\nI0819 03:31:43.876226    2970 log.go:172] (0x266aee0) Data frame received for 3\nI0819 03:31:43.876341    2970 log.go:172] (0x281a700) (3) Data frame handling\nI0819 03:31:43.876529    2970 log.go:172] (0x266bea0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0819 03:31:43.878230    2970 log.go:172] (0x266bea0) (5) Data frame sent\nI0819 03:31:43.878402    2970 log.go:172] (0x281a700) (3) Data frame sent\nI0819 03:31:43.878692    2970 log.go:172] (0x266aee0) Data frame received for 3\nI0819 03:31:43.878817    2970 log.go:172] (0x281a700) (3) Data frame handling\nI0819 03:31:43.878899    2970 log.go:172] (0x266aee0) Data frame received for 1\nI0819 03:31:43.879013    2970 log.go:172] (0x266b420) (1) Data frame handling\nI0819 03:31:43.879121    2970 log.go:172] (0x266aee0) Data frame received for 5\nI0819 03:31:43.879234    2970 log.go:172] (0x266bea0) (5) Data frame handling\nI0819 03:31:43.879312    2970 log.go:172] (0x266b420) (1) Data frame sent\nI0819 03:31:43.880633    2970 log.go:172] (0x266aee0) (0x266b420) Stream removed, broadcasting: 1\nI0819 03:31:43.882782    2970 log.go:172] (0x266aee0) (0x266b420) Stream removed, broadcasting: 1\nI0819 03:31:43.882950    2970 log.go:172] (0x266aee0) (0x281a700) Stream removed, broadcasting: 3\nI0819 03:31:43.884025    2970 log.go:172] (0x266aee0) (0x266bea0) Stream removed, broadcasting: 5\n"
Aug 19 03:31:43.894: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 19 03:31:43.894: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 19 03:31:53.971: INFO: Waiting for StatefulSet statefulset-2803/ss2 to complete update
Aug 19 03:31:53.971: INFO: Waiting for Pod statefulset-2803/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 19 03:31:53.971: INFO: Waiting for Pod statefulset-2803/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 19 03:32:04.130: INFO: Waiting for StatefulSet statefulset-2803/ss2 to complete update
Aug 19 03:32:04.131: INFO: Waiting for Pod statefulset-2803/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 19 03:32:14.152: INFO: Waiting for StatefulSet statefulset-2803/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug 19 03:32:23.985: INFO: Deleting all statefulset in ns statefulset-2803
Aug 19 03:32:23.989: INFO: Scaling statefulset ss2 to 0
Aug 19 03:32:44.013: INFO: Waiting for statefulset status.replicas updated to 0
Aug 19 03:32:44.018: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:32:44.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2803" for this suite.
Aug 19 03:32:52.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:32:52.215: INFO: namespace statefulset-2803 deletion completed in 8.157929163s

• [SLOW TEST:179.399 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
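
The rolling update and rollback above are driven entirely by the StatefulSet's update strategy: editing spec.template (here, the nginx image) creates a new ControllerRevision, pods are replaced highest ordinal first, and rolling back is simply re-applying the previous template. The revision hashes in the log (ss2-6c5cd755cd, ss2-7c9b54fd4c) name those ControllerRevision objects. A sketch of the strategy field involved, assuming the default RollingUpdate type:

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
    )

    func main() {
        // With no Partition set, any change to spec.template rolls every
        // ordinal in reverse order (ss2-2, then ss2-1, then ss2-0).
        strategy := appsv1.StatefulSetUpdateStrategy{
            Type: appsv1.RollingUpdateStatefulSetStrategyType,
        }
        fmt.Println(strategy.Type) // prints "RollingUpdate"
    }
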
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:32:52.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-7794
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 19 03:32:52.483: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 19 03:33:23.020: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.76:8080/dial?request=hostName&protocol=http&host=10.244.1.75&port=8080&tries=1'] Namespace:pod-network-test-7794 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 03:33:23.020: INFO: >>> kubeConfig: /root/.kube/config
I0819 03:33:23.113193       7 log.go:172] (0x95dcd90) (0x95dcf50) Create stream
I0819 03:33:23.113316       7 log.go:172] (0x95dcd90) (0x95dcf50) Stream added, broadcasting: 1
I0819 03:33:23.116099       7 log.go:172] (0x95dcd90) Reply frame received for 1
I0819 03:33:23.116270       7 log.go:172] (0x95dcd90) (0x95dd110) Create stream
I0819 03:33:23.116368       7 log.go:172] (0x95dcd90) (0x95dd110) Stream added, broadcasting: 3
I0819 03:33:23.117937       7 log.go:172] (0x95dcd90) Reply frame received for 3
I0819 03:33:23.118073       7 log.go:172] (0x95dcd90) (0x95dd2d0) Create stream
I0819 03:33:23.118138       7 log.go:172] (0x95dcd90) (0x95dd2d0) Stream added, broadcasting: 5
I0819 03:33:23.119415       7 log.go:172] (0x95dcd90) Reply frame received for 5
I0819 03:33:23.195387       7 log.go:172] (0x95dcd90) Data frame received for 3
I0819 03:33:23.195602       7 log.go:172] (0x95dd110) (3) Data frame handling
I0819 03:33:23.195782       7 log.go:172] (0x95dcd90) Data frame received for 5
I0819 03:33:23.195968       7 log.go:172] (0x95dd2d0) (5) Data frame handling
I0819 03:33:23.196085       7 log.go:172] (0x95dd110) (3) Data frame sent
I0819 03:33:23.196248       7 log.go:172] (0x95dcd90) Data frame received for 3
I0819 03:33:23.196354       7 log.go:172] (0x95dd110) (3) Data frame handling
I0819 03:33:23.197058       7 log.go:172] (0x95dcd90) Data frame received for 1
I0819 03:33:23.197226       7 log.go:172] (0x95dcf50) (1) Data frame handling
I0819 03:33:23.197356       7 log.go:172] (0x95dcf50) (1) Data frame sent
I0819 03:33:23.197509       7 log.go:172] (0x95dcd90) (0x95dcf50) Stream removed, broadcasting: 1
I0819 03:33:23.197660       7 log.go:172] (0x95dcd90) Go away received
I0819 03:33:23.197888       7 log.go:172] (0x95dcd90) (0x95dcf50) Stream removed, broadcasting: 1
I0819 03:33:23.197956       7 log.go:172] (0x95dcd90) (0x95dd110) Stream removed, broadcasting: 3
I0819 03:33:23.198024       7 log.go:172] (0x95dcd90) (0x95dd2d0) Stream removed, broadcasting: 5
Aug 19 03:33:23.198: INFO: Waiting for endpoints: map[]
Aug 19 03:33:23.203: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.76:8080/dial?request=hostName&protocol=http&host=10.244.2.149&port=8080&tries=1'] Namespace:pod-network-test-7794 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 03:33:23.203: INFO: >>> kubeConfig: /root/.kube/config
I0819 03:33:23.301227       7 log.go:172] (0x85fb260) (0x85fb340) Create stream
I0819 03:33:23.301408       7 log.go:172] (0x85fb260) (0x85fb340) Stream added, broadcasting: 1
I0819 03:33:23.304627       7 log.go:172] (0x85fb260) Reply frame received for 1
I0819 03:33:23.304821       7 log.go:172] (0x85fb260) (0x85fb420) Create stream
I0819 03:33:23.304893       7 log.go:172] (0x85fb260) (0x85fb420) Stream added, broadcasting: 3
I0819 03:33:23.306331       7 log.go:172] (0x85fb260) Reply frame received for 3
I0819 03:33:23.306475       7 log.go:172] (0x85fb260) (0x944e380) Create stream
I0819 03:33:23.306559       7 log.go:172] (0x85fb260) (0x944e380) Stream added, broadcasting: 5
I0819 03:33:23.308053       7 log.go:172] (0x85fb260) Reply frame received for 5
I0819 03:33:23.364513       7 log.go:172] (0x85fb260) Data frame received for 3
I0819 03:33:23.364867       7 log.go:172] (0x85fb420) (3) Data frame handling
I0819 03:33:23.365046       7 log.go:172] (0x85fb420) (3) Data frame sent
I0819 03:33:23.365187       7 log.go:172] (0x85fb260) Data frame received for 3
I0819 03:33:23.365297       7 log.go:172] (0x85fb260) Data frame received for 5
I0819 03:33:23.365471       7 log.go:172] (0x944e380) (5) Data frame handling
I0819 03:33:23.365567       7 log.go:172] (0x85fb420) (3) Data frame handling
I0819 03:33:23.366411       7 log.go:172] (0x85fb260) Data frame received for 1
I0819 03:33:23.366597       7 log.go:172] (0x85fb340) (1) Data frame handling
I0819 03:33:23.366778       7 log.go:172] (0x85fb340) (1) Data frame sent
I0819 03:33:23.366939       7 log.go:172] (0x85fb260) (0x85fb340) Stream removed, broadcasting: 1
I0819 03:33:23.367095       7 log.go:172] (0x85fb260) Go away received
I0819 03:33:23.367355       7 log.go:172] (0x85fb260) (0x85fb340) Stream removed, broadcasting: 1
I0819 03:33:23.367440       7 log.go:172] (0x85fb260) (0x85fb420) Stream removed, broadcasting: 3
I0819 03:33:23.367516       7 log.go:172] (0x85fb260) (0x944e380) Stream removed, broadcasting: 5
Aug 19 03:33:23.367: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:33:23.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7794" for this suite.
Aug 19 03:33:47.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:33:47.561: INFO: namespace pod-network-test-7794 deletion completed in 24.183525665s

• [SLOW TEST:55.344 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:33:47.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Aug 19 03:33:54.847: INFO: 2 pods remaining
Aug 19 03:33:54.848: INFO: 0 pods have nil DeletionTimestamp
Aug 19 03:33:54.848: INFO: 
Aug 19 03:33:55.418: INFO: 0 pods remaining
Aug 19 03:33:55.419: INFO: 0 pods have nil DeletionTimestamp
Aug 19 03:33:55.419: INFO: 
STEP: Gathering metrics
W0819 03:33:56.716862       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 19 03:33:56.717: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:33:56.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2367" for this suite.
Aug 19 03:34:05.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:34:05.801: INFO: namespace gc-2367 deletion completed in 9.073132956s

• [SLOW TEST:18.239 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
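
The "deleteOptions says so" in the test name refers to the delete propagation policy: with Foreground propagation the RC is deleted only after the garbage collector has removed all of its pods, which is why the log counts the remaining pods down to zero before the RC disappears. A minimal sketch of those options (the construction, not any run-specific value, is the point):

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Foreground propagation: the owner is kept, with deletionTimestamp
        // set, until every dependent with blockOwnerDeletion is gone.
        policy := metav1.DeletePropagationForeground
        opts := &metav1.DeleteOptions{PropagationPolicy: &policy}
        fmt.Println(*opts.PropagationPolicy)
    }
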
SSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:34:05.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 03:34:10.271: INFO: Waiting up to 5m0s for pod "client-envvars-e4b08b8b-2a86-4d78-868a-919386f590b8" in namespace "pods-5202" to be "success or failure"
Aug 19 03:34:10.480: INFO: Pod "client-envvars-e4b08b8b-2a86-4d78-868a-919386f590b8": Phase="Pending", Reason="", readiness=false. Elapsed: 208.223553ms
Aug 19 03:34:12.485: INFO: Pod "client-envvars-e4b08b8b-2a86-4d78-868a-919386f590b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213910574s
Aug 19 03:34:14.491: INFO: Pod "client-envvars-e4b08b8b-2a86-4d78-868a-919386f590b8": Phase="Running", Reason="", readiness=true. Elapsed: 4.219133213s
Aug 19 03:34:16.495: INFO: Pod "client-envvars-e4b08b8b-2a86-4d78-868a-919386f590b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.223901294s
STEP: Saw pod success
Aug 19 03:34:16.496: INFO: Pod "client-envvars-e4b08b8b-2a86-4d78-868a-919386f590b8" satisfied condition "success or failure"
Aug 19 03:34:16.499: INFO: Trying to get logs from node iruya-worker2 pod client-envvars-e4b08b8b-2a86-4d78-868a-919386f590b8 container env3cont: 
STEP: delete the pod
Aug 19 03:34:16.667: INFO: Waiting for pod client-envvars-e4b08b8b-2a86-4d78-868a-919386f590b8 to disappear
Aug 19 03:34:16.959: INFO: Pod client-envvars-e4b08b8b-2a86-4d78-868a-919386f590b8 no longer exists
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:34:16.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5202" for this suite.
Aug 19 03:34:59.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:34:59.510: INFO: namespace pods-5202 deletion completed in 42.217263839s

• [SLOW TEST:53.708 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
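
The env3cont container above passes because the kubelet injects <NAME>_SERVICE_HOST and <NAME>_SERVICE_PORT variables, for each service visible to the pod, into containers created after the service exists. A sketch of the client side of that contract (the service name "fooservice", and hence the variable names, are assumptions for illustration):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // For a service named "fooservice" in the pod's namespace, the
        // kubelet sets these at container start. They are absent in pods
        // created before the service, which is why tests of this kind
        // create the service before the client pod.
        host := os.Getenv("FOOSERVICE_SERVICE_HOST")
        port := os.Getenv("FOOSERVICE_SERVICE_PORT")
        fmt.Printf("fooservice at %s:%s\n", host, port)
    }
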
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:34:59.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 03:34:59.720: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"f0337fb8-7faa-47e8-830d-37b5c2f859aa", Controller:(*bool)(0x9163a02), BlockOwnerDeletion:(*bool)(0x9163a03)}}
Aug 19 03:34:59.827: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"3b6f8e65-e948-4f78-bf14-467953722951", Controller:(*bool)(0x955574a), BlockOwnerDeletion:(*bool)(0x955574b)}}
Aug 19 03:34:59.854: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"c1e51597-e259-46d1-a37e-e31200c1430e", Controller:(*bool)(0x92ee65a), BlockOwnerDeletion:(*bool)(0x92ee65b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:35:05.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6683" for this suite.
Aug 19 03:35:11.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:35:11.899: INFO: namespace gc-6683 deletion completed in 6.761145965s

• [SLOW TEST:12.388 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
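
The three OwnerReferences dumps above form a cycle (pod1 is owned by pod3, pod2 by pod1, pod3 by pod2) that the garbage collector must recognize rather than deadlock on, since no member of the cycle can be deleted "last". One link of such a cycle, sketched with the same metav1 type the log prints (the UID is a placeholder, not one from this run):

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
    )

    func main() {
        truth := true
        ref := metav1.OwnerReference{
            APIVersion:         "v1",
            Kind:               "Pod",
            Name:               "pod3", // pod1 claims pod3 as its owner
            UID:                types.UID("00000000-0000-0000-0000-000000000000"),
            Controller:         &truth,
            BlockOwnerDeletion: &truth,
        }
        fmt.Printf("%+v\n", ref)
    }
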
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:35:11.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 19 03:35:11.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4551'
Aug 19 03:35:13.174: INFO: stderr: ""
Aug 19 03:35:13.174: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Aug 19 03:35:13.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-4551'
Aug 19 03:35:18.390: INFO: stderr: ""
Aug 19 03:35:18.390: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:35:18.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4551" for this suite.
Aug 19 03:35:24.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:35:24.775: INFO: namespace kubectl-4551 deletion completed in 6.188050308s

• [SLOW TEST:12.876 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:35:24.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 19 03:35:33.509: INFO: Successfully updated pod "pod-update-4e0c284e-206d-4698-a248-f2f567dd1790"
STEP: verifying the updated pod is in kubernetes
Aug 19 03:35:33.712: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:35:33.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6828" for this suite.
Aug 19 03:35:57.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:35:57.827: INFO: namespace pods-6828 deletion completed in 24.108616904s

• [SLOW TEST:33.049 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:35:57.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 19 03:35:58.246: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cd92fd61-7316-47ad-8e46-e80099c661e1" in namespace "projected-5918" to be "success or failure"
Aug 19 03:35:58.287: INFO: Pod "downwardapi-volume-cd92fd61-7316-47ad-8e46-e80099c661e1": Phase="Pending", Reason="", readiness=false. Elapsed: 40.028188ms
Aug 19 03:36:00.293: INFO: Pod "downwardapi-volume-cd92fd61-7316-47ad-8e46-e80099c661e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046179877s
Aug 19 03:36:02.511: INFO: Pod "downwardapi-volume-cd92fd61-7316-47ad-8e46-e80099c661e1": Phase="Running", Reason="", readiness=true. Elapsed: 4.264247965s
Aug 19 03:36:04.842: INFO: Pod "downwardapi-volume-cd92fd61-7316-47ad-8e46-e80099c661e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.595026639s
STEP: Saw pod success
Aug 19 03:36:04.842: INFO: Pod "downwardapi-volume-cd92fd61-7316-47ad-8e46-e80099c661e1" satisfied condition "success or failure"
Aug 19 03:36:04.916: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-cd92fd61-7316-47ad-8e46-e80099c661e1 container client-container: 
STEP: delete the pod
Aug 19 03:36:05.790: INFO: Waiting for pod downwardapi-volume-cd92fd61-7316-47ad-8e46-e80099c661e1 to disappear
Aug 19 03:36:06.478: INFO: Pod downwardapi-volume-cd92fd61-7316-47ad-8e46-e80099c661e1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:36:06.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5918" for this suite.
Aug 19 03:36:12.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:36:12.909: INFO: namespace projected-5918 deletion completed in 6.334342134s

• [SLOW TEST:15.079 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
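
The DefaultMode under test is a field of the projected volume source: every file materialized into the volume inherits it unless the individual item sets its own Mode. A sketch of such a volume (volume and file names are illustrative; note that Go's octal literal 0644 is the decimal 420 that API dumps show):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        mode := int32(0644)
        vol := corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    DefaultMode: &mode,
                    Sources: []corev1.VolumeProjection{{
                        DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "podname",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                            }},
                        },
                    }},
                },
            },
        }
        fmt.Printf("defaultMode=%o on volume %s\n", *vol.VolumeSource.Projected.DefaultMode, vol.Name)
    }
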
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:36:12.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-8182
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Aug 19 03:36:13.047: INFO: Found 0 stateful pods, waiting for 3
Aug 19 03:36:23.207: INFO: Found 2 stateful pods, waiting for 3
Aug 19 03:36:33.053: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 19 03:36:33.054: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 19 03:36:33.054: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Aug 19 03:36:33.085: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Aug 19 03:36:43.598: INFO: Updating stateful set ss2
Aug 19 03:36:43.747: INFO: Waiting for Pod statefulset-8182/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Aug 19 03:36:55.160: INFO: Found 2 stateful pods, waiting for 3
Aug 19 03:37:05.168: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 19 03:37:05.168: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 19 03:37:05.168: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Aug 19 03:37:05.197: INFO: Updating stateful set ss2
Aug 19 03:37:05.250: INFO: Waiting for Pod statefulset-8182/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 19 03:37:15.318: INFO: Updating stateful set ss2
Aug 19 03:37:15.344: INFO: Waiting for StatefulSet statefulset-8182/ss2 to complete update
Aug 19 03:37:15.345: INFO: Waiting for Pod statefulset-8182/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 19 03:37:25.357: INFO: Waiting for StatefulSet statefulset-8182/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug 19 03:37:35.355: INFO: Deleting all statefulset in ns statefulset-8182
Aug 19 03:37:35.358: INFO: Scaling statefulset ss2 to 0
Aug 19 03:38:05.420: INFO: Waiting for statefulset status.replicas updated to 0
Aug 19 03:38:05.425: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:38:05.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8182" for this suite.
Aug 19 03:38:13.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:38:13.583: INFO: namespace statefulset-8182 deletion completed in 8.127908349s

• [SLOW TEST:120.670 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
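
Both the canary and the phased roll above hinge on the Partition field of the rolling-update strategy: only ordinals greater than or equal to the partition move to the new revision, so a partition of 2 on this 3-replica set updates only ss2-2, and lowering it step by step then walks the update through ss2-1 and ss2-0. A sketch, with the partition value chosen to match the canary step in the log:

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
    )

    func main() {
        partition := int32(2) // update only ordinals >= 2, i.e. ss2-2
        strategy := appsv1.StatefulSetUpdateStrategy{
            Type: appsv1.RollingUpdateStatefulSetStrategyType,
            RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
                Partition: &partition,
            },
        }
        fmt.Println(*strategy.RollingUpdate.Partition)
    }
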
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:38:13.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 19 03:38:13.657: INFO: Waiting up to 5m0s for pod "pod-3ad5af80-6fb3-492b-9134-9f5fdbb8af4d" in namespace "emptydir-1900" to be "success or failure"
Aug 19 03:38:13.666: INFO: Pod "pod-3ad5af80-6fb3-492b-9134-9f5fdbb8af4d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.957853ms
Aug 19 03:38:15.705: INFO: Pod "pod-3ad5af80-6fb3-492b-9134-9f5fdbb8af4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04704967s
Aug 19 03:38:17.711: INFO: Pod "pod-3ad5af80-6fb3-492b-9134-9f5fdbb8af4d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052899921s
Aug 19 03:38:19.717: INFO: Pod "pod-3ad5af80-6fb3-492b-9134-9f5fdbb8af4d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059646289s
Aug 19 03:38:21.781: INFO: Pod "pod-3ad5af80-6fb3-492b-9134-9f5fdbb8af4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.123007101s
STEP: Saw pod success
Aug 19 03:38:21.781: INFO: Pod "pod-3ad5af80-6fb3-492b-9134-9f5fdbb8af4d" satisfied condition "success or failure"
Aug 19 03:38:21.920: INFO: Trying to get logs from node iruya-worker pod pod-3ad5af80-6fb3-492b-9134-9f5fdbb8af4d container test-container: 
STEP: delete the pod
Aug 19 03:38:21.988: INFO: Waiting for pod pod-3ad5af80-6fb3-492b-9134-9f5fdbb8af4d to disappear
Aug 19 03:38:22.171: INFO: Pod pod-3ad5af80-6fb3-492b-9134-9f5fdbb8af4d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:38:22.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1900" for this suite.
Aug 19 03:38:28.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:38:28.602: INFO: namespace emptydir-1900 deletion completed in 6.419890115s

• [SLOW TEST:15.017 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:38:28.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-977f9ea7-ed8a-499c-ac6e-4f6499518c00
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:38:29.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5040" for this suite.
Aug 19 03:38:36.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:38:36.402: INFO: namespace configmap-5040 deletion completed in 6.781102009s

• [SLOW TEST:7.800 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
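
The failure asserted above happens in apiserver validation: the empty string is not a legal ConfigMap data key, so the create call is rejected and nothing is stored. A sketch of the offending object (its name shortened from the generated one in the log):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Submitting this to an apiserver fails validation because of the
        // "" key; the object here is only constructed, never created.
        cm := &corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"},
            Data:       map[string]string{"": "value-1"},
        }
        fmt.Println(cm.Name, cm.Data)
    }
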
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:38:36.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:38:43.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7645" for this suite.
Aug 19 03:38:51.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:38:52.208: INFO: namespace emptydir-wrapper-7645 deletion completed in 8.986865117s

• [SLOW TEST:15.803 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:38:52.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 19 03:38:52.532: INFO: Waiting up to 5m0s for pod "pod-feb2f373-1d5f-4e1f-9650-62ba2ecb3cb4" in namespace "emptydir-6606" to be "success or failure"
Aug 19 03:38:52.746: INFO: Pod "pod-feb2f373-1d5f-4e1f-9650-62ba2ecb3cb4": Phase="Pending", Reason="", readiness=false. Elapsed: 213.800349ms
Aug 19 03:38:54.786: INFO: Pod "pod-feb2f373-1d5f-4e1f-9650-62ba2ecb3cb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.254598713s
Aug 19 03:38:56.793: INFO: Pod "pod-feb2f373-1d5f-4e1f-9650-62ba2ecb3cb4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.2606751s
Aug 19 03:38:58.800: INFO: Pod "pod-feb2f373-1d5f-4e1f-9650-62ba2ecb3cb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.267882851s
STEP: Saw pod success
Aug 19 03:38:58.800: INFO: Pod "pod-feb2f373-1d5f-4e1f-9650-62ba2ecb3cb4" satisfied condition "success or failure"
Aug 19 03:38:58.804: INFO: Trying to get logs from node iruya-worker pod pod-feb2f373-1d5f-4e1f-9650-62ba2ecb3cb4 container test-container: 
STEP: delete the pod
Aug 19 03:38:58.830: INFO: Waiting for pod pod-feb2f373-1d5f-4e1f-9650-62ba2ecb3cb4 to disappear
Aug 19 03:38:58.871: INFO: Pod pod-feb2f373-1d5f-4e1f-9650-62ba2ecb3cb4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:38:58.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6606" for this suite.
Aug 19 03:39:04.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:39:05.063: INFO: namespace emptydir-6606 deletion completed in 6.183397148s

• [SLOW TEST:12.853 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
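The "emptydir 0666 on tmpfs" pod above exercises an emptyDir with medium Memory, written as a non-root user, and the suite's mounttest image reports the observed file mode before the pod exits (hence the Pending -> Succeeded progression). A rough stand-in spec with busybox doing the same check; the command, UID, and names are illustrative, not the suite's.

// Sketch: tmpfs-backed emptyDir written by a non-root user, mode checked.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1001) // non-root, as in the (non-root,0666,tmpfs) variant
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0666-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"echo hi > /mnt/volume/f && chmod 0666 /mnt/volume/f && stat -c %a /mnt/volume/f"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
				VolumeMounts:    []corev1.VolumeMount{{Name: "vol", MountPath: "/mnt/volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "vol",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}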
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:39:05.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-69mxh in namespace proxy-521
I0819 03:39:05.291190       7 runners.go:180] Created replication controller with name: proxy-service-69mxh, namespace: proxy-521, replica count: 1
I0819 03:39:06.342620       7 runners.go:180] proxy-service-69mxh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0819 03:39:07.343317       7 runners.go:180] proxy-service-69mxh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0819 03:39:08.343976       7 runners.go:180] proxy-service-69mxh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0819 03:39:09.344577       7 runners.go:180] proxy-service-69mxh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0819 03:39:10.345449       7 runners.go:180] proxy-service-69mxh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0819 03:39:11.346219       7 runners.go:180] proxy-service-69mxh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0819 03:39:12.346925       7 runners.go:180] proxy-service-69mxh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0819 03:39:13.347583       7 runners.go:180] proxy-service-69mxh Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 19 03:39:13.372: INFO: setup took 8.141973042s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Aug 19 03:39:13.379: INFO: (0) /api/v1/namespaces/proxy-521/services/http:proxy-service-69mxh:portname1/proxy/: foo (200; 6.618759ms)
Aug 19 03:39:13.380: INFO: (0) /api/v1/namespaces/proxy-521/pods/http:proxy-service-69mxh-gd7sz:162/proxy/: bar (200; 6.932187ms)
Aug 19 03:39:13.380: INFO: (0) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz/proxy/: test (200; 7.15918ms)
Aug 19 03:39:13.380: INFO: (0) /api/v1/namespaces/proxy-521/services/proxy-service-69mxh:portname1/proxy/: foo (200; 7.313021ms)
Aug 19 03:39:13.380: INFO: (0) /api/v1/namespaces/proxy-521/pods/http:proxy-service-69mxh-gd7sz:160/proxy/: foo (200; 7.586987ms)
Aug 19 03:39:13.380: INFO: (0) /api/v1/namespaces/proxy-521/services/http:proxy-service-69mxh:portname2/proxy/: bar (200; 7.796285ms)
Aug 19 03:39:13.381: INFO: (0) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:1080/proxy/: testt... (200; 7.669775ms)
Aug 19 03:39:13.381: INFO: (0) /api/v1/namespaces/proxy-521/services/proxy-service-69mxh:portname2/proxy/: bar (200; 7.870679ms)
Aug 19 03:39:13.381: INFO: (0) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:162/proxy/: bar (200; 8.38373ms)
Aug 19 03:39:13.382: INFO: (0) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:160/proxy/: foo (200; 8.837166ms)
Aug 19 03:39:13.405: INFO: (0) /api/v1/namespaces/proxy-521/pods/https:proxy-service-69mxh-gd7sz:462/proxy/: tls qux (200; 31.914751ms)
Aug 19 03:39:13.405: INFO: (0) /api/v1/namespaces/proxy-521/services/https:proxy-service-69mxh:tlsportname1/proxy/: tls baz (200; 32.619744ms)
Aug 19 03:39:13.405: INFO: (0) /api/v1/namespaces/proxy-521/pods/https:proxy-service-69mxh-gd7sz:443/proxy/: test (200; 6.018982ms)
Aug 19 03:39:13.438: INFO: (1) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:160/proxy/: foo (200; 6.826648ms)
Aug 19 03:39:13.438: INFO: (1) /api/v1/namespaces/proxy-521/pods/https:proxy-service-69mxh-gd7sz:460/proxy/: tls baz (200; 6.941763ms)
Aug 19 03:39:13.439: INFO: (1) /api/v1/namespaces/proxy-521/pods/https:proxy-service-69mxh-gd7sz:443/proxy/: t... (200; 8.824285ms)
Aug 19 03:39:13.440: INFO: (1) /api/v1/namespaces/proxy-521/services/https:proxy-service-69mxh:tlsportname1/proxy/: tls baz (200; 9.29931ms)
Aug 19 03:39:13.440: INFO: (1) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:1080/proxy/: testtesttest (200; 8.801926ms)
Aug 19 03:39:13.451: INFO: (2) /api/v1/namespaces/proxy-521/pods/http:proxy-service-69mxh-gd7sz:162/proxy/: bar (200; 8.894672ms)
Aug 19 03:39:13.452: INFO: (2) /api/v1/namespaces/proxy-521/pods/http:proxy-service-69mxh-gd7sz:1080/proxy/: t... (200; 9.59246ms)
Aug 19 03:39:13.452: INFO: (2) /api/v1/namespaces/proxy-521/services/http:proxy-service-69mxh:portname1/proxy/: foo (200; 9.805012ms)
Aug 19 03:39:13.455: INFO: (3) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:1080/proxy/: testtest (200; 3.313729ms)
Aug 19 03:39:13.457: INFO: (3) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:162/proxy/: bar (200; 4.58794ms)
Aug 19 03:39:13.457: INFO: (3) /api/v1/namespaces/proxy-521/pods/http:proxy-service-69mxh-gd7sz:160/proxy/: foo (200; 4.632389ms)
Aug 19 03:39:13.457: INFO: (3) /api/v1/namespaces/proxy-521/pods/https:proxy-service-69mxh-gd7sz:460/proxy/: tls baz (200; 4.80674ms)
Aug 19 03:39:13.458: INFO: (3) /api/v1/namespaces/proxy-521/services/https:proxy-service-69mxh:tlsportname2/proxy/: tls qux (200; 6.362456ms)
Aug 19 03:39:13.458: INFO: (3) /api/v1/namespaces/proxy-521/pods/https:proxy-service-69mxh-gd7sz:443/proxy/: t... (200; 7.180892ms)
Aug 19 03:39:13.459: INFO: (3) /api/v1/namespaces/proxy-521/services/proxy-service-69mxh:portname2/proxy/: bar (200; 7.370945ms)
Aug 19 03:39:13.460: INFO: (3) /api/v1/namespaces/proxy-521/pods/https:proxy-service-69mxh-gd7sz:462/proxy/: tls qux (200; 7.458046ms)
Aug 19 03:39:13.460: INFO: (3) /api/v1/namespaces/proxy-521/pods/http:proxy-service-69mxh-gd7sz:162/proxy/: bar (200; 7.731363ms)
Aug 19 03:39:13.460: INFO: (3) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:160/proxy/: foo (200; 7.918157ms)
Aug 19 03:39:13.460: INFO: (3) /api/v1/namespaces/proxy-521/services/https:proxy-service-69mxh:tlsportname1/proxy/: tls baz (200; 8.241202ms)
Aug 19 03:39:13.461: INFO: (3) /api/v1/namespaces/proxy-521/services/http:proxy-service-69mxh:portname1/proxy/: foo (200; 9.548806ms)
Aug 19 03:39:13.466: INFO: (4) /api/v1/namespaces/proxy-521/pods/https:proxy-service-69mxh-gd7sz:443/proxy/: test (200; 8.005773ms)
Aug 19 03:39:13.470: INFO: (4) /api/v1/namespaces/proxy-521/services/http:proxy-service-69mxh:portname2/proxy/: bar (200; 8.045715ms)
Aug 19 03:39:13.470: INFO: (4) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:1080/proxy/: testt... (200; 8.237983ms)
Aug 19 03:39:13.471: INFO: (4) /api/v1/namespaces/proxy-521/services/proxy-service-69mxh:portname2/proxy/: bar (200; 9.536717ms)
Aug 19 03:39:13.477: INFO: (5) /api/v1/namespaces/proxy-521/pods/https:proxy-service-69mxh-gd7sz:460/proxy/: tls baz (200; 5.108579ms)
Aug 19 03:39:13.478: INFO: (5) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:162/proxy/: bar (200; 6.15373ms)
Aug 19 03:39:13.478: INFO: (5) /api/v1/namespaces/proxy-521/pods/http:proxy-service-69mxh-gd7sz:1080/proxy/: t... (200; 6.179163ms)
Aug 19 03:39:13.478: INFO: (5) /api/v1/namespaces/proxy-521/services/proxy-service-69mxh:portname2/proxy/: bar (200; 7.030675ms)
Aug 19 03:39:13.479: INFO: (5) /api/v1/namespaces/proxy-521/services/http:proxy-service-69mxh:portname2/proxy/: bar (200; 7.427501ms)
Aug 19 03:39:13.479: INFO: (5) /api/v1/namespaces/proxy-521/pods/http:proxy-service-69mxh-gd7sz:162/proxy/: bar (200; 7.817035ms)
Aug 19 03:39:13.479: INFO: (5) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:1080/proxy/: testtest (200; 8.586102ms)
Aug 19 03:39:13.481: INFO: (5) /api/v1/namespaces/proxy-521/services/http:proxy-service-69mxh:portname1/proxy/: foo (200; 8.890933ms)
Aug 19 03:39:13.481: INFO: (5) /api/v1/namespaces/proxy-521/services/https:proxy-service-69mxh:tlsportname1/proxy/: tls baz (200; 9.10459ms)
Aug 19 03:39:13.481: INFO: (5) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:160/proxy/: foo (200; 9.189747ms)
Aug 19 03:39:13.481: INFO: (5) /api/v1/namespaces/proxy-521/pods/https:proxy-service-69mxh-gd7sz:443/proxy/: testtest (200; 7.696095ms)
Aug 19 03:39:13.490: INFO: (6) /api/v1/namespaces/proxy-521/pods/http:proxy-service-69mxh-gd7sz:162/proxy/: bar (200; 7.87242ms)
Aug 19 03:39:13.490: INFO: (6) /api/v1/namespaces/proxy-521/pods/http:proxy-service-69mxh-gd7sz:160/proxy/: foo (200; 7.976518ms)
Aug 19 03:39:13.490: INFO: (6) /api/v1/namespaces/proxy-521/pods/http:proxy-service-69mxh-gd7sz:1080/proxy/: t... (200; 8.055865ms)
Aug 19 03:39:13.490: INFO: (6) /api/v1/namespaces/proxy-521/pods/https:proxy-service-69mxh-gd7sz:443/proxy/: testt... (200; 8.237814ms)
Aug 19 03:39:13.500: INFO: (7) /api/v1/namespaces/proxy-521/services/https:proxy-service-69mxh:tlsportname2/proxy/: tls qux (200; 8.510067ms)
Aug 19 03:39:13.500: INFO: (7) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz/proxy/: test (200; 8.852095ms)
Aug 19 03:39:13.500: INFO: (7) /api/v1/namespaces/proxy-521/services/http:proxy-service-69mxh:portname1/proxy/: foo (200; 8.861183ms)
Aug 19 03:39:13.505: INFO: (8) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:162/proxy/: bar (200; 3.615889ms)
Aug 19 03:39:13.505: INFO: (8) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz/proxy/: test (200; 4.10768ms)
Aug 19 03:39:13.506: INFO: (8) /api/v1/namespaces/proxy-521/pods/http:proxy-service-69mxh-gd7sz:1080/proxy/: t... (200; 4.969014ms)
Aug 19 03:39:13.507: INFO: (8) /api/v1/namespaces/proxy-521/services/http:proxy-service-69mxh:portname1/proxy/: foo (200; 6.391803ms)
Aug 19 03:39:13.507: INFO: (8) /api/v1/namespaces/proxy-521/pods/http:proxy-service-69mxh-gd7sz:162/proxy/: bar (200; 6.404411ms)
Aug 19 03:39:13.507: INFO: (8) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:160/proxy/: foo (200; 5.671743ms)
Aug 19 03:39:13.507: INFO: (8) /api/v1/namespaces/proxy-521/pods/https:proxy-service-69mxh-gd7sz:462/proxy/: tls qux (200; 6.212635ms)
Aug 19 03:39:13.508: INFO: (8) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:1080/proxy/: testtestt... (200; 6.7528ms)
Aug 19 03:39:13.517: INFO: (9) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:160/proxy/: foo (200; 6.776966ms)
Aug 19 03:39:13.518: INFO: (9) /api/v1/namespaces/proxy-521/services/https:proxy-service-69mxh:tlsportname1/proxy/: tls baz (200; 6.861352ms)
Aug 19 03:39:13.518: INFO: (9) /api/v1/namespaces/proxy-521/pods/http:proxy-service-69mxh-gd7sz:160/proxy/: foo (200; 7.089925ms)
Aug 19 03:39:13.518: INFO: (9) /api/v1/namespaces/proxy-521/services/proxy-service-69mxh:portname1/proxy/: foo (200; 7.253951ms)
Aug 19 03:39:13.518: INFO: (9) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:162/proxy/: bar (200; 7.412737ms)
Aug 19 03:39:13.518: INFO: (9) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz/proxy/: test (200; 7.561618ms)
Aug 19 03:39:13.519: INFO: (9) /api/v1/namespaces/proxy-521/services/http:proxy-service-69mxh:portname2/proxy/: bar (200; 7.949268ms)
Aug 19 03:39:13.519: INFO: (9) /api/v1/namespaces/proxy-521/pods/http:proxy-service-69mxh-gd7sz:162/proxy/: bar (200; 7.908115ms)
Aug 19 03:39:13.519: INFO: (9) /api/v1/namespaces/proxy-521/services/http:proxy-service-69mxh:portname1/proxy/: foo (200; 8.619806ms)
Aug 19 03:39:13.520: INFO: (9) /api/v1/namespaces/proxy-521/services/proxy-service-69mxh:portname2/proxy/: bar (200; 8.926244ms)
Aug 19 03:39:13.525: INFO: (10) /api/v1/namespaces/proxy-521/pods/https:proxy-service-69mxh-gd7sz:460/proxy/: tls baz (200; 4.672726ms)
Aug 19 03:39:13.526: INFO: (10) /api/v1/namespaces/proxy-521/services/proxy-service-69mxh:portname1/proxy/: foo (200; 5.531698ms)
Aug 19 03:39:13.527: INFO: (10) /api/v1/namespaces/proxy-521/services/https:proxy-service-69mxh:tlsportname2/proxy/: tls qux (200; 7.013874ms)
Aug 19 03:39:13.528: INFO: (10) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:162/proxy/: bar (200; 7.69139ms)
Aug 19 03:39:13.528: INFO: (10) /api/v1/namespaces/proxy-521/services/proxy-service-69mxh:portname2/proxy/: bar (200; 7.86354ms)
Aug 19 03:39:13.528: INFO: (10) /api/v1/namespaces/proxy-521/pods/http:proxy-service-69mxh-gd7sz:160/proxy/: foo (200; 7.977493ms)
Aug 19 03:39:13.529: INFO: (10) /api/v1/namespaces/proxy-521/services/https:proxy-service-69mxh:tlsportname1/proxy/: tls baz (200; 8.311229ms)
Aug 19 03:39:13.529: INFO: (10) /api/v1/namespaces/proxy-521/pods/https:proxy-service-69mxh-gd7sz:462/proxy/: tls qux (200; 8.928109ms)
Aug 19 03:39:13.529: INFO: (10) /api/v1/namespaces/proxy-521/pods/http:proxy-service-69mxh-gd7sz:1080/proxy/: t... (200; 8.639241ms)
Aug 19 03:39:13.529: INFO: (10) /api/v1/namespaces/proxy-521/pods/https:proxy-service-69mxh-gd7sz:443/proxy/: test (200; 8.853425ms)
Aug 19 03:39:13.529: INFO: (10) /api/v1/namespaces/proxy-521/pods/http:proxy-service-69mxh-gd7sz:162/proxy/: bar (200; 8.862989ms)
Aug 19 03:39:13.529: INFO: (10) /api/v1/namespaces/proxy-521/services/http:proxy-service-69mxh:portname2/proxy/: bar (200; 9.220841ms)
Aug 19 03:39:13.530: INFO: (10) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:160/proxy/: foo (200; 9.523404ms)
Aug 19 03:39:13.530: INFO: (10) /api/v1/namespaces/proxy-521/services/http:proxy-service-69mxh:portname1/proxy/: foo (200; 9.845412ms)
Aug 19 03:39:13.530: INFO: (10) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:1080/proxy/: testt... (200; 8.63385ms)
Aug 19 03:39:13.540: INFO: (11) /api/v1/namespaces/proxy-521/services/proxy-service-69mxh:portname2/proxy/: bar (200; 8.895906ms)
Aug 19 03:39:13.540: INFO: (11) /api/v1/namespaces/proxy-521/pods/https:proxy-service-69mxh-gd7sz:460/proxy/: tls baz (200; 9.236631ms)
Aug 19 03:39:13.541: INFO: (11) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:1080/proxy/: testtest (200; 10.089166ms)
Aug 19 03:39:13.542: INFO: (11) /api/v1/namespaces/proxy-521/services/https:proxy-service-69mxh:tlsportname1/proxy/: tls baz (200; 10.215979ms)
Aug 19 03:39:13.546: INFO: (12) /api/v1/namespaces/proxy-521/services/https:proxy-service-69mxh:tlsportname2/proxy/: tls qux (200; 4.385164ms)
Aug 19 03:39:13.546: INFO: (12) /api/v1/namespaces/proxy-521/pods/https:proxy-service-69mxh-gd7sz:443/proxy/: t... (200; 5.690692ms)
Aug 19 03:39:13.548: INFO: (12) /api/v1/namespaces/proxy-521/pods/https:proxy-service-69mxh-gd7sz:462/proxy/: tls qux (200; 5.778633ms)
Aug 19 03:39:13.548: INFO: (12) /api/v1/namespaces/proxy-521/services/http:proxy-service-69mxh:portname1/proxy/: foo (200; 5.934068ms)
Aug 19 03:39:13.549: INFO: (12) /api/v1/namespaces/proxy-521/services/proxy-service-69mxh:portname1/proxy/: foo (200; 6.703733ms)
Aug 19 03:39:13.550: INFO: (12) /api/v1/namespaces/proxy-521/services/https:proxy-service-69mxh:tlsportname1/proxy/: tls baz (200; 8.281504ms)
Aug 19 03:39:13.550: INFO: (12) /api/v1/namespaces/proxy-521/services/proxy-service-69mxh:portname2/proxy/: bar (200; 8.300822ms)
Aug 19 03:39:13.550: INFO: (12) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:1080/proxy/: testtest (200; 8.895758ms)
Aug 19 03:39:13.551: INFO: (12) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:160/proxy/: foo (200; 9.205814ms)
Aug 19 03:39:13.552: INFO: (12) /api/v1/namespaces/proxy-521/services/http:proxy-service-69mxh:portname2/proxy/: bar (200; 9.799819ms)
Aug 19 03:39:13.556: INFO: (13) /api/v1/namespaces/proxy-521/services/https:proxy-service-69mxh:tlsportname2/proxy/: tls qux (200; 4.092185ms)
Aug 19 03:39:13.557: INFO: (13) /api/v1/namespaces/proxy-521/pods/https:proxy-service-69mxh-gd7sz:443/proxy/: testt... (200; 8.631348ms)
Aug 19 03:39:13.561: INFO: (13) /api/v1/namespaces/proxy-521/services/http:proxy-service-69mxh:portname2/proxy/: bar (200; 8.909012ms)
Aug 19 03:39:13.562: INFO: (13) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz/proxy/: test (200; 9.147591ms)
Aug 19 03:39:13.562: INFO: (13) /api/v1/namespaces/proxy-521/pods/http:proxy-service-69mxh-gd7sz:160/proxy/: foo (200; 9.337548ms)
Aug 19 03:39:13.562: INFO: (13) /api/v1/namespaces/proxy-521/services/http:proxy-service-69mxh:portname1/proxy/: foo (200; 9.857426ms)
Aug 19 03:39:13.562: INFO: (13) /api/v1/namespaces/proxy-521/pods/http:proxy-service-69mxh-gd7sz:162/proxy/: bar (200; 9.585631ms)
Aug 19 03:39:13.564: INFO: (13) /api/v1/namespaces/proxy-521/services/proxy-service-69mxh:portname2/proxy/: bar (200; 11.216468ms)
Aug 19 03:39:13.568: INFO: (14) /api/v1/namespaces/proxy-521/pods/https:proxy-service-69mxh-gd7sz:443/proxy/: test (200; 10.774645ms)
Aug 19 03:39:13.575: INFO: (14) /api/v1/namespaces/proxy-521/pods/http:proxy-service-69mxh-gd7sz:160/proxy/: foo (200; 10.712112ms)
Aug 19 03:39:13.576: INFO: (14) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:1080/proxy/: testt... (200; 11.519383ms)
Aug 19 03:39:13.576: INFO: (14) /api/v1/namespaces/proxy-521/services/proxy-service-69mxh:portname2/proxy/: bar (200; 11.637468ms)
Aug 19 03:39:13.583: INFO: (15) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:162/proxy/: bar (200; 6.463651ms)
Aug 19 03:39:13.583: INFO: (15) /api/v1/namespaces/proxy-521/pods/https:proxy-service-69mxh-gd7sz:460/proxy/: tls baz (200; 6.938112ms)
Aug 19 03:39:13.583: INFO: (15) /api/v1/namespaces/proxy-521/services/proxy-service-69mxh:portname2/proxy/: bar (200; 6.740635ms)
Aug 19 03:39:13.583: INFO: (15) /api/v1/namespaces/proxy-521/pods/http:proxy-service-69mxh-gd7sz:1080/proxy/: t... (200; 7.418165ms)
Aug 19 03:39:13.584: INFO: (15) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:1080/proxy/: testtest (200; 9.634922ms)
Aug 19 03:39:13.586: INFO: (15) /api/v1/namespaces/proxy-521/services/http:proxy-service-69mxh:portname1/proxy/: foo (200; 9.925731ms)
Aug 19 03:39:13.590: INFO: (16) /api/v1/namespaces/proxy-521/pods/http:proxy-service-69mxh-gd7sz:162/proxy/: bar (200; 3.583404ms)
Aug 19 03:39:13.591: INFO: (16) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:162/proxy/: bar (200; 4.646427ms)
Aug 19 03:39:13.591: INFO: (16) /api/v1/namespaces/proxy-521/pods/https:proxy-service-69mxh-gd7sz:462/proxy/: tls qux (200; 4.713051ms)
Aug 19 03:39:13.592: INFO: (16) /api/v1/namespaces/proxy-521/pods/https:proxy-service-69mxh-gd7sz:460/proxy/: tls baz (200; 5.680954ms)
Aug 19 03:39:13.593: INFO: (16) /api/v1/namespaces/proxy-521/services/https:proxy-service-69mxh:tlsportname2/proxy/: tls qux (200; 6.533342ms)
Aug 19 03:39:13.593: INFO: (16) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:1080/proxy/: testtest (200; 7.170852ms)
Aug 19 03:39:13.594: INFO: (16) /api/v1/namespaces/proxy-521/services/http:proxy-service-69mxh:portname2/proxy/: bar (200; 7.256803ms)
Aug 19 03:39:13.594: INFO: (16) /api/v1/namespaces/proxy-521/pods/http:proxy-service-69mxh-gd7sz:160/proxy/: foo (200; 7.353013ms)
Aug 19 03:39:13.594: INFO: (16) /api/v1/namespaces/proxy-521/pods/http:proxy-service-69mxh-gd7sz:1080/proxy/: t... (200; 7.669602ms)
Aug 19 03:39:13.594: INFO: (16) /api/v1/namespaces/proxy-521/services/proxy-service-69mxh:portname2/proxy/: bar (200; 7.88314ms)
Aug 19 03:39:13.595: INFO: (16) /api/v1/namespaces/proxy-521/services/https:proxy-service-69mxh:tlsportname1/proxy/: tls baz (200; 8.06533ms)
Aug 19 03:39:13.595: INFO: (16) /api/v1/namespaces/proxy-521/services/http:proxy-service-69mxh:portname1/proxy/: foo (200; 8.240141ms)
Aug 19 03:39:13.595: INFO: (16) /api/v1/namespaces/proxy-521/services/proxy-service-69mxh:portname1/proxy/: foo (200; 8.637226ms)
Aug 19 03:39:13.599: INFO: (17) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:1080/proxy/: testt... (200; 3.803369ms)
Aug 19 03:39:13.599: INFO: (17) /api/v1/namespaces/proxy-521/services/http:proxy-service-69mxh:portname2/proxy/: bar (200; 4.0297ms)
Aug 19 03:39:13.600: INFO: (17) /api/v1/namespaces/proxy-521/pods/https:proxy-service-69mxh-gd7sz:462/proxy/: tls qux (200; 4.071976ms)
Aug 19 03:39:13.601: INFO: (17) /api/v1/namespaces/proxy-521/pods/http:proxy-service-69mxh-gd7sz:160/proxy/: foo (200; 5.260523ms)
Aug 19 03:39:13.601: INFO: (17) /api/v1/namespaces/proxy-521/services/proxy-service-69mxh:portname1/proxy/: foo (200; 5.091471ms)
Aug 19 03:39:13.601: INFO: (17) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz/proxy/: test (200; 5.729021ms)
Aug 19 03:39:13.601: INFO: (17) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:160/proxy/: foo (200; 5.870988ms)
Aug 19 03:39:13.601: INFO: (17) /api/v1/namespaces/proxy-521/services/https:proxy-service-69mxh:tlsportname1/proxy/: tls baz (200; 6.152134ms)
Aug 19 03:39:13.602: INFO: (17) /api/v1/namespaces/proxy-521/pods/https:proxy-service-69mxh-gd7sz:443/proxy/: test (200; 5.285591ms)
Aug 19 03:39:13.609: INFO: (18) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:162/proxy/: bar (200; 5.603358ms)
Aug 19 03:39:13.609: INFO: (18) /api/v1/namespaces/proxy-521/pods/http:proxy-service-69mxh-gd7sz:162/proxy/: bar (200; 5.693188ms)
Aug 19 03:39:13.610: INFO: (18) /api/v1/namespaces/proxy-521/services/https:proxy-service-69mxh:tlsportname2/proxy/: tls qux (200; 6.221512ms)
Aug 19 03:39:13.610: INFO: (18) /api/v1/namespaces/proxy-521/pods/https:proxy-service-69mxh-gd7sz:460/proxy/: tls baz (200; 6.300858ms)
Aug 19 03:39:13.610: INFO: (18) /api/v1/namespaces/proxy-521/pods/https:proxy-service-69mxh-gd7sz:443/proxy/: t... (200; 6.516275ms)
Aug 19 03:39:13.610: INFO: (18) /api/v1/namespaces/proxy-521/services/proxy-service-69mxh:portname2/proxy/: bar (200; 6.673842ms)
Aug 19 03:39:13.610: INFO: (18) /api/v1/namespaces/proxy-521/services/http:proxy-service-69mxh:portname1/proxy/: foo (200; 6.850334ms)
Aug 19 03:39:13.611: INFO: (18) /api/v1/namespaces/proxy-521/pods/http:proxy-service-69mxh-gd7sz:160/proxy/: foo (200; 6.927952ms)
Aug 19 03:39:13.611: INFO: (18) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:160/proxy/: foo (200; 7.16571ms)
Aug 19 03:39:13.611: INFO: (18) /api/v1/namespaces/proxy-521/services/proxy-service-69mxh:portname1/proxy/: foo (200; 7.381051ms)
Aug 19 03:39:13.611: INFO: (18) /api/v1/namespaces/proxy-521/services/https:proxy-service-69mxh:tlsportname1/proxy/: tls baz (200; 7.515857ms)
Aug 19 03:39:13.611: INFO: (18) /api/v1/namespaces/proxy-521/pods/https:proxy-service-69mxh-gd7sz:462/proxy/: tls qux (200; 7.768318ms)
Aug 19 03:39:13.611: INFO: (18) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:1080/proxy/: testt... (200; 6.035172ms)
Aug 19 03:39:13.618: INFO: (19) /api/v1/namespaces/proxy-521/services/proxy-service-69mxh:portname2/proxy/: bar (200; 6.164009ms)
Aug 19 03:39:13.618: INFO: (19) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:162/proxy/: bar (200; 6.085518ms)
Aug 19 03:39:13.618: INFO: (19) /api/v1/namespaces/proxy-521/pods/http:proxy-service-69mxh-gd7sz:160/proxy/: foo (200; 6.12102ms)
Aug 19 03:39:13.618: INFO: (19) /api/v1/namespaces/proxy-521/pods/proxy-service-69mxh-gd7sz:1080/proxy/: testtest (200; 7.69874ms)
Aug 19 03:39:13.620: INFO: (19) /api/v1/namespaces/proxy-521/services/https:proxy-service-69mxh:tlsportname2/proxy/: tls qux (200; 8.437984ms)
Aug 19 03:39:13.621: INFO: (19) /api/v1/namespaces/proxy-521/services/proxy-service-69mxh:portname1/proxy/: foo (200; 8.739515ms)
STEP: deleting ReplicationController proxy-service-69mxh in namespace proxy-521, will wait for the garbage collector to delete the pods
Aug 19 03:39:13.680: INFO: Deleting ReplicationController proxy-service-69mxh took: 6.422313ms
Aug 19 03:39:13.981: INFO: Terminating ReplicationController proxy-service-69mxh pods took: 300.770928ms
[AfterEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:39:16.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-521" for this suite.
Aug 19 03:39:24.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:39:25.777: INFO: namespace proxy-521 deletion completed in 9.268941981s

• [SLOW TEST:20.713 seconds]
[sig-network] Proxy
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
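All 320 requests in the proxy test above go through the apiserver's proxy subresource; the logged paths follow /api/v1/namespaces/<ns>/pods/<scheme>:<pod>:<port>/proxy/<path> (and the .../services/... form for the service variants). One such GET can be issued by hand with client-go as sketched below; the pod name, port, and namespace are illustrative, and DoRaw taking a context assumes a recent client-go rather than the v1.15 vendored one.

// Sketch: a single proxied GET of the kind the test fires 320 times.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("",
		filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// GET /api/v1/namespaces/default/pods/http:my-pod:8080/proxy/
	body, err := client.CoreV1().RESTClient().Get().
		Namespace("default").
		Resource("pods").
		Name("http:my-pod:8080"). // <scheme>:<pod>:<port>, as in the logged paths
		SubResource("proxy").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // the test compares this body ("foo", "bar", ...)
}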
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:39:25.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Aug 19 03:39:49.521: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4323 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 03:39:49.522: INFO: >>> kubeConfig: /root/.kube/config
I0819 03:39:49.625419       7 log.go:172] (0x9489b90) (0x9489c00) Create stream
I0819 03:39:49.625580       7 log.go:172] (0x9489b90) (0x9489c00) Stream added, broadcasting: 1
I0819 03:39:49.629435       7 log.go:172] (0x9489b90) Reply frame received for 1
I0819 03:39:49.629612       7 log.go:172] (0x9489b90) (0x8bac1c0) Create stream
I0819 03:39:49.629706       7 log.go:172] (0x9489b90) (0x8bac1c0) Stream added, broadcasting: 3
I0819 03:39:49.631161       7 log.go:172] (0x9489b90) Reply frame received for 3
I0819 03:39:49.631335       7 log.go:172] (0x9489b90) (0x85faee0) Create stream
I0819 03:39:49.631426       7 log.go:172] (0x9489b90) (0x85faee0) Stream added, broadcasting: 5
I0819 03:39:49.632927       7 log.go:172] (0x9489b90) Reply frame received for 5
I0819 03:39:49.715669       7 log.go:172] (0x9489b90) Data frame received for 3
I0819 03:39:49.715859       7 log.go:172] (0x8bac1c0) (3) Data frame handling
I0819 03:39:49.715977       7 log.go:172] (0x8bac1c0) (3) Data frame sent
I0819 03:39:49.716098       7 log.go:172] (0x9489b90) Data frame received for 3
I0819 03:39:49.716252       7 log.go:172] (0x9489b90) Data frame received for 5
I0819 03:39:49.716572       7 log.go:172] (0x85faee0) (5) Data frame handling
I0819 03:39:49.716906       7 log.go:172] (0x8bac1c0) (3) Data frame handling
I0819 03:39:49.717684       7 log.go:172] (0x9489b90) Data frame received for 1
I0819 03:39:49.717846       7 log.go:172] (0x9489c00) (1) Data frame handling
I0819 03:39:49.718150       7 log.go:172] (0x9489c00) (1) Data frame sent
I0819 03:39:49.718342       7 log.go:172] (0x9489b90) (0x9489c00) Stream removed, broadcasting: 1
I0819 03:39:49.718550       7 log.go:172] (0x9489b90) Go away received
I0819 03:39:49.719116       7 log.go:172] (0x9489b90) (0x9489c00) Stream removed, broadcasting: 1
I0819 03:39:49.719333       7 log.go:172] (0x9489b90) (0x8bac1c0) Stream removed, broadcasting: 3
I0819 03:39:49.719565       7 log.go:172] (0x9489b90) (0x85faee0) Stream removed, broadcasting: 5
Aug 19 03:39:49.719: INFO: Exec stderr: ""
Aug 19 03:39:49.720: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4323 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 03:39:49.720: INFO: >>> kubeConfig: /root/.kube/config
I0819 03:39:49.816971       7 log.go:172] (0x95dc310) (0x95dd110) Create stream
I0819 03:39:49.817136       7 log.go:172] (0x95dc310) (0x95dd110) Stream added, broadcasting: 1
I0819 03:39:49.820716       7 log.go:172] (0x95dc310) Reply frame received for 1
I0819 03:39:49.821063       7 log.go:172] (0x95dc310) (0x93f1490) Create stream
I0819 03:39:49.821172       7 log.go:172] (0x95dc310) (0x93f1490) Stream added, broadcasting: 3
I0819 03:39:49.822535       7 log.go:172] (0x95dc310) Reply frame received for 3
I0819 03:39:49.822672       7 log.go:172] (0x95dc310) (0x8bac540) Create stream
I0819 03:39:49.822755       7 log.go:172] (0x95dc310) (0x8bac540) Stream added, broadcasting: 5
I0819 03:39:49.823856       7 log.go:172] (0x95dc310) Reply frame received for 5
I0819 03:39:49.883381       7 log.go:172] (0x95dc310) Data frame received for 5
I0819 03:39:49.883600       7 log.go:172] (0x8bac540) (5) Data frame handling
I0819 03:39:49.883764       7 log.go:172] (0x95dc310) Data frame received for 3
I0819 03:39:49.884010       7 log.go:172] (0x93f1490) (3) Data frame handling
I0819 03:39:49.884156       7 log.go:172] (0x95dc310) Data frame received for 1
I0819 03:39:49.884335       7 log.go:172] (0x95dd110) (1) Data frame handling
I0819 03:39:49.884498       7 log.go:172] (0x93f1490) (3) Data frame sent
I0819 03:39:49.884849       7 log.go:172] (0x95dc310) Data frame received for 3
I0819 03:39:49.885006       7 log.go:172] (0x93f1490) (3) Data frame handling
I0819 03:39:49.885146       7 log.go:172] (0x95dd110) (1) Data frame sent
I0819 03:39:49.885304       7 log.go:172] (0x95dc310) (0x95dd110) Stream removed, broadcasting: 1
I0819 03:39:49.885469       7 log.go:172] (0x95dc310) Go away received
I0819 03:39:49.885911       7 log.go:172] (0x95dc310) (0x95dd110) Stream removed, broadcasting: 1
I0819 03:39:49.886075       7 log.go:172] (0x95dc310) (0x93f1490) Stream removed, broadcasting: 3
I0819 03:39:49.886226       7 log.go:172] (0x95dc310) (0x8bac540) Stream removed, broadcasting: 5
Aug 19 03:39:49.886: INFO: Exec stderr: ""
Aug 19 03:39:49.886: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4323 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 03:39:49.887: INFO: >>> kubeConfig: /root/.kube/config
I0819 03:39:49.992958       7 log.go:172] (0x944f340) (0x944f500) Create stream
I0819 03:39:49.993111       7 log.go:172] (0x944f340) (0x944f500) Stream added, broadcasting: 1
I0819 03:39:49.996591       7 log.go:172] (0x944f340) Reply frame received for 1
I0819 03:39:49.996903       7 log.go:172] (0x944f340) (0x944f6c0) Create stream
I0819 03:39:49.997027       7 log.go:172] (0x944f340) (0x944f6c0) Stream added, broadcasting: 3
I0819 03:39:49.998597       7 log.go:172] (0x944f340) Reply frame received for 3
I0819 03:39:49.998707       7 log.go:172] (0x944f340) (0x944f880) Create stream
I0819 03:39:49.998773       7 log.go:172] (0x944f340) (0x944f880) Stream added, broadcasting: 5
I0819 03:39:50.000321       7 log.go:172] (0x944f340) Reply frame received for 5
I0819 03:39:50.059209       7 log.go:172] (0x944f340) Data frame received for 3
I0819 03:39:50.059420       7 log.go:172] (0x944f6c0) (3) Data frame handling
I0819 03:39:50.059535       7 log.go:172] (0x944f340) Data frame received for 5
I0819 03:39:50.059701       7 log.go:172] (0x944f880) (5) Data frame handling
I0819 03:39:50.060573       7 log.go:172] (0x944f340) Data frame received for 1
I0819 03:39:50.060661       7 log.go:172] (0x944f500) (1) Data frame handling
I0819 03:39:50.060799       7 log.go:172] (0x944f500) (1) Data frame sent
I0819 03:39:50.060882       7 log.go:172] (0x944f340) (0x944f500) Stream removed, broadcasting: 1
I0819 03:39:50.061281       7 log.go:172] (0x944f6c0) (3) Data frame sent
I0819 03:39:50.061391       7 log.go:172] (0x944f340) Data frame received for 3
I0819 03:39:50.061457       7 log.go:172] (0x944f6c0) (3) Data frame handling
I0819 03:39:50.061540       7 log.go:172] (0x944f340) Go away received
I0819 03:39:50.061907       7 log.go:172] (0x944f340) (0x944f500) Stream removed, broadcasting: 1
I0819 03:39:50.061992       7 log.go:172] (0x944f340) (0x944f6c0) Stream removed, broadcasting: 3
I0819 03:39:50.062065       7 log.go:172] (0x944f340) (0x944f880) Stream removed, broadcasting: 5
Aug 19 03:39:50.062: INFO: Exec stderr: ""
Aug 19 03:39:50.062: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4323 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 03:39:50.062: INFO: >>> kubeConfig: /root/.kube/config
I0819 03:39:50.150061       7 log.go:172] (0x8badab0) (0x8badb90) Create stream
I0819 03:39:50.150193       7 log.go:172] (0x8badab0) (0x8badb90) Stream added, broadcasting: 1
I0819 03:39:50.153580       7 log.go:172] (0x8badab0) Reply frame received for 1
I0819 03:39:50.153778       7 log.go:172] (0x8badab0) (0x8badc70) Create stream
I0819 03:39:50.153909       7 log.go:172] (0x8badab0) (0x8badc70) Stream added, broadcasting: 3
I0819 03:39:50.155494       7 log.go:172] (0x8badab0) Reply frame received for 3
I0819 03:39:50.155619       7 log.go:172] (0x8badab0) (0x9489c70) Create stream
I0819 03:39:50.155691       7 log.go:172] (0x8badab0) (0x9489c70) Stream added, broadcasting: 5
I0819 03:39:50.156969       7 log.go:172] (0x8badab0) Reply frame received for 5
I0819 03:39:50.235313       7 log.go:172] (0x8badab0) Data frame received for 3
I0819 03:39:50.235508       7 log.go:172] (0x8badc70) (3) Data frame handling
I0819 03:39:50.235658       7 log.go:172] (0x8badab0) Data frame received for 5
I0819 03:39:50.235935       7 log.go:172] (0x9489c70) (5) Data frame handling
I0819 03:39:50.236085       7 log.go:172] (0x8badc70) (3) Data frame sent
I0819 03:39:50.236231       7 log.go:172] (0x8badab0) Data frame received for 3
I0819 03:39:50.236363       7 log.go:172] (0x8badc70) (3) Data frame handling
I0819 03:39:50.236531       7 log.go:172] (0x8badab0) Data frame received for 1
I0819 03:39:50.236704       7 log.go:172] (0x8badb90) (1) Data frame handling
I0819 03:39:50.236985       7 log.go:172] (0x8badb90) (1) Data frame sent
I0819 03:39:50.237195       7 log.go:172] (0x8badab0) (0x8badb90) Stream removed, broadcasting: 1
I0819 03:39:50.237381       7 log.go:172] (0x8badab0) Go away received
I0819 03:39:50.237960       7 log.go:172] (0x8badab0) (0x8badb90) Stream removed, broadcasting: 1
I0819 03:39:50.238134       7 log.go:172] (0x8badab0) (0x8badc70) Stream removed, broadcasting: 3
I0819 03:39:50.238318       7 log.go:172] (0x8badab0) (0x9489c70) Stream removed, broadcasting: 5
Aug 19 03:39:50.238: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Aug 19 03:39:50.238: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4323 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 03:39:50.239: INFO: >>> kubeConfig: /root/.kube/config
I0819 03:39:50.329366       7 log.go:172] (0x8d642a0) (0x8d64380) Create stream
I0819 03:39:50.329616       7 log.go:172] (0x8d642a0) (0x8d64380) Stream added, broadcasting: 1
I0819 03:39:50.337540       7 log.go:172] (0x8d642a0) Reply frame received for 1
I0819 03:39:50.337836       7 log.go:172] (0x8d642a0) (0x83f0c40) Create stream
I0819 03:39:50.337954       7 log.go:172] (0x8d642a0) (0x83f0c40) Stream added, broadcasting: 3
I0819 03:39:50.339428       7 log.go:172] (0x8d642a0) Reply frame received for 3
I0819 03:39:50.339538       7 log.go:172] (0x8d642a0) (0x8d64460) Create stream
I0819 03:39:50.339596       7 log.go:172] (0x8d642a0) (0x8d64460) Stream added, broadcasting: 5
I0819 03:39:50.340685       7 log.go:172] (0x8d642a0) Reply frame received for 5
I0819 03:39:50.383444       7 log.go:172] (0x8d642a0) Data frame received for 3
I0819 03:39:50.383649       7 log.go:172] (0x83f0c40) (3) Data frame handling
I0819 03:39:50.383805       7 log.go:172] (0x8d642a0) Data frame received for 5
I0819 03:39:50.384033       7 log.go:172] (0x8d64460) (5) Data frame handling
I0819 03:39:50.384361       7 log.go:172] (0x83f0c40) (3) Data frame sent
I0819 03:39:50.384585       7 log.go:172] (0x8d642a0) Data frame received for 3
I0819 03:39:50.384773       7 log.go:172] (0x83f0c40) (3) Data frame handling
I0819 03:39:50.384936       7 log.go:172] (0x8d642a0) Data frame received for 1
I0819 03:39:50.385103       7 log.go:172] (0x8d64380) (1) Data frame handling
I0819 03:39:50.385240       7 log.go:172] (0x8d64380) (1) Data frame sent
I0819 03:39:50.385406       7 log.go:172] (0x8d642a0) (0x8d64380) Stream removed, broadcasting: 1
I0819 03:39:50.385568       7 log.go:172] (0x8d642a0) Go away received
I0819 03:39:50.386004       7 log.go:172] (0x8d642a0) (0x8d64380) Stream removed, broadcasting: 1
I0819 03:39:50.386160       7 log.go:172] (0x8d642a0) (0x83f0c40) Stream removed, broadcasting: 3
I0819 03:39:50.386276       7 log.go:172] (0x8d642a0) (0x8d64460) Stream removed, broadcasting: 5
Aug 19 03:39:50.386: INFO: Exec stderr: ""
Aug 19 03:39:50.386: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4323 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 03:39:50.386: INFO: >>> kubeConfig: /root/.kube/config
I0819 03:39:50.484331       7 log.go:172] (0x92400e0) (0x92401c0) Create stream
I0819 03:39:50.484478       7 log.go:172] (0x92400e0) (0x92401c0) Stream added, broadcasting: 1
I0819 03:39:50.491511       7 log.go:172] (0x92400e0) Reply frame received for 1
I0819 03:39:50.491760       7 log.go:172] (0x92400e0) (0x9258000) Create stream
I0819 03:39:50.491889       7 log.go:172] (0x92400e0) (0x9258000) Stream added, broadcasting: 3
I0819 03:39:50.494568       7 log.go:172] (0x92400e0) Reply frame received for 3
I0819 03:39:50.494738       7 log.go:172] (0x92400e0) (0x92580e0) Create stream
I0819 03:39:50.494892       7 log.go:172] (0x92400e0) (0x92580e0) Stream added, broadcasting: 5
I0819 03:39:50.496507       7 log.go:172] (0x92400e0) Reply frame received for 5
I0819 03:39:50.549067       7 log.go:172] (0x92400e0) Data frame received for 5
I0819 03:39:50.549223       7 log.go:172] (0x92580e0) (5) Data frame handling
I0819 03:39:50.549371       7 log.go:172] (0x92400e0) Data frame received for 3
I0819 03:39:50.549465       7 log.go:172] (0x9258000) (3) Data frame handling
I0819 03:39:50.549597       7 log.go:172] (0x9258000) (3) Data frame sent
I0819 03:39:50.549714       7 log.go:172] (0x92400e0) Data frame received for 3
I0819 03:39:50.549825       7 log.go:172] (0x9258000) (3) Data frame handling
I0819 03:39:50.550486       7 log.go:172] (0x92400e0) Data frame received for 1
I0819 03:39:50.550675       7 log.go:172] (0x92401c0) (1) Data frame handling
I0819 03:39:50.550856       7 log.go:172] (0x92401c0) (1) Data frame sent
I0819 03:39:50.551058       7 log.go:172] (0x92400e0) (0x92401c0) Stream removed, broadcasting: 1
I0819 03:39:50.551322       7 log.go:172] (0x92400e0) Go away received
I0819 03:39:50.551890       7 log.go:172] (0x92400e0) (0x92401c0) Stream removed, broadcasting: 1
I0819 03:39:50.552096       7 log.go:172] (0x92400e0) (0x9258000) Stream removed, broadcasting: 3
I0819 03:39:50.552196       7 log.go:172] (0x92400e0) (0x92580e0) Stream removed, broadcasting: 5
Aug 19 03:39:50.552: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Aug 19 03:39:50.552: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4323 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 03:39:50.552: INFO: >>> kubeConfig: /root/.kube/config
I0819 03:39:50.668941       7 log.go:172] (0x7f560e0) (0x7f56150) Create stream
I0819 03:39:50.669096       7 log.go:172] (0x7f560e0) (0x7f56150) Stream added, broadcasting: 1
I0819 03:39:50.675665       7 log.go:172] (0x7f560e0) Reply frame received for 1
I0819 03:39:50.675818       7 log.go:172] (0x7f560e0) (0x9489ce0) Create stream
I0819 03:39:50.675888       7 log.go:172] (0x7f560e0) (0x9489ce0) Stream added, broadcasting: 3
I0819 03:39:50.677281       7 log.go:172] (0x7f560e0) Reply frame received for 3
I0819 03:39:50.677385       7 log.go:172] (0x7f560e0) (0x7f561c0) Create stream
I0819 03:39:50.677459       7 log.go:172] (0x7f560e0) (0x7f561c0) Stream added, broadcasting: 5
I0819 03:39:50.678632       7 log.go:172] (0x7f560e0) Reply frame received for 5
I0819 03:39:50.742346       7 log.go:172] (0x7f560e0) Data frame received for 5
I0819 03:39:50.742516       7 log.go:172] (0x7f561c0) (5) Data frame handling
I0819 03:39:50.742655       7 log.go:172] (0x7f560e0) Data frame received for 3
I0819 03:39:50.742785       7 log.go:172] (0x9489ce0) (3) Data frame handling
I0819 03:39:50.742921       7 log.go:172] (0x9489ce0) (3) Data frame sent
I0819 03:39:50.743035       7 log.go:172] (0x7f560e0) Data frame received for 3
I0819 03:39:50.743138       7 log.go:172] (0x9489ce0) (3) Data frame handling
I0819 03:39:50.743801       7 log.go:172] (0x7f560e0) Data frame received for 1
I0819 03:39:50.743916       7 log.go:172] (0x7f56150) (1) Data frame handling
I0819 03:39:50.744022       7 log.go:172] (0x7f56150) (1) Data frame sent
I0819 03:39:50.744135       7 log.go:172] (0x7f560e0) (0x7f56150) Stream removed, broadcasting: 1
I0819 03:39:50.744270       7 log.go:172] (0x7f560e0) Go away received
I0819 03:39:50.744638       7 log.go:172] (0x7f560e0) (0x7f56150) Stream removed, broadcasting: 1
I0819 03:39:50.744879       7 log.go:172] (0x7f560e0) (0x9489ce0) Stream removed, broadcasting: 3
I0819 03:39:50.744987       7 log.go:172] (0x7f560e0) (0x7f561c0) Stream removed, broadcasting: 5
Aug 19 03:39:50.745: INFO: Exec stderr: ""
Aug 19 03:39:50.745: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4323 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 03:39:50.745: INFO: >>> kubeConfig: /root/.kube/config
I0819 03:39:50.840122       7 log.go:172] (0x92584d0) (0x9258540) Create stream
I0819 03:39:50.840240       7 log.go:172] (0x92584d0) (0x9258540) Stream added, broadcasting: 1
I0819 03:39:50.843749       7 log.go:172] (0x92584d0) Reply frame received for 1
I0819 03:39:50.844000       7 log.go:172] (0x92584d0) (0x92585b0) Create stream
I0819 03:39:50.844148       7 log.go:172] (0x92584d0) (0x92585b0) Stream added, broadcasting: 3
I0819 03:39:50.846402       7 log.go:172] (0x92584d0) Reply frame received for 3
I0819 03:39:50.846590       7 log.go:172] (0x92584d0) (0x9489d50) Create stream
I0819 03:39:50.846698       7 log.go:172] (0x92584d0) (0x9489d50) Stream added, broadcasting: 5
I0819 03:39:50.848621       7 log.go:172] (0x92584d0) Reply frame received for 5
I0819 03:39:50.933326       7 log.go:172] (0x92584d0) Data frame received for 5
I0819 03:39:50.933536       7 log.go:172] (0x9489d50) (5) Data frame handling
I0819 03:39:50.933667       7 log.go:172] (0x92584d0) Data frame received for 3
I0819 03:39:50.933775       7 log.go:172] (0x92585b0) (3) Data frame handling
I0819 03:39:50.933911       7 log.go:172] (0x92585b0) (3) Data frame sent
I0819 03:39:50.934013       7 log.go:172] (0x92584d0) Data frame received for 3
I0819 03:39:50.934105       7 log.go:172] (0x92585b0) (3) Data frame handling
I0819 03:39:50.934664       7 log.go:172] (0x92584d0) Data frame received for 1
I0819 03:39:50.934799       7 log.go:172] (0x9258540) (1) Data frame handling
I0819 03:39:50.934927       7 log.go:172] (0x9258540) (1) Data frame sent
I0819 03:39:50.935049       7 log.go:172] (0x92584d0) (0x9258540) Stream removed, broadcasting: 1
I0819 03:39:50.935187       7 log.go:172] (0x92584d0) Go away received
I0819 03:39:50.935645       7 log.go:172] (0x92584d0) (0x9258540) Stream removed, broadcasting: 1
I0819 03:39:50.935807       7 log.go:172] (0x92584d0) (0x92585b0) Stream removed, broadcasting: 3
I0819 03:39:50.935936       7 log.go:172] (0x92584d0) (0x9489d50) Stream removed, broadcasting: 5
Aug 19 03:39:50.936: INFO: Exec stderr: ""
Aug 19 03:39:50.936: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4323 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 03:39:50.936: INFO: >>> kubeConfig: /root/.kube/config
I0819 03:39:51.022338       7 log.go:172] (0x83d4000) (0x83d4070) Create stream
I0819 03:39:51.022489       7 log.go:172] (0x83d4000) (0x83d4070) Stream added, broadcasting: 1
I0819 03:39:51.026593       7 log.go:172] (0x83d4000) Reply frame received for 1
I0819 03:39:51.026777       7 log.go:172] (0x83d4000) (0x83d40e0) Create stream
I0819 03:39:51.026884       7 log.go:172] (0x83d4000) (0x83d40e0) Stream added, broadcasting: 3
I0819 03:39:51.028347       7 log.go:172] (0x83d4000) Reply frame received for 3
I0819 03:39:51.028572       7 log.go:172] (0x83d4000) (0x8d64540) Create stream
I0819 03:39:51.028670       7 log.go:172] (0x83d4000) (0x8d64540) Stream added, broadcasting: 5
I0819 03:39:51.030295       7 log.go:172] (0x83d4000) Reply frame received for 5
I0819 03:39:51.087393       7 log.go:172] (0x83d4000) Data frame received for 5
I0819 03:39:51.087554       7 log.go:172] (0x8d64540) (5) Data frame handling
I0819 03:39:51.087624       7 log.go:172] (0x83d4000) Data frame received for 3
I0819 03:39:51.087702       7 log.go:172] (0x83d40e0) (3) Data frame handling
I0819 03:39:51.087784       7 log.go:172] (0x83d40e0) (3) Data frame sent
I0819 03:39:51.087866       7 log.go:172] (0x83d4000) Data frame received for 3
I0819 03:39:51.088005       7 log.go:172] (0x83d40e0) (3) Data frame handling
I0819 03:39:51.088243       7 log.go:172] (0x83d4000) Data frame received for 1
I0819 03:39:51.088347       7 log.go:172] (0x83d4070) (1) Data frame handling
I0819 03:39:51.088474       7 log.go:172] (0x83d4070) (1) Data frame sent
I0819 03:39:51.088575       7 log.go:172] (0x83d4000) (0x83d4070) Stream removed, broadcasting: 1
I0819 03:39:51.088682       7 log.go:172] (0x83d4000) Go away received
I0819 03:39:51.089101       7 log.go:172] (0x83d4000) (0x83d4070) Stream removed, broadcasting: 1
I0819 03:39:51.089241       7 log.go:172] (0x83d4000) (0x83d40e0) Stream removed, broadcasting: 3
I0819 03:39:51.089321       7 log.go:172] (0x83d4000) (0x8d64540) Stream removed, broadcasting: 5
Aug 19 03:39:51.089: INFO: Exec stderr: ""
Aug 19 03:39:51.089: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4323 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 03:39:51.089: INFO: >>> kubeConfig: /root/.kube/config
I0819 03:39:51.190902       7 log.go:172] (0x8d64b60) (0x8d64c40) Create stream
I0819 03:39:51.191066       7 log.go:172] (0x8d64b60) (0x8d64c40) Stream added, broadcasting: 1
I0819 03:39:51.194513       7 log.go:172] (0x8d64b60) Reply frame received for 1
I0819 03:39:51.194651       7 log.go:172] (0x8d64b60) (0x9489dc0) Create stream
I0819 03:39:51.194726       7 log.go:172] (0x8d64b60) (0x9489dc0) Stream added, broadcasting: 3
I0819 03:39:51.195993       7 log.go:172] (0x8d64b60) Reply frame received for 3
I0819 03:39:51.196103       7 log.go:172] (0x8d64b60) (0x9489e30) Create stream
I0819 03:39:51.196167       7 log.go:172] (0x8d64b60) (0x9489e30) Stream added, broadcasting: 5
I0819 03:39:51.197358       7 log.go:172] (0x8d64b60) Reply frame received for 5
I0819 03:39:51.261669       7 log.go:172] (0x8d64b60) Data frame received for 3
I0819 03:39:51.261854       7 log.go:172] (0x9489dc0) (3) Data frame handling
I0819 03:39:51.261984       7 log.go:172] (0x8d64b60) Data frame received for 5
I0819 03:39:51.262162       7 log.go:172] (0x9489e30) (5) Data frame handling
I0819 03:39:51.262416       7 log.go:172] (0x9489dc0) (3) Data frame sent
I0819 03:39:51.262524       7 log.go:172] (0x8d64b60) Data frame received for 1
I0819 03:39:51.262621       7 log.go:172] (0x8d64c40) (1) Data frame handling
I0819 03:39:51.262746       7 log.go:172] (0x8d64c40) (1) Data frame sent
I0819 03:39:51.262873       7 log.go:172] (0x8d64b60) (0x8d64c40) Stream removed, broadcasting: 1
I0819 03:39:51.263096       7 log.go:172] (0x8d64b60) Data frame received for 3
I0819 03:39:51.263316       7 log.go:172] (0x9489dc0) (3) Data frame handling
I0819 03:39:51.263457       7 log.go:172] (0x8d64b60) Go away received
I0819 03:39:51.263611       7 log.go:172] (0x8d64b60) (0x8d64c40) Stream removed, broadcasting: 1
I0819 03:39:51.263763       7 log.go:172] (0x8d64b60) (0x9489dc0) Stream removed, broadcasting: 3
I0819 03:39:51.263972       7 log.go:172] (0x8d64b60) (0x9489e30) Stream removed, broadcasting: 5
Aug 19 03:39:51.264: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:39:51.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-4323" for this suite.
Aug 19 03:40:37.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:40:37.425: INFO: namespace e2e-kubelet-etc-hosts-4323 deletion completed in 46.134958306s

• [SLOW TEST:71.646 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
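Each ExecWithOptions entry in the /etc/hosts test above, with its "Create stream ... broadcasting: 1 / 3 / 5" lines, is one pod exec: the numbered SPDY streams are the multiplexed channels (error, stdout, stderr here, since no stdin was attached) of a single exec connection. A sketch reproducing one "cat /etc/hosts" exec with client-go's remotecommand package; pod, container, and namespace names are illustrative, and a recent client-go is assumed.

// Sketch: pod exec over SPDY, as the test's ExecWithOptions does internally.
package main

import (
	"bytes"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
	"k8s.io/client-go/util/homedir"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("",
		filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	req := client.CoreV1().RESTClient().Post().
		Namespace("default").Resource("pods").Name("test-pod").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "busybox-1",
			Command:   []string{"cat", "/etc/hosts"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)
	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	// Stream multiplexes the error/stdout/stderr channels over one connection,
	// the traffic the log.go:172 lines above are tracing.
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Print(stdout.String())
}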
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:40:37.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-z49z
STEP: Creating a pod to test atomic-volume-subpath
Aug 19 03:40:37.635: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-z49z" in namespace "subpath-6749" to be "success or failure"
Aug 19 03:40:37.700: INFO: Pod "pod-subpath-test-projected-z49z": Phase="Pending", Reason="", readiness=false. Elapsed: 65.590937ms
Aug 19 03:40:39.742: INFO: Pod "pod-subpath-test-projected-z49z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107404711s
Aug 19 03:40:41.748: INFO: Pod "pod-subpath-test-projected-z49z": Phase="Running", Reason="", readiness=true. Elapsed: 4.112881023s
Aug 19 03:40:43.753: INFO: Pod "pod-subpath-test-projected-z49z": Phase="Running", Reason="", readiness=true. Elapsed: 6.117984207s
Aug 19 03:40:45.760: INFO: Pod "pod-subpath-test-projected-z49z": Phase="Running", Reason="", readiness=true. Elapsed: 8.124613727s
Aug 19 03:40:47.767: INFO: Pod "pod-subpath-test-projected-z49z": Phase="Running", Reason="", readiness=true. Elapsed: 10.132024092s
Aug 19 03:40:49.773: INFO: Pod "pod-subpath-test-projected-z49z": Phase="Running", Reason="", readiness=true. Elapsed: 12.137758844s
Aug 19 03:40:51.779: INFO: Pod "pod-subpath-test-projected-z49z": Phase="Running", Reason="", readiness=true. Elapsed: 14.14368763s
Aug 19 03:40:53.784: INFO: Pod "pod-subpath-test-projected-z49z": Phase="Running", Reason="", readiness=true. Elapsed: 16.148732073s
Aug 19 03:40:55.790: INFO: Pod "pod-subpath-test-projected-z49z": Phase="Running", Reason="", readiness=true. Elapsed: 18.155034895s
Aug 19 03:40:57.795: INFO: Pod "pod-subpath-test-projected-z49z": Phase="Running", Reason="", readiness=true. Elapsed: 20.160395158s
Aug 19 03:40:59.801: INFO: Pod "pod-subpath-test-projected-z49z": Phase="Running", Reason="", readiness=true. Elapsed: 22.166141461s
Aug 19 03:41:01.806: INFO: Pod "pod-subpath-test-projected-z49z": Phase="Running", Reason="", readiness=true. Elapsed: 24.171025083s
Aug 19 03:41:03.812: INFO: Pod "pod-subpath-test-projected-z49z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.17716544s
STEP: Saw pod success
Aug 19 03:41:03.812: INFO: Pod "pod-subpath-test-projected-z49z" satisfied condition "success or failure"
Aug 19 03:41:03.856: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-projected-z49z container test-container-subpath-projected-z49z: 
STEP: delete the pod
Aug 19 03:41:03.930: INFO: Waiting for pod pod-subpath-test-projected-z49z to disappear
Aug 19 03:41:04.005: INFO: Pod pod-subpath-test-projected-z49z no longer exists
STEP: Deleting pod pod-subpath-test-projected-z49z
Aug 19 03:41:04.005: INFO: Deleting pod "pod-subpath-test-projected-z49z" in namespace "subpath-6749"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:41:04.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6749" for this suite.
Aug 19 03:41:12.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:41:12.842: INFO: namespace subpath-6749 deletion completed in 8.82598469s

• [SLOW TEST:35.414 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
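The subpath spec above mounts a single entry of an atomic-writer (projected) volume via subPath and checks that the file stays readable across the 26 seconds the pod runs, while the volume contents are updated atomically underneath. A rough Go sketch of the volume/mount shape; the volume name, ConfigMap name, and paths are illustrative assumptions:

package sketch

import corev1 "k8s.io/api/core/v1"

// projectedSubpathMount wires a projected volume into a container at a subPath,
// so the container sees one entry of the volume rather than the whole directory.
func projectedSubpathMount() (corev1.Volume, corev1.VolumeMount) {
	vol := corev1.Volume{
		Name: "projected-vol",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "subpath-data"}, // illustrative name
					},
				}},
			},
		},
	}
	mount := corev1.VolumeMount{
		Name:      "projected-vol",
		MountPath: "/test-volume",
		SubPath:   "projected-file", // mount only this entry of the projected volume
	}
	return vol, mount
}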
SS
------------------------------
[k8s.io] Probing container 
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:41:12.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-eb327d78-c44b-4d9b-a7cd-c36b2c9df7cd in namespace container-probe-215
Aug 19 03:41:19.527: INFO: Started pod busybox-eb327d78-c44b-4d9b-a7cd-c36b2c9df7cd in namespace container-probe-215
STEP: checking the pod's current state and verifying that restartCount is present
Aug 19 03:41:19.530: INFO: Initial restart count of pod busybox-eb327d78-c44b-4d9b-a7cd-c36b2c9df7cd is 0
Aug 19 03:42:16.497: INFO: Restart count of pod container-probe-215/busybox-eb327d78-c44b-4d9b-a7cd-c36b2c9df7cd is now 1 (56.966940445s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:42:16.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-215" for this suite.
Aug 19 03:42:22.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:42:22.755: INFO: namespace container-probe-215 deletion completed in 6.132832716s

• [SLOW TEST:69.912 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
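For reference, the probe shape this spec exercises: the container creates /tmp/health, removes it after a few seconds, and the exec liveness probe running `cat /tmp/health` then fails, so the kubelet restarts the container (the restart count going 0 to 1 in the log above). A hedged Go sketch against the v1.15-era API, where Probe still embeds Handler; the image, command, and timings are illustrative:

package sketch

import corev1 "k8s.io/api/core/v1"

// livenessContainer is healthy while /tmp/health exists and is restarted
// once the file disappears and the exec probe starts failing.
func livenessContainer() corev1.Container {
	return corev1.Container{
		Name:    "busybox",
		Image:   "busybox:1.29",
		Command: []string{"sh", "-c", "echo ok >/tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
		LivenessProbe: &corev1.Probe{
			Handler: corev1.Handler{ // renamed ProbeHandler in client-go >= v0.23
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
			},
			InitialDelaySeconds: 15,
			PeriodSeconds:       5,
			FailureThreshold:    1,
		},
	}
}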
SSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:42:22.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Aug 19 03:42:22.823: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:42:32.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2322" for this suite.
Aug 19 03:42:56.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:42:56.441: INFO: namespace init-container-2322 deletion completed in 24.14043378s

• [SLOW TEST:33.686 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
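The init-container specs in this run (this RestartAlways entry and the RestartNever one further below) hinge on ordering: init containers run one at a time to completion before any app container starts. A minimal Go sketch of the pod shape; names and image are illustrative assumptions:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// initContainerPod runs two init containers to completion, in order, before
// the app container starts. With RestartPolicyNever, a failing init container
// marks the whole pod Failed and the app container never runs.
func initContainerPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox:1.29", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "busybox:1.29", Command: []string{"sleep", "3600"}},
			},
		},
	}
}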
SSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:42:56.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Aug 19 03:42:56.538: INFO: Waiting up to 5m0s for pod "var-expansion-898608c3-d897-477f-af93-3be421ace0df" in namespace "var-expansion-5926" to be "success or failure"
Aug 19 03:42:56.624: INFO: Pod "var-expansion-898608c3-d897-477f-af93-3be421ace0df": Phase="Pending", Reason="", readiness=false. Elapsed: 85.838256ms
Aug 19 03:42:58.630: INFO: Pod "var-expansion-898608c3-d897-477f-af93-3be421ace0df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091562899s
Aug 19 03:43:00.635: INFO: Pod "var-expansion-898608c3-d897-477f-af93-3be421ace0df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096783315s
Aug 19 03:43:02.640: INFO: Pod "var-expansion-898608c3-d897-477f-af93-3be421ace0df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.101513941s
STEP: Saw pod success
Aug 19 03:43:02.640: INFO: Pod "var-expansion-898608c3-d897-477f-af93-3be421ace0df" satisfied condition "success or failure"
Aug 19 03:43:02.643: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-898608c3-d897-477f-af93-3be421ace0df container dapi-container: 
STEP: delete the pod
Aug 19 03:43:02.701: INFO: Waiting for pod var-expansion-898608c3-d897-477f-af93-3be421ace0df to disappear
Aug 19 03:43:02.712: INFO: Pod var-expansion-898608c3-d897-477f-af93-3be421ace0df no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:43:02.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5926" for this suite.
Aug 19 03:43:08.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:43:08.887: INFO: namespace var-expansion-5926 deletion completed in 6.169267337s

• [SLOW TEST:12.445 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
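The variable-expansion spec relies on the kubelet, not a shell, substituting $(VAR) references in a container's command from the container's own environment. A short Go sketch; the variable name and message are illustrative assumptions:

package sketch

import corev1 "k8s.io/api/core/v1"

// expansionContainer shows $(MESSAGE) being expanded by the kubelet when the
// command is materialized; in shell syntax $(MESSAGE) would be command
// substitution, but the kubelet rewrites it before the shell ever runs.
func expansionContainer() corev1.Container {
	return corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox:1.29",
		Env:     []corev1.EnvVar{{Name: "MESSAGE", Value: "hello from the environment"}},
		Command: []string{"sh", "-c", "echo $(MESSAGE)"},
	}
}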
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:43:08.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Aug 19 03:43:08.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3172'
Aug 19 03:43:13.191: INFO: stderr: ""
Aug 19 03:43:13.191: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 19 03:43:13.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3172'
Aug 19 03:43:14.370: INFO: stderr: ""
Aug 19 03:43:14.370: INFO: stdout: "update-demo-nautilus-fmg68 update-demo-nautilus-nsk6p "
Aug 19 03:43:14.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fmg68 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3172'
Aug 19 03:43:15.445: INFO: stderr: ""
Aug 19 03:43:15.445: INFO: stdout: ""
Aug 19 03:43:15.445: INFO: update-demo-nautilus-fmg68 is created but not running
Aug 19 03:43:20.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3172'
Aug 19 03:43:21.647: INFO: stderr: ""
Aug 19 03:43:21.647: INFO: stdout: "update-demo-nautilus-fmg68 update-demo-nautilus-nsk6p "
Aug 19 03:43:21.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fmg68 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3172'
Aug 19 03:43:22.754: INFO: stderr: ""
Aug 19 03:43:22.754: INFO: stdout: "true"
Aug 19 03:43:22.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fmg68 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3172'
Aug 19 03:43:23.841: INFO: stderr: ""
Aug 19 03:43:23.841: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 19 03:43:23.841: INFO: validating pod update-demo-nautilus-fmg68
Aug 19 03:43:23.847: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 19 03:43:23.847: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 19 03:43:23.847: INFO: update-demo-nautilus-fmg68 is verified up and running
Aug 19 03:43:23.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nsk6p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3172'
Aug 19 03:43:24.997: INFO: stderr: ""
Aug 19 03:43:24.997: INFO: stdout: "true"
Aug 19 03:43:24.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nsk6p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3172'
Aug 19 03:43:26.115: INFO: stderr: ""
Aug 19 03:43:26.115: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 19 03:43:26.115: INFO: validating pod update-demo-nautilus-nsk6p
Aug 19 03:43:26.120: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 19 03:43:26.120: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 19 03:43:26.120: INFO: update-demo-nautilus-nsk6p is verified up and running
STEP: using delete to clean up resources
Aug 19 03:43:26.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3172'
Aug 19 03:43:27.327: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 19 03:43:27.328: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 19 03:43:27.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3172'
Aug 19 03:43:28.900: INFO: stderr: "No resources found.\n"
Aug 19 03:43:28.900: INFO: stdout: ""
Aug 19 03:43:28.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3172 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 19 03:43:30.529: INFO: stderr: ""
Aug 19 03:43:30.529: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:43:30.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3172" for this suite.
Aug 19 03:43:39.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:43:39.186: INFO: namespace kubectl-3172 deletion completed in 8.647308538s

• [SLOW TEST:30.298 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
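The Update Demo entry drives everything through the kubectl binary and Go templates rather than the API client; the polling loop above re-runs the same `kubectl get pods -o template` commands until both replicas report a running update-demo container. A small Go sketch of that shelling-out pattern, standard library only; the flags and label are taken from the logged command, the function name is illustrative:

package sketch

import (
	"os/exec"
	"strings"
)

// updateDemoPodNames mirrors the logged kubectl invocation: list pods carrying
// the name=update-demo label and print just their names via a Go template.
func updateDemoPodNames(kubeconfig, namespace string) ([]string, error) {
	out, err := exec.Command("kubectl",
		"--kubeconfig="+kubeconfig,
		"get", "pods",
		"-o", "template",
		"--template={{range .items}}{{.metadata.name}} {{end}}",
		"-l", "name=update-demo",
		"--namespace="+namespace,
	).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}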
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:43:39.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 19 03:43:39.268: INFO: Waiting up to 5m0s for pod "downwardapi-volume-beb7ec00-52f2-4344-96d9-838ebec30531" in namespace "projected-3639" to be "success or failure"
Aug 19 03:43:39.278: INFO: Pod "downwardapi-volume-beb7ec00-52f2-4344-96d9-838ebec30531": Phase="Pending", Reason="", readiness=false. Elapsed: 8.816894ms
Aug 19 03:43:41.282: INFO: Pod "downwardapi-volume-beb7ec00-52f2-4344-96d9-838ebec30531": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013171061s
Aug 19 03:43:43.288: INFO: Pod "downwardapi-volume-beb7ec00-52f2-4344-96d9-838ebec30531": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019017648s
STEP: Saw pod success
Aug 19 03:43:43.288: INFO: Pod "downwardapi-volume-beb7ec00-52f2-4344-96d9-838ebec30531" satisfied condition "success or failure"
Aug 19 03:43:43.293: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-beb7ec00-52f2-4344-96d9-838ebec30531 container client-container: 
STEP: delete the pod
Aug 19 03:43:43.313: INFO: Waiting for pod downwardapi-volume-beb7ec00-52f2-4344-96d9-838ebec30531 to disappear
Aug 19 03:43:43.316: INFO: Pod downwardapi-volume-beb7ec00-52f2-4344-96d9-838ebec30531 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:43:43.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3639" for this suite.
Aug 19 03:43:49.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:43:49.478: INFO: namespace projected-3639 deletion completed in 6.151999008s

• [SLOW TEST:10.290 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
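This downward-API spec projects a resource field into a volume file; the wrinkle is the fallback rule it asserts: when the container declares no CPU limit, `limits.cpu` resolves to the node's allocatable CPU. A hedged Go sketch of the volume item (the container name, path, and divisor are illustrative assumptions):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// cpuLimitVolume projects the container's CPU limit into a file in the volume.
// If the named container sets no limit, the kubelet substitutes the node's
// allocatable CPU, which is what this spec checks.
func cpuLimitVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "cpu_limit",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "limits.cpu",
						Divisor:       resource.MustParse("1m"), // report the value in millicores
					},
				}},
			},
		},
	}
}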
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:43:49.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 19 03:43:49.575: INFO: Waiting up to 5m0s for pod "downwardapi-volume-50c72175-d7e0-4bae-b33f-a345794f8448" in namespace "projected-4139" to be "success or failure"
Aug 19 03:43:49.598: INFO: Pod "downwardapi-volume-50c72175-d7e0-4bae-b33f-a345794f8448": Phase="Pending", Reason="", readiness=false. Elapsed: 23.066431ms
Aug 19 03:43:51.603: INFO: Pod "downwardapi-volume-50c72175-d7e0-4bae-b33f-a345794f8448": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028073318s
Aug 19 03:43:53.609: INFO: Pod "downwardapi-volume-50c72175-d7e0-4bae-b33f-a345794f8448": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034256981s
STEP: Saw pod success
Aug 19 03:43:53.609: INFO: Pod "downwardapi-volume-50c72175-d7e0-4bae-b33f-a345794f8448" satisfied condition "success or failure"
Aug 19 03:43:53.614: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-50c72175-d7e0-4bae-b33f-a345794f8448 container client-container: 
STEP: delete the pod
Aug 19 03:43:53.630: INFO: Waiting for pod downwardapi-volume-50c72175-d7e0-4bae-b33f-a345794f8448 to disappear
Aug 19 03:43:53.634: INFO: Pod downwardapi-volume-50c72175-d7e0-4bae-b33f-a345794f8448 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:43:53.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4139" for this suite.
Aug 19 03:43:59.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:43:59.769: INFO: namespace projected-4139 deletion completed in 6.128744973s

• [SLOW TEST:10.289 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:43:59.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 19 03:43:59.840: INFO: Waiting up to 5m0s for pod "pod-681768cb-b37c-4f03-b501-70ee7e00f12f" in namespace "emptydir-5080" to be "success or failure"
Aug 19 03:43:59.846: INFO: Pod "pod-681768cb-b37c-4f03-b501-70ee7e00f12f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.498757ms
Aug 19 03:44:01.859: INFO: Pod "pod-681768cb-b37c-4f03-b501-70ee7e00f12f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018531101s
Aug 19 03:44:03.866: INFO: Pod "pod-681768cb-b37c-4f03-b501-70ee7e00f12f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025409093s
STEP: Saw pod success
Aug 19 03:44:03.866: INFO: Pod "pod-681768cb-b37c-4f03-b501-70ee7e00f12f" satisfied condition "success or failure"
Aug 19 03:44:03.871: INFO: Trying to get logs from node iruya-worker2 pod pod-681768cb-b37c-4f03-b501-70ee7e00f12f container test-container: 
STEP: delete the pod
Aug 19 03:44:03.907: INFO: Waiting for pod pod-681768cb-b37c-4f03-b501-70ee7e00f12f to disappear
Aug 19 03:44:03.918: INFO: Pod pod-681768cb-b37c-4f03-b501-70ee7e00f12f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:44:03.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5080" for this suite.
Aug 19 03:44:09.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:44:10.055: INFO: namespace emptydir-5080 deletion completed in 6.12435242s

• [SLOW TEST:10.284 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:44:10.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 19 03:44:10.411: INFO: Waiting up to 5m0s for pod "pod-9a2e6bad-2101-43fe-b91d-ce8439eb9f25" in namespace "emptydir-9438" to be "success or failure"
Aug 19 03:44:10.423: INFO: Pod "pod-9a2e6bad-2101-43fe-b91d-ce8439eb9f25": Phase="Pending", Reason="", readiness=false. Elapsed: 11.421009ms
Aug 19 03:44:12.428: INFO: Pod "pod-9a2e6bad-2101-43fe-b91d-ce8439eb9f25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016323609s
Aug 19 03:44:14.481: INFO: Pod "pod-9a2e6bad-2101-43fe-b91d-ce8439eb9f25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069994708s
STEP: Saw pod success
Aug 19 03:44:14.481: INFO: Pod "pod-9a2e6bad-2101-43fe-b91d-ce8439eb9f25" satisfied condition "success or failure"
Aug 19 03:44:14.485: INFO: Trying to get logs from node iruya-worker2 pod pod-9a2e6bad-2101-43fe-b91d-ce8439eb9f25 container test-container: 
STEP: delete the pod
Aug 19 03:44:14.519: INFO: Waiting for pod pod-9a2e6bad-2101-43fe-b91d-ce8439eb9f25 to disappear
Aug 19 03:44:14.530: INFO: Pod pod-9a2e6bad-2101-43fe-b91d-ce8439eb9f25 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:44:14.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9438" for this suite.
Aug 19 03:44:20.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:44:20.655: INFO: namespace emptydir-9438 deletion completed in 6.11545908s

• [SLOW TEST:10.599 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
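The (non-root,0666,default) and (root,0666,tmpfs) triples in the two EmptyDir entries above encode the user the test container runs as, the file mode it writes, and the emptyDir medium. A Go sketch of the knobs involved; the pod name, image, command, and UID values are illustrative assumptions:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirPod mounts an emptyDir volume; medium "" is node-default disk,
// corev1.StorageMediumMemory ("Memory") backs the volume with tmpfs.
func emptyDirPod(medium corev1.StorageMedium, runAsUser int64) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name:         "test-volume",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{Medium: medium}},
			}},
			Containers: []corev1.Container{{
				Name:            "test-container",
				Image:           "busybox:1.29",
				Command:         []string{"sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &runAsUser}, // e.g. 0 for root, 1001 for non-root
				VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
}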
SSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:44:20.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Aug 19 03:44:20.775: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:44:26.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5222" for this suite.
Aug 19 03:44:32.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:44:32.924: INFO: namespace init-container-5222 deletion completed in 6.135049667s

• [SLOW TEST:12.268 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:44:32.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Aug 19 03:44:33.113: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9582,SelfLink:/api/v1/namespaces/watch-9582/configmaps/e2e-watch-test-label-changed,UID:1509a7e0-e0fc-4963-88c4-44b20ec95968,ResourceVersion:977952,Generation:0,CreationTimestamp:2020-08-19 03:44:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 19 03:44:33.114: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9582,SelfLink:/api/v1/namespaces/watch-9582/configmaps/e2e-watch-test-label-changed,UID:1509a7e0-e0fc-4963-88c4-44b20ec95968,ResourceVersion:977953,Generation:0,CreationTimestamp:2020-08-19 03:44:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug 19 03:44:33.114: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9582,SelfLink:/api/v1/namespaces/watch-9582/configmaps/e2e-watch-test-label-changed,UID:1509a7e0-e0fc-4963-88c4-44b20ec95968,ResourceVersion:977954,Generation:0,CreationTimestamp:2020-08-19 03:44:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Aug 19 03:44:43.152: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9582,SelfLink:/api/v1/namespaces/watch-9582/configmaps/e2e-watch-test-label-changed,UID:1509a7e0-e0fc-4963-88c4-44b20ec95968,ResourceVersion:977976,Generation:0,CreationTimestamp:2020-08-19 03:44:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 19 03:44:43.153: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9582,SelfLink:/api/v1/namespaces/watch-9582/configmaps/e2e-watch-test-label-changed,UID:1509a7e0-e0fc-4963-88c4-44b20ec95968,ResourceVersion:977977,Generation:0,CreationTimestamp:2020-08-19 03:44:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Aug 19 03:44:43.153: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9582,SelfLink:/api/v1/namespaces/watch-9582/configmaps/e2e-watch-test-label-changed,UID:1509a7e0-e0fc-4963-88c4-44b20ec95968,ResourceVersion:977978,Generation:0,CreationTimestamp:2020-08-19 03:44:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:44:43.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9582" for this suite.
Aug 19 03:44:49.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:44:49.363: INFO: namespace watch-9582 deletion completed in 6.202238867s

• [SLOW TEST:16.436 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
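The Watchers spec above illustrates label-selector watch semantics: changing a label so the object stops matching surfaces as a DELETED event, and restoring it surfaces as ADDED, carrying whatever mutations happened while it was out of view (mutation: 1 before, mutation: 2 after). A hedged client-go sketch using the context-free Watch signature of the v1.15 era (newer client-go adds a ctx argument); the selector string is taken from the log:

package sketch

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchLabeledConfigMaps streams events for configmaps matching the selector.
// Objects leaving the selector appear as DELETED; re-entering appears as ADDED.
func watchLabeledConfigMaps(cs kubernetes.Interface, ns string) error {
	w, err := cs.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
	return nil
}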
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:44:49.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-58f53d24-4ea8-47d6-a0e6-c2d8736feb02
STEP: Creating a pod to test consume configMaps
Aug 19 03:44:49.485: INFO: Waiting up to 5m0s for pod "pod-configmaps-c72903f4-866b-40a6-87b9-d2f4022a5bf7" in namespace "configmap-8653" to be "success or failure"
Aug 19 03:44:49.506: INFO: Pod "pod-configmaps-c72903f4-866b-40a6-87b9-d2f4022a5bf7": Phase="Pending", Reason="", readiness=false. Elapsed: 20.791839ms
Aug 19 03:44:51.836: INFO: Pod "pod-configmaps-c72903f4-866b-40a6-87b9-d2f4022a5bf7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.350220185s
Aug 19 03:44:53.841: INFO: Pod "pod-configmaps-c72903f4-866b-40a6-87b9-d2f4022a5bf7": Phase="Running", Reason="", readiness=true. Elapsed: 4.355857525s
Aug 19 03:44:55.846: INFO: Pod "pod-configmaps-c72903f4-866b-40a6-87b9-d2f4022a5bf7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.360812272s
STEP: Saw pod success
Aug 19 03:44:55.846: INFO: Pod "pod-configmaps-c72903f4-866b-40a6-87b9-d2f4022a5bf7" satisfied condition "success or failure"
Aug 19 03:44:55.849: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-c72903f4-866b-40a6-87b9-d2f4022a5bf7 container configmap-volume-test: 
STEP: delete the pod
Aug 19 03:44:55.919: INFO: Waiting for pod pod-configmaps-c72903f4-866b-40a6-87b9-d2f4022a5bf7 to disappear
Aug 19 03:44:55.926: INFO: Pod pod-configmaps-c72903f4-866b-40a6-87b9-d2f4022a5bf7 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:44:55.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8653" for this suite.
Aug 19 03:45:02.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:45:02.818: INFO: namespace configmap-8653 deletion completed in 6.884855816s

• [SLOW TEST:13.455 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
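"With mappings" in the ConfigMap entry above means the ConfigMap keys are remapped to chosen file paths inside the mount via items, instead of being laid out one file per key. A brief Go sketch; the ConfigMap name, key, and path are illustrative assumptions:

package sketch

import corev1 "k8s.io/api/core/v1"

// mappedConfigMapVolume exposes only the listed key, written to the given
// relative path under the mount point rather than to a file named after the key.
func mappedConfigMapVolume() corev1.Volume {
	return corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
				Items: []corev1.KeyToPath{{
					Key:  "data-2",
					Path: "path/to/data-2",
				}},
			},
		},
	}
}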
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:45:02.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 19 03:45:07.327: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:45:07.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9484" for this suite.
Aug 19 03:45:13.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:45:13.713: INFO: namespace container-runtime-9484 deletion completed in 6.291349431s

• [SLOW TEST:10.891 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
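The assertion `Expected: &{DONE} to match Container's Termination Message: DONE` above works because of TerminationMessagePolicy: with FallbackToLogsOnError, a container that fails without writing its termination-message file gets the tail of its log as the message instead. A Go sketch of the container shape; the image and command are illustrative assumptions:

package sketch

import corev1 "k8s.io/api/core/v1"

// fallbackTerminationContainer exits nonzero without ever writing
// /dev/termination-log, so the kubelet falls back to the log tail ("DONE").
func fallbackTerminationContainer() corev1.Container {
	return corev1.Container{
		Name:                     "termination-message-container",
		Image:                    "busybox:1.29",
		Command:                  []string{"sh", "-c", "echo DONE; exit 1"},
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
}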
SSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:45:13.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-6871
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6871 to expose endpoints map[]
Aug 19 03:45:13.830: INFO: successfully validated that service multi-endpoint-test in namespace services-6871 exposes endpoints map[] (16.316544ms elapsed)
STEP: Creating pod pod1 in namespace services-6871
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6871 to expose endpoints map[pod1:[100]]
Aug 19 03:45:18.812: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.972613822s elapsed, will retry)
Aug 19 03:45:19.825: INFO: successfully validated that service multi-endpoint-test in namespace services-6871 exposes endpoints map[pod1:[100]] (5.985805951s elapsed)
STEP: Creating pod pod2 in namespace services-6871
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6871 to expose endpoints map[pod1:[100] pod2:[101]]
Aug 19 03:45:25.079: INFO: successfully validated that service multi-endpoint-test in namespace services-6871 exposes endpoints map[pod1:[100] pod2:[101]] (5.248482525s elapsed)
STEP: Deleting pod pod1 in namespace services-6871
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6871 to expose endpoints map[pod2:[101]]
Aug 19 03:45:25.141: INFO: successfully validated that service multi-endpoint-test in namespace services-6871 exposes endpoints map[pod2:[101]] (54.197071ms elapsed)
STEP: Deleting pod pod2 in namespace services-6871
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6871 to expose endpoints map[]
Aug 19 03:45:25.225: INFO: successfully validated that service multi-endpoint-test in namespace services-6871 exposes endpoints map[] (78.56381ms elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:45:25.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6871" for this suite.
Aug 19 03:45:34.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:45:34.335: INFO: namespace services-6871 deletion completed in 8.788854055s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:20.620 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
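The endpoint maps in the Services entry above (pod1:[100], pod2:[101]) come from a service publishing two named ports whose targets resolve per pod. A hedged Go sketch of such a multiport service; the selector and port names are illustrative assumptions, while the target ports 100/101 match the endpoints logged above:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// multiportService exposes two service ports mapping to different container
// ports; the endpoints object tracks, per pod, the target ports it serves.
func multiportService() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "multi-endpoint-test"},
			Ports: []corev1.ServicePort{
				{Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100), Protocol: corev1.ProtocolTCP},
				{Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101), Protocol: corev1.ProtocolTCP},
			},
		},
	}
}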
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:45:34.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug 19 03:45:34.971: INFO: Waiting up to 5m0s for pod "downward-api-46555b66-2d3f-4a2a-988a-308b97214545" in namespace "downward-api-4632" to be "success or failure"
Aug 19 03:45:35.256: INFO: Pod "downward-api-46555b66-2d3f-4a2a-988a-308b97214545": Phase="Pending", Reason="", readiness=false. Elapsed: 284.309102ms
Aug 19 03:45:37.383: INFO: Pod "downward-api-46555b66-2d3f-4a2a-988a-308b97214545": Phase="Pending", Reason="", readiness=false. Elapsed: 2.411200417s
Aug 19 03:45:39.502: INFO: Pod "downward-api-46555b66-2d3f-4a2a-988a-308b97214545": Phase="Pending", Reason="", readiness=false. Elapsed: 4.531000668s
Aug 19 03:45:41.849: INFO: Pod "downward-api-46555b66-2d3f-4a2a-988a-308b97214545": Phase="Pending", Reason="", readiness=false. Elapsed: 6.877298144s
Aug 19 03:45:43.856: INFO: Pod "downward-api-46555b66-2d3f-4a2a-988a-308b97214545": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.884258953s
STEP: Saw pod success
Aug 19 03:45:43.856: INFO: Pod "downward-api-46555b66-2d3f-4a2a-988a-308b97214545" satisfied condition "success or failure"
Aug 19 03:45:43.861: INFO: Trying to get logs from node iruya-worker2 pod downward-api-46555b66-2d3f-4a2a-988a-308b97214545 container dapi-container: 
STEP: delete the pod
Aug 19 03:45:44.175: INFO: Waiting for pod downward-api-46555b66-2d3f-4a2a-988a-308b97214545 to disappear
Aug 19 03:45:44.735: INFO: Pod downward-api-46555b66-2d3f-4a2a-988a-308b97214545 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:45:44.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4632" for this suite.
Aug 19 03:45:51.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:45:51.750: INFO: namespace downward-api-4632 deletion completed in 6.753172331s

• [SLOW TEST:17.415 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
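Unlike the volume-based downward API specs elsewhere in this run, the entry above injects pod metadata into the container as environment variables via fieldRef. A short Go sketch; the env var names are illustrative assumptions:

package sketch

import corev1 "k8s.io/api/core/v1"

// downwardEnv exposes the pod's name, namespace, and IP to the container as
// environment variables, resolved by the kubelet at container start.
func downwardEnv() []corev1.EnvVar {
	fieldEnv := func(name, path string) corev1.EnvVar {
		return corev1.EnvVar{
			Name:      name,
			ValueFrom: &corev1.EnvVarSource{FieldRef: &corev1.ObjectFieldSelector{FieldPath: path}},
		}
	}
	return []corev1.EnvVar{
		fieldEnv("POD_NAME", "metadata.name"),
		fieldEnv("POD_NAMESPACE", "metadata.namespace"),
		fieldEnv("POD_IP", "status.podIP"),
	}
}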
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:45:51.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 19 03:45:52.999: INFO: Waiting up to 5m0s for pod "downwardapi-volume-25fcc37b-02f0-4b6f-be54-dc5f5753de0e" in namespace "projected-4761" to be "success or failure"
Aug 19 03:45:53.026: INFO: Pod "downwardapi-volume-25fcc37b-02f0-4b6f-be54-dc5f5753de0e": Phase="Pending", Reason="", readiness=false. Elapsed: 26.513812ms
Aug 19 03:45:55.239: INFO: Pod "downwardapi-volume-25fcc37b-02f0-4b6f-be54-dc5f5753de0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.239861026s
Aug 19 03:45:57.245: INFO: Pod "downwardapi-volume-25fcc37b-02f0-4b6f-be54-dc5f5753de0e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.245343997s
Aug 19 03:45:59.251: INFO: Pod "downwardapi-volume-25fcc37b-02f0-4b6f-be54-dc5f5753de0e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.251891872s
Aug 19 03:46:01.258: INFO: Pod "downwardapi-volume-25fcc37b-02f0-4b6f-be54-dc5f5753de0e": Phase="Running", Reason="", readiness=true. Elapsed: 8.258896101s
Aug 19 03:46:03.281: INFO: Pod "downwardapi-volume-25fcc37b-02f0-4b6f-be54-dc5f5753de0e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.281097475s
STEP: Saw pod success
Aug 19 03:46:03.281: INFO: Pod "downwardapi-volume-25fcc37b-02f0-4b6f-be54-dc5f5753de0e" satisfied condition "success or failure"
Aug 19 03:46:03.290: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-25fcc37b-02f0-4b6f-be54-dc5f5753de0e container client-container: 
STEP: delete the pod
Aug 19 03:46:03.448: INFO: Waiting for pod downwardapi-volume-25fcc37b-02f0-4b6f-be54-dc5f5753de0e to disappear
Aug 19 03:46:03.753: INFO: Pod downwardapi-volume-25fcc37b-02f0-4b6f-be54-dc5f5753de0e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:46:03.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4761" for this suite.
Aug 19 03:46:10.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:46:10.791: INFO: namespace projected-4761 deletion completed in 6.940214261s

• [SLOW TEST:19.039 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
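Here the memory limit reaches the container through a projected downwardAPI volume rather than env vars: a resourceFieldRef item renders the limit into a file, and the test reads that file back via the container logs. A minimal sketch with illustrative names; the cpu-limit test that follows is the same shape with resource: limits.cpu:

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-demo    # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox:1.29            # assumed image
      command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
      resources:
        limits:
          memory: "64Mi"
          cpu: "250m"
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: mem_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.memory
                divisor: 1Mi         # report the limit in MiB
  EOF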
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:46:10.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 19 03:46:11.438: INFO: Waiting up to 5m0s for pod "downwardapi-volume-732559c1-7e04-475f-b9ac-64842c06eb02" in namespace "projected-4326" to be "success or failure"
Aug 19 03:46:11.533: INFO: Pod "downwardapi-volume-732559c1-7e04-475f-b9ac-64842c06eb02": Phase="Pending", Reason="", readiness=false. Elapsed: 94.067457ms
Aug 19 03:46:13.540: INFO: Pod "downwardapi-volume-732559c1-7e04-475f-b9ac-64842c06eb02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101593821s
Aug 19 03:46:15.545: INFO: Pod "downwardapi-volume-732559c1-7e04-475f-b9ac-64842c06eb02": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106622638s
Aug 19 03:46:17.551: INFO: Pod "downwardapi-volume-732559c1-7e04-475f-b9ac-64842c06eb02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.112679158s
STEP: Saw pod success
Aug 19 03:46:17.552: INFO: Pod "downwardapi-volume-732559c1-7e04-475f-b9ac-64842c06eb02" satisfied condition "success or failure"
Aug 19 03:46:17.556: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-732559c1-7e04-475f-b9ac-64842c06eb02 container client-container: 
STEP: delete the pod
Aug 19 03:46:17.731: INFO: Waiting for pod downwardapi-volume-732559c1-7e04-475f-b9ac-64842c06eb02 to disappear
Aug 19 03:46:17.772: INFO: Pod downwardapi-volume-732559c1-7e04-475f-b9ac-64842c06eb02 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:46:17.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4326" for this suite.
Aug 19 03:46:23.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:46:24.039: INFO: namespace projected-4326 deletion completed in 6.257540478s

• [SLOW TEST:13.243 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:46:24.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 19 03:46:24.319: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7a9e7e3c-f1f7-48da-a824-3b4f9e422b0a" in namespace "downward-api-1169" to be "success or failure"
Aug 19 03:46:24.343: INFO: Pod "downwardapi-volume-7a9e7e3c-f1f7-48da-a824-3b4f9e422b0a": Phase="Pending", Reason="", readiness=false. Elapsed: 24.199849ms
Aug 19 03:46:26.350: INFO: Pod "downwardapi-volume-7a9e7e3c-f1f7-48da-a824-3b4f9e422b0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031189063s
Aug 19 03:46:28.356: INFO: Pod "downwardapi-volume-7a9e7e3c-f1f7-48da-a824-3b4f9e422b0a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037191913s
Aug 19 03:46:30.364: INFO: Pod "downwardapi-volume-7a9e7e3c-f1f7-48da-a824-3b4f9e422b0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.044955727s
STEP: Saw pod success
Aug 19 03:46:30.364: INFO: Pod "downwardapi-volume-7a9e7e3c-f1f7-48da-a824-3b4f9e422b0a" satisfied condition "success or failure"
Aug 19 03:46:30.370: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-7a9e7e3c-f1f7-48da-a824-3b4f9e422b0a container client-container: 
STEP: delete the pod
Aug 19 03:46:30.545: INFO: Waiting for pod downwardapi-volume-7a9e7e3c-f1f7-48da-a824-3b4f9e422b0a to disappear
Aug 19 03:46:30.590: INFO: Pod downwardapi-volume-7a9e7e3c-f1f7-48da-a824-3b4f9e422b0a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:46:30.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1169" for this suite.
Aug 19 03:46:36.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:46:36.822: INFO: namespace downward-api-1169 deletion completed in 6.220519434s

• [SLOW TEST:12.780 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
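The request-side test uses the same resourceFieldRef mechanism, and it works with a plain downwardAPI volume as well as a projected one. A sketch of the non-projected form (names and image are again illustrative):

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-request-demo   # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox:1.29            # assumed image
      command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
      resources:
        requests:
          memory: "32Mi"
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:                   # plain downwardAPI volume, no projection layer
        items:
        - path: mem_request
          resourceFieldRef:
            containerName: client-container
            resource: requests.memory
            divisor: 1Mi
  EOF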
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:46:36.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 19 03:46:36.967: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:46:36.990: INFO: Number of nodes with available pods: 0
Aug 19 03:46:36.990: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:46:38.003: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:46:38.008: INFO: Number of nodes with available pods: 0
Aug 19 03:46:38.008: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:46:39.178: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:46:39.330: INFO: Number of nodes with available pods: 0
Aug 19 03:46:39.330: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:46:40.001: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:46:40.007: INFO: Number of nodes with available pods: 0
Aug 19 03:46:40.007: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:46:41.002: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:46:41.010: INFO: Number of nodes with available pods: 0
Aug 19 03:46:41.010: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:46:42.003: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:46:42.008: INFO: Number of nodes with available pods: 2
Aug 19 03:46:42.008: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Aug 19 03:46:42.067: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:46:42.074: INFO: Number of nodes with available pods: 1
Aug 19 03:46:42.074: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:46:43.086: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:46:43.157: INFO: Number of nodes with available pods: 1
Aug 19 03:46:43.157: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:46:44.087: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:46:44.094: INFO: Number of nodes with available pods: 1
Aug 19 03:46:44.094: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:46:45.087: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:46:45.094: INFO: Number of nodes with available pods: 1
Aug 19 03:46:45.094: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:46:46.087: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:46:46.095: INFO: Number of nodes with available pods: 1
Aug 19 03:46:46.096: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:46:47.086: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:46:47.093: INFO: Number of nodes with available pods: 1
Aug 19 03:46:47.093: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:46:48.088: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:46:48.094: INFO: Number of nodes with available pods: 1
Aug 19 03:46:48.094: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:46:49.088: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:46:49.094: INFO: Number of nodes with available pods: 1
Aug 19 03:46:49.095: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:46:50.086: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:46:50.091: INFO: Number of nodes with available pods: 1
Aug 19 03:46:50.091: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:46:51.086: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:46:51.093: INFO: Number of nodes with available pods: 1
Aug 19 03:46:51.093: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:46:52.084: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:46:52.089: INFO: Number of nodes with available pods: 1
Aug 19 03:46:52.089: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:46:53.085: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:46:53.091: INFO: Number of nodes with available pods: 1
Aug 19 03:46:53.091: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:46:54.086: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:46:54.092: INFO: Number of nodes with available pods: 1
Aug 19 03:46:54.092: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:46:55.087: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:46:55.093: INFO: Number of nodes with available pods: 1
Aug 19 03:46:55.093: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:46:56.086: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:46:56.095: INFO: Number of nodes with available pods: 1
Aug 19 03:46:56.095: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:46:57.085: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:46:57.091: INFO: Number of nodes with available pods: 1
Aug 19 03:46:57.091: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:46:58.088: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:46:58.093: INFO: Number of nodes with available pods: 2
Aug 19 03:46:58.093: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3162, will wait for the garbage collector to delete the pods
Aug 19 03:46:58.161: INFO: Deleting DaemonSet.extensions daemon-set took: 8.916222ms
Aug 19 03:46:58.462: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.940057ms
Aug 19 03:47:13.670: INFO: Number of nodes with available pods: 0
Aug 19 03:47:13.670: INFO: Number of running nodes: 0, number of available pods: 0
Aug 19 03:47:13.675: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3162/daemonsets","resourceVersion":"978521"},"items":null}

Aug 19 03:47:13.678: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3162/pods","resourceVersion":"978521"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:47:13.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3162" for this suite.
Aug 19 03:47:21.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:47:21.892: INFO: namespace daemonsets-3162 deletion completed in 8.188350993s

• [SLOW TEST:45.067 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
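The DaemonSet test creates a simple daemon, waits for one pod per schedulable node (the tainted control-plane node is skipped, as the poll output shows), then deletes one daemon pod and checks the controller revives it. The same exercise by hand — daemonset name, label, and image are illustrative, not the suite's manifest:

  kubectl apply -f - <<EOF
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: daemon-set                 # illustrative name
  spec:
    selector:
      matchLabels:
        app: daemon-set
    template:
      metadata:
        labels:
          app: daemon-set
      spec:
        containers:
        - name: app
          image: nginx:1.14-alpine   # image borrowed from the rollback test later in this log
  EOF
  # One pod per schedulable node; tainted nodes are skipped unless tolerated.
  kubectl rollout status daemonset/daemon-set
  # Delete the daemon pod on one node (node name taken from this log) and watch it revive:
  kubectl delete pod -l app=daemon-set --field-selector spec.nodeName=iruya-worker2
  kubectl get pods -l app=daemon-set -w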
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:47:21.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:47:30.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-358" for this suite.
Aug 19 03:48:14.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:48:14.669: INFO: namespace kubelet-test-358 deletion completed in 44.170383796s

• [SLOW TEST:52.770 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
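This kubelet test simply runs a busybox command in a pod and asserts its output is retrievable through the logs endpoint. An equivalent by hand, with illustrative names and an assumed image tag:

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: busybox-logs-demo
  spec:
    restartPolicy: Never
    containers:
    - name: busybox
      image: busybox:1.29            # assumed tag
      command: ["sh", "-c", "echo 'Hello from the busybox pod'"]
  EOF
  # Should print the echoed line once the container has run:
  kubectl logs busybox-logs-demo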
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:48:14.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-7c1750c9-3198-4cc3-8b53-3ec9536810fa
STEP: Creating a pod to test consume configMaps
Aug 19 03:48:14.781: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a57dce21-60be-4c5c-ba1d-5da33bea5325" in namespace "projected-1021" to be "success or failure"
Aug 19 03:48:14.798: INFO: Pod "pod-projected-configmaps-a57dce21-60be-4c5c-ba1d-5da33bea5325": Phase="Pending", Reason="", readiness=false. Elapsed: 16.957683ms
Aug 19 03:48:16.805: INFO: Pod "pod-projected-configmaps-a57dce21-60be-4c5c-ba1d-5da33bea5325": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023675066s
Aug 19 03:48:18.812: INFO: Pod "pod-projected-configmaps-a57dce21-60be-4c5c-ba1d-5da33bea5325": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031402639s
Aug 19 03:48:20.820: INFO: Pod "pod-projected-configmaps-a57dce21-60be-4c5c-ba1d-5da33bea5325": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.038525202s
STEP: Saw pod success
Aug 19 03:48:20.820: INFO: Pod "pod-projected-configmaps-a57dce21-60be-4c5c-ba1d-5da33bea5325" satisfied condition "success or failure"
Aug 19 03:48:20.825: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-a57dce21-60be-4c5c-ba1d-5da33bea5325 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 19 03:48:20.876: INFO: Waiting for pod pod-projected-configmaps-a57dce21-60be-4c5c-ba1d-5da33bea5325 to disappear
Aug 19 03:48:20.880: INFO: Pod pod-projected-configmaps-a57dce21-60be-4c5c-ba1d-5da33bea5325 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:48:20.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1021" for this suite.
Aug 19 03:48:26.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:48:27.094: INFO: namespace projected-1021 deletion completed in 6.206644526s

• [SLOW TEST:12.424 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
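The projected-configMap test maps a key to a different file name inside the volume and runs the container as a non-root user (hence the [LinuxOnly] tag). A sketch with illustrative names; the plain-ConfigMap variant of this test later in the log is the same shape with a configMap: volume in place of projected: sources:

  kubectl create configmap projected-cm-demo --from-literal=data-1=value-1
  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-configmap-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000                # non-root, as the test title requires
    containers:
    - name: projected-configmap-volume-test
      image: busybox:1.29            # assumed image
      command: ["sh", "-c", "cat /etc/cm/path/to/data-1"]
      volumeMounts:
      - name: cm
        mountPath: /etc/cm
    volumes:
    - name: cm
      projected:
        sources:
        - configMap:
            name: projected-cm-demo
            items:
            - key: data-1
              path: path/to/data-1   # key mapped to a nested file name
  EOF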
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:48:27.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1793
[It] Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-1793
STEP: Creating statefulset with conflicting port in namespace statefulset-1793
STEP: Waiting until pod test-pod starts running in namespace statefulset-1793
STEP: Waiting until stateful pod ss-0 has been recreated and deleted at least once in namespace statefulset-1793
Aug 19 03:48:33.687: INFO: Observed stateful pod in namespace: statefulset-1793, name: ss-0, uid: 47868419-c68b-4bad-828f-f835adc93eec, status phase: Pending. Waiting for the statefulset controller to delete it.
Aug 19 03:48:33.770: INFO: Observed stateful pod in namespace: statefulset-1793, name: ss-0, uid: 47868419-c68b-4bad-828f-f835adc93eec, status phase: Failed. Waiting for the statefulset controller to delete it.
Aug 19 03:48:33.779: INFO: Observed stateful pod in namespace: statefulset-1793, name: ss-0, uid: 47868419-c68b-4bad-828f-f835adc93eec, status phase: Failed. Waiting for the statefulset controller to delete it.
Aug 19 03:48:33.828: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1793
STEP: Removing pod with conflicting port in namespace statefulset-1793
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-1793 and enters the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug 19 03:48:39.938: INFO: Deleting all statefulset in ns statefulset-1793
Aug 19 03:48:39.944: INFO: Scaling statefulset ss to 0
Aug 19 03:48:49.982: INFO: Waiting for statefulset status.replicas updated to 0
Aug 19 03:48:49.987: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:48:50.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1793" for this suite.
Aug 19 03:48:56.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:48:56.502: INFO: namespace statefulset-1793 deletion completed in 6.458079451s

• [SLOW TEST:29.405 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
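What the StatefulSet test provokes: a plain pod grabs a host port on a node, and the StatefulSet's template asks for the same host port on the same node, so ss-0 is rejected by the kubelet (Pending -> Failed, as logged above) and the controller keeps deleting and recreating it; once the conflicting pod is removed, ss-0 comes up. A sketch of the conflict — the port number, image, and pinned node are assumptions, not the suite's values:

  # Pod that occupies the host port first (nodeName pins it, bypassing the scheduler):
  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: test-pod
  spec:
    nodeName: iruya-worker           # pin to the same node as ss-0
    containers:
    - name: nginx
      image: nginx:1.14-alpine       # assumed image
      ports:
      - containerPort: 80
        hostPort: 21017              # assumed port; any fixed host port works
  EOF
  # StatefulSet whose template requests the same hostPort -> ss-0 fails repeatedly.
  # The log shows the suite also creates a headless service named "test" for it.
  kubectl apply -f - <<EOF
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: ss
  spec:
    serviceName: test
    replicas: 1
    selector:
      matchLabels: {app: ss}
    template:
      metadata:
        labels: {app: ss}
      spec:
        nodeName: iruya-worker
        containers:
        - name: nginx
          image: nginx:1.14-alpine
          ports:
          - containerPort: 80
            hostPort: 21017
  EOF
  # Free the port; the controller recreates ss-0 and it reaches Running:
  kubectl delete pod test-pod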
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:48:56.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-52ae4eaa-7d65-4e69-968c-023bf2a1a224
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-52ae4eaa-7d65-4e69-968c-023bf2a1a224
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:50:26.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5574" for this suite.
Aug 19 03:50:53.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:50:53.446: INFO: namespace configmap-5574 deletion completed in 26.712136447s

• [SLOW TEST:116.942 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
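ConfigMap volumes are refreshed in place: the kubelet re-syncs the mounted files on its periodic sync loop, so an edit to the ConfigMap eventually appears inside the running container without a restart — which is why this test mostly waits, and why it accounts for most of its 116 seconds. A hand-run sketch with illustrative names:

  kubectl create configmap cm-upd-demo --from-literal=data-1=value-1
  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: cm-upd-pod
  spec:
    containers:
    - name: watcher
      image: busybox:1.29            # assumed image
      command: ["sh", "-c", "while true; do cat /etc/cm/data-1; echo; sleep 5; done"]
      volumeMounts:
      - name: cm
        mountPath: /etc/cm
    volumes:
    - name: cm
      configMap:
        name: cm-upd-demo
  EOF
  # Update the ConfigMap; within the kubelet sync period the mounted file changes:
  kubectl patch configmap cm-upd-demo -p '{"data":{"data-1":"value-2"}}'
  kubectl logs -f cm-upd-pod         # eventually starts printing value-2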
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:50:53.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-3403d6b6-d035-4ac9-9a1a-14227ecb35e0
STEP: Creating a pod to test consume secrets
Aug 19 03:50:54.640: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-51606008-1de5-4a37-96fb-3a384cea8531" in namespace "projected-3643" to be "success or failure"
Aug 19 03:50:54.734: INFO: Pod "pod-projected-secrets-51606008-1de5-4a37-96fb-3a384cea8531": Phase="Pending", Reason="", readiness=false. Elapsed: 92.982919ms
Aug 19 03:50:56.986: INFO: Pod "pod-projected-secrets-51606008-1de5-4a37-96fb-3a384cea8531": Phase="Pending", Reason="", readiness=false. Elapsed: 2.344597733s
Aug 19 03:50:58.990: INFO: Pod "pod-projected-secrets-51606008-1de5-4a37-96fb-3a384cea8531": Phase="Pending", Reason="", readiness=false. Elapsed: 4.3495698s
Aug 19 03:51:01.147: INFO: Pod "pod-projected-secrets-51606008-1de5-4a37-96fb-3a384cea8531": Phase="Pending", Reason="", readiness=false. Elapsed: 6.505628005s
Aug 19 03:51:03.154: INFO: Pod "pod-projected-secrets-51606008-1de5-4a37-96fb-3a384cea8531": Phase="Pending", Reason="", readiness=false. Elapsed: 8.513038482s
Aug 19 03:51:05.398: INFO: Pod "pod-projected-secrets-51606008-1de5-4a37-96fb-3a384cea8531": Phase="Running", Reason="", readiness=true. Elapsed: 10.757334356s
Aug 19 03:51:07.404: INFO: Pod "pod-projected-secrets-51606008-1de5-4a37-96fb-3a384cea8531": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.762855108s
STEP: Saw pod success
Aug 19 03:51:07.404: INFO: Pod "pod-projected-secrets-51606008-1de5-4a37-96fb-3a384cea8531" satisfied condition "success or failure"
Aug 19 03:51:07.494: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-51606008-1de5-4a37-96fb-3a384cea8531 container secret-volume-test: 
STEP: delete the pod
Aug 19 03:51:07.854: INFO: Waiting for pod pod-projected-secrets-51606008-1de5-4a37-96fb-3a384cea8531 to disappear
Aug 19 03:51:07.991: INFO: Pod pod-projected-secrets-51606008-1de5-4a37-96fb-3a384cea8531 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:51:07.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3643" for this suite.
Aug 19 03:51:14.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:51:14.554: INFO: namespace projected-3643 deletion completed in 6.483643274s

• [SLOW TEST:21.106 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
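Here a single secret is consumed through two separate projected volumes in the same pod, which mainly checks that multiple mounts of the same source coexist. A sketch with illustrative names:

  kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-secrets-demo
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox:1.29            # assumed image
      command: ["sh", "-c", "cat /etc/secret-1/data-1 /etc/secret-2/data-1"]
      volumeMounts:
      - name: secret-1
        mountPath: /etc/secret-1
      - name: secret-2
        mountPath: /etc/secret-2
    volumes:
    - name: secret-1
      projected:
        sources:
        - secret:
            name: projected-secret-demo
    - name: secret-2
      projected:
        sources:
        - secret:
            name: projected-secret-demo
  EOF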
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:51:14.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 03:51:14.823: INFO: Create a RollingUpdate DaemonSet
Aug 19 03:51:14.828: INFO: Check that daemon pods launch on every node of the cluster
Aug 19 03:51:14.836: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:51:14.845: INFO: Number of nodes with available pods: 0
Aug 19 03:51:14.845: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:51:15.859: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:51:15.865: INFO: Number of nodes with available pods: 0
Aug 19 03:51:15.865: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:51:19.356: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:51:20.778: INFO: Number of nodes with available pods: 0
Aug 19 03:51:20.778: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:51:20.865: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:51:21.112: INFO: Number of nodes with available pods: 0
Aug 19 03:51:21.112: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:51:21.858: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:51:21.864: INFO: Number of nodes with available pods: 0
Aug 19 03:51:21.864: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:51:22.966: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:51:22.972: INFO: Number of nodes with available pods: 0
Aug 19 03:51:22.973: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:51:23.979: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:51:23.987: INFO: Number of nodes with available pods: 0
Aug 19 03:51:23.987: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:51:24.861: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:51:25.166: INFO: Number of nodes with available pods: 0
Aug 19 03:51:25.166: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:51:25.980: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:51:25.986: INFO: Number of nodes with available pods: 0
Aug 19 03:51:25.986: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:51:26.856: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:51:26.861: INFO: Number of nodes with available pods: 1
Aug 19 03:51:26.861: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 19 03:51:27.855: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:51:27.885: INFO: Number of nodes with available pods: 2
Aug 19 03:51:27.885: INFO: Number of running nodes: 2, number of available pods: 2
Aug 19 03:51:27.885: INFO: Update the DaemonSet to trigger a rollout
Aug 19 03:51:27.898: INFO: Updating DaemonSet daemon-set
Aug 19 03:51:46.012: INFO: Roll back the DaemonSet before rollout is complete
Aug 19 03:51:46.046: INFO: Updating DaemonSet daemon-set
Aug 19 03:51:46.046: INFO: Make sure DaemonSet rollback is complete
Aug 19 03:51:46.078: INFO: Wrong image for pod: daemon-set-h6hdj. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Aug 19 03:51:46.078: INFO: Pod daemon-set-h6hdj is not available
Aug 19 03:51:46.379: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:51:47.388: INFO: Wrong image for pod: daemon-set-h6hdj. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Aug 19 03:51:47.389: INFO: Pod daemon-set-h6hdj is not available
Aug 19 03:51:47.397: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:51:48.442: INFO: Pod daemon-set-jmn2c is not available
Aug 19 03:51:48.521: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8302, will wait for the garbage collector to delete the pods
Aug 19 03:51:48.663: INFO: Deleting DaemonSet.extensions daemon-set took: 10.373824ms
Aug 19 03:51:48.964: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.982365ms
Aug 19 03:51:54.071: INFO: Number of nodes with available pods: 0
Aug 19 03:51:54.071: INFO: Number of running nodes: 0, number of available pods: 0
Aug 19 03:51:54.482: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8302/daemonsets","resourceVersion":"979403"},"items":null}

Aug 19 03:51:54.486: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8302/pods","resourceVersion":"979403"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:51:54.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8302" for this suite.
Aug 19 03:52:04.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:52:04.829: INFO: namespace daemonsets-8302 deletion completed in 10.316162586s

• [SLOW TEST:50.275 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
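The rollback test updates the DaemonSet to an unpullable image (foo:non-existent in the log), rolls back before the rollout completes, and verifies that only the already-broken pod gets replaced — healthy pods on the old image are not restarted. The same steps by hand, assuming the illustrative daemonset sketched after the run-and-stop test above (the container name "app" is from that sketch, not the suite's manifest):

  # Trigger a RollingUpdate to an image that cannot be pulled:
  kubectl set image daemonset/daemon-set app=foo:non-existent
  kubectl rollout status daemonset/daemon-set --timeout=30s || true   # will not complete
  # Roll back before the rollout finishes; pods still on the old image stay untouched:
  kubectl rollout undo daemonset/daemon-set
  kubectl rollout status daemonset/daemon-set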
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:52:04.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Aug 19 03:52:11.496: INFO: Successfully updated pod "labelsupdate9213acd0-9b46-4d8f-8741-0af78bda354a"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:52:13.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5356" for this suite.
Aug 19 03:52:37.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:52:37.822: INFO: namespace projected-5356 deletion completed in 24.273241146s

• [SLOW TEST:32.992 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
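Labels reach the container the same way ConfigMap updates do: the downwardAPI volume file is rewritten after the pod's metadata changes, so relabeling a running pod eventually shows up in the mounted file. A sketch with illustrative names (note that metadata.labels is only available through volume items, not env vars):

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: labelsupdate-demo
    labels:
      key: value-1
  spec:
    containers:
    - name: client-container
      image: busybox:1.29            # assumed image
      command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: labels
              fieldRef:
                fieldPath: metadata.labels
  EOF
  # Change the label on the live pod; the mounted file updates shortly after:
  kubectl label pod labelsupdate-demo key=value-2 --overwrite
  kubectl logs -f labelsupdate-demo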
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:52:37.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-96c76af4-3351-4505-b06c-e45428a5fb4a
STEP: Creating a pod to test consume configMaps
Aug 19 03:52:38.989: INFO: Waiting up to 5m0s for pod "pod-configmaps-30f32eb9-3042-47c1-8501-cfa062e4ee20" in namespace "configmap-6514" to be "success or failure"
Aug 19 03:52:39.003: INFO: Pod "pod-configmaps-30f32eb9-3042-47c1-8501-cfa062e4ee20": Phase="Pending", Reason="", readiness=false. Elapsed: 13.669575ms
Aug 19 03:52:41.185: INFO: Pod "pod-configmaps-30f32eb9-3042-47c1-8501-cfa062e4ee20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195674443s
Aug 19 03:52:43.192: INFO: Pod "pod-configmaps-30f32eb9-3042-47c1-8501-cfa062e4ee20": Phase="Pending", Reason="", readiness=false. Elapsed: 4.203046293s
Aug 19 03:52:45.199: INFO: Pod "pod-configmaps-30f32eb9-3042-47c1-8501-cfa062e4ee20": Phase="Pending", Reason="", readiness=false. Elapsed: 6.209807783s
Aug 19 03:52:47.388: INFO: Pod "pod-configmaps-30f32eb9-3042-47c1-8501-cfa062e4ee20": Phase="Pending", Reason="", readiness=false. Elapsed: 8.398691015s
Aug 19 03:52:49.814: INFO: Pod "pod-configmaps-30f32eb9-3042-47c1-8501-cfa062e4ee20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.824786016s
STEP: Saw pod success
Aug 19 03:52:49.814: INFO: Pod "pod-configmaps-30f32eb9-3042-47c1-8501-cfa062e4ee20" satisfied condition "success or failure"
Aug 19 03:52:49.820: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-30f32eb9-3042-47c1-8501-cfa062e4ee20 container configmap-volume-test: 
STEP: delete the pod
Aug 19 03:52:50.294: INFO: Waiting for pod pod-configmaps-30f32eb9-3042-47c1-8501-cfa062e4ee20 to disappear
Aug 19 03:52:51.633: INFO: Pod pod-configmaps-30f32eb9-3042-47c1-8501-cfa062e4ee20 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:52:51.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6514" for this suite.
Aug 19 03:52:59.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:53:00.083: INFO: namespace configmap-6514 deletion completed in 8.377483275s

• [SLOW TEST:22.256 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:53:00.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Aug 19 03:53:00.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6138'
Aug 19 03:53:01.862: INFO: stderr: ""
Aug 19 03:53:01.862: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 19 03:53:01.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6138'
Aug 19 03:53:03.038: INFO: stderr: ""
Aug 19 03:53:03.038: INFO: stdout: "update-demo-nautilus-b7nb5 update-demo-nautilus-ptdgc "
Aug 19 03:53:03.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b7nb5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6138'
Aug 19 03:53:04.552: INFO: stderr: ""
Aug 19 03:53:04.552: INFO: stdout: ""
Aug 19 03:53:04.552: INFO: update-demo-nautilus-b7nb5 is created but not running
Aug 19 03:53:09.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6138'
Aug 19 03:53:10.718: INFO: stderr: ""
Aug 19 03:53:10.718: INFO: stdout: "update-demo-nautilus-b7nb5 update-demo-nautilus-ptdgc "
Aug 19 03:53:10.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b7nb5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6138'
Aug 19 03:53:14.637: INFO: stderr: ""
Aug 19 03:53:14.637: INFO: stdout: "true"
Aug 19 03:53:14.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b7nb5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6138'
Aug 19 03:53:15.811: INFO: stderr: ""
Aug 19 03:53:15.811: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 19 03:53:15.811: INFO: validating pod update-demo-nautilus-b7nb5
Aug 19 03:53:15.817: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 19 03:53:15.817: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 19 03:53:15.817: INFO: update-demo-nautilus-b7nb5 is verified up and running
Aug 19 03:53:15.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ptdgc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6138'
Aug 19 03:53:16.944: INFO: stderr: ""
Aug 19 03:53:16.944: INFO: stdout: "true"
Aug 19 03:53:16.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ptdgc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6138'
Aug 19 03:53:18.090: INFO: stderr: ""
Aug 19 03:53:18.090: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 19 03:53:18.090: INFO: validating pod update-demo-nautilus-ptdgc
Aug 19 03:53:18.095: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 19 03:53:18.095: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 19 03:53:18.095: INFO: update-demo-nautilus-ptdgc is verified up and running
STEP: rolling-update to new replication controller
Aug 19 03:53:18.102: INFO: scanned /root for discovery docs: 
Aug 19 03:53:18.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-6138'
Aug 19 03:53:44.971: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 19 03:53:44.971: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 19 03:53:44.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6138'
Aug 19 03:53:46.124: INFO: stderr: ""
Aug 19 03:53:46.124: INFO: stdout: "update-demo-kitten-7cd2w update-demo-kitten-d22cv "
Aug 19 03:53:46.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7cd2w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6138'
Aug 19 03:53:47.271: INFO: stderr: ""
Aug 19 03:53:47.272: INFO: stdout: "true"
Aug 19 03:53:47.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7cd2w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6138'
Aug 19 03:53:48.420: INFO: stderr: ""
Aug 19 03:53:48.420: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 19 03:53:48.420: INFO: validating pod update-demo-kitten-7cd2w
Aug 19 03:53:48.425: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 19 03:53:48.425: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Aug 19 03:53:48.425: INFO: update-demo-kitten-7cd2w is verified up and running
Aug 19 03:53:48.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-d22cv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6138'
Aug 19 03:53:49.659: INFO: stderr: ""
Aug 19 03:53:49.659: INFO: stdout: "true"
Aug 19 03:53:49.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-d22cv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6138'
Aug 19 03:53:50.790: INFO: stderr: ""
Aug 19 03:53:50.790: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 19 03:53:50.791: INFO: validating pod update-demo-kitten-d22cv
Aug 19 03:53:50.806: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 19 03:53:50.806: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Aug 19 03:53:50.806: INFO: update-demo-kitten-d22cv is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:53:50.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6138" for this suite.
Aug 19 03:54:14.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:54:14.958: INFO: namespace kubectl-6138 deletion completed in 24.143244343s

• [SLOW TEST:74.873 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
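Note: the rolling update exercised above can be sketched by hand with kubectl. This is a minimal sketch, assuming a kitten-rc.yaml manifest and reusing the kubectl-6138 namespace from the log; as the deprecation warning says, rolling-update was later removed, and the Deployment-based commands shown last are the modern equivalent, not part of this suite.

    # List the RC's pods, as the test does:
    kubectl get pods -l name=update-demo --namespace=kubectl-6138 \
      -o template --template='{{range .items}}{{.metadata.name}} {{end}}'

    # Roll the nautilus RC over to the kitten RC spec (deprecated command):
    kubectl rolling-update update-demo-nautilus --update-period=1s \
      -f kitten-rc.yaml --namespace=kubectl-6138

    # On clusters where rolling-update no longer exists, the same flow with a
    # Deployment (illustrative names):
    kubectl set image deployment/update-demo \
      update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0
    kubectl rollout status deployment/update-demo
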
SSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:54:14.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Aug 19 03:54:15.042: INFO: Pod name pod-release: Found 0 pods out of 1
Aug 19 03:54:20.049: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:54:20.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-441" for this suite.
Aug 19 03:54:26.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:54:26.472: INFO: namespace replication-controller-441 deletion completed in 6.344170719s

• [SLOW TEST:11.513 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
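Note: the release step above hinges on label selectors; once a pod's labels stop matching the RC's selector, the controller orphans it (dropping its ownerReference) and creates a replacement. A minimal hand-run sketch, assuming the RC is named pod-release and selects name=pod-release, in the namespace from the log:

    # Find one of the RC's pods and change its matched label:
    POD=$(kubectl get pods -l name=pod-release -n replication-controller-441 \
      -o jsonpath='{.items[0].metadata.name}')
    kubectl label pod "$POD" name=released --overwrite -n replication-controller-441

    # The orphaned pod loses its controller ownerReference, and the RC
    # spins up a replacement to restore its replica count:
    kubectl get pod "$POD" -n replication-controller-441 \
      -o jsonpath='{.metadata.ownerReferences}'
    kubectl get rc pod-release -n replication-controller-441
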
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:54:26.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 19 03:54:26.558: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5d6474b7-509f-4b7e-9954-f605bd3c936a" in namespace "downward-api-8686" to be "success or failure"
Aug 19 03:54:26.610: INFO: Pod "downwardapi-volume-5d6474b7-509f-4b7e-9954-f605bd3c936a": Phase="Pending", Reason="", readiness=false. Elapsed: 52.297877ms
Aug 19 03:54:28.618: INFO: Pod "downwardapi-volume-5d6474b7-509f-4b7e-9954-f605bd3c936a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060097866s
Aug 19 03:54:30.625: INFO: Pod "downwardapi-volume-5d6474b7-509f-4b7e-9954-f605bd3c936a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067102949s
STEP: Saw pod success
Aug 19 03:54:30.625: INFO: Pod "downwardapi-volume-5d6474b7-509f-4b7e-9954-f605bd3c936a" satisfied condition "success or failure"
Aug 19 03:54:30.630: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-5d6474b7-509f-4b7e-9954-f605bd3c936a container client-container: 
STEP: delete the pod
Aug 19 03:54:30.793: INFO: Waiting for pod downwardapi-volume-5d6474b7-509f-4b7e-9954-f605bd3c936a to disappear
Aug 19 03:54:30.838: INFO: Pod downwardapi-volume-5d6474b7-509f-4b7e-9954-f605bd3c936a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:54:30.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8686" for this suite.
Aug 19 03:54:37.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:54:37.369: INFO: namespace downward-api-8686 deletion completed in 6.520739273s

• [SLOW TEST:10.896 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
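Note: what this test checks is that a downwardAPI volume item referencing limits.memory falls back to the node's allocatable memory when the container declares no memory limit. A minimal sketch of such a pod, with hypothetical names:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-demo   # hypothetical name
    spec:
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
        # No resources.limits.memory set, so the projected value defaults
        # to the node's allocatable memory.
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
      restartPolicy: Never
    EOF
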
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:54:37.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-92711abe-781f-4912-827c-4ecff32e0411
STEP: Creating secret with name secret-projected-all-test-volume-7c75c31c-c58b-4c11-b037-4188b5cd542b
STEP: Creating a pod to test Check all projections for projected volume plugin
Aug 19 03:54:37.913: INFO: Waiting up to 5m0s for pod "projected-volume-dc3ace67-03ab-420e-8d43-f87a480d8f42" in namespace "projected-3771" to be "success or failure"
Aug 19 03:54:38.055: INFO: Pod "projected-volume-dc3ace67-03ab-420e-8d43-f87a480d8f42": Phase="Pending", Reason="", readiness=false. Elapsed: 142.30203ms
Aug 19 03:54:40.064: INFO: Pod "projected-volume-dc3ace67-03ab-420e-8d43-f87a480d8f42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150418681s
Aug 19 03:54:42.071: INFO: Pod "projected-volume-dc3ace67-03ab-420e-8d43-f87a480d8f42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.158259304s
STEP: Saw pod success
Aug 19 03:54:42.072: INFO: Pod "projected-volume-dc3ace67-03ab-420e-8d43-f87a480d8f42" satisfied condition "success or failure"
Aug 19 03:54:42.557: INFO: Trying to get logs from node iruya-worker pod projected-volume-dc3ace67-03ab-420e-8d43-f87a480d8f42 container projected-all-volume-test: 
STEP: delete the pod
Aug 19 03:54:42.750: INFO: Waiting for pod projected-volume-dc3ace67-03ab-420e-8d43-f87a480d8f42 to disappear
Aug 19 03:54:42.915: INFO: Pod projected-volume-dc3ace67-03ab-420e-8d43-f87a480d8f42 no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:54:42.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3771" for this suite.
Aug 19 03:54:48.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:54:49.144: INFO: namespace projected-3771 deletion completed in 6.200997735s

• [SLOW TEST:11.773 seconds]
[sig-storage] Projected combined
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
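Note: a projected volume merges several sources (ConfigMap, Secret, and downward API fields, as in this test) into a single mount point. A minimal sketch with hypothetical resource names:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-volume-demo     # hypothetical name
    spec:
      containers:
      - name: projected-all-volume-test
        image: busybox
        command: ["sh", "-c", "ls /all-volumes"]
        volumeMounts:
        - name: all-in-one
          mountPath: /all-volumes
      volumes:
      - name: all-in-one
        projected:
          sources:
          - configMap:
              name: my-configmap      # hypothetical
          - secret:
              name: my-secret         # hypothetical
          - downwardAPI:
              items:
              - path: pod_name
                fieldRef:
                  fieldPath: metadata.name
      restartPolicy: Never
    EOF
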
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:54:49.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 03:54:49.261: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Aug 19 03:54:49.279: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:54:49.291: INFO: Number of nodes with available pods: 0
Aug 19 03:54:49.291: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:54:50.305: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:54:50.313: INFO: Number of nodes with available pods: 0
Aug 19 03:54:50.313: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:54:51.343: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:54:51.350: INFO: Number of nodes with available pods: 0
Aug 19 03:54:51.350: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:54:52.300: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:54:52.304: INFO: Number of nodes with available pods: 0
Aug 19 03:54:52.304: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 03:54:53.380: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:54:53.386: INFO: Number of nodes with available pods: 1
Aug 19 03:54:53.386: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 19 03:54:54.305: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:54:54.311: INFO: Number of nodes with available pods: 2
Aug 19 03:54:54.311: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Aug 19 03:54:54.377: INFO: Wrong image for pod: daemon-set-9sxm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 03:54:54.377: INFO: Wrong image for pod: daemon-set-mpxcq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 03:54:54.413: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:54:55.456: INFO: Wrong image for pod: daemon-set-9sxm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 03:54:55.456: INFO: Wrong image for pod: daemon-set-mpxcq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 03:54:55.463: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:54:56.423: INFO: Wrong image for pod: daemon-set-9sxm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 03:54:56.423: INFO: Wrong image for pod: daemon-set-mpxcq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 03:54:56.434: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:54:57.511: INFO: Wrong image for pod: daemon-set-9sxm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 03:54:57.511: INFO: Wrong image for pod: daemon-set-mpxcq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 03:54:57.511: INFO: Pod daemon-set-mpxcq is not available
Aug 19 03:54:57.523: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:54:58.428: INFO: Wrong image for pod: daemon-set-9sxm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 03:54:58.428: INFO: Pod daemon-set-w99s6 is not available
Aug 19 03:54:58.438: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:54:59.421: INFO: Wrong image for pod: daemon-set-9sxm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 03:54:59.421: INFO: Pod daemon-set-w99s6 is not available
Aug 19 03:54:59.431: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:55:00.422: INFO: Wrong image for pod: daemon-set-9sxm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 03:55:00.423: INFO: Pod daemon-set-w99s6 is not available
Aug 19 03:55:00.435: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:55:01.423: INFO: Wrong image for pod: daemon-set-9sxm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 03:55:01.423: INFO: Pod daemon-set-w99s6 is not available
Aug 19 03:55:01.434: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:55:02.481: INFO: Wrong image for pod: daemon-set-9sxm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 03:55:02.491: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:55:03.614: INFO: Wrong image for pod: daemon-set-9sxm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 03:55:03.651: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:55:04.422: INFO: Wrong image for pod: daemon-set-9sxm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 03:55:04.422: INFO: Pod daemon-set-9sxm7 is not available
Aug 19 03:55:04.431: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:55:05.486: INFO: Wrong image for pod: daemon-set-9sxm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 03:55:05.487: INFO: Pod daemon-set-9sxm7 is not available
Aug 19 03:55:05.495: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:55:06.422: INFO: Wrong image for pod: daemon-set-9sxm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 03:55:06.422: INFO: Pod daemon-set-9sxm7 is not available
Aug 19 03:55:06.433: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:55:07.423: INFO: Wrong image for pod: daemon-set-9sxm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 03:55:07.423: INFO: Pod daemon-set-9sxm7 is not available
Aug 19 03:55:07.449: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:55:08.423: INFO: Wrong image for pod: daemon-set-9sxm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 03:55:08.424: INFO: Pod daemon-set-9sxm7 is not available
Aug 19 03:55:08.433: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:55:09.421: INFO: Wrong image for pod: daemon-set-9sxm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 03:55:09.421: INFO: Pod daemon-set-9sxm7 is not available
Aug 19 03:55:09.430: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:55:10.423: INFO: Wrong image for pod: daemon-set-9sxm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 03:55:10.423: INFO: Pod daemon-set-9sxm7 is not available
Aug 19 03:55:10.432: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:55:11.421: INFO: Wrong image for pod: daemon-set-9sxm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 03:55:11.421: INFO: Pod daemon-set-9sxm7 is not available
Aug 19 03:55:11.431: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:55:12.774: INFO: Wrong image for pod: daemon-set-9sxm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 03:55:12.775: INFO: Pod daemon-set-9sxm7 is not available
Aug 19 03:55:12.786: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:55:13.421: INFO: Wrong image for pod: daemon-set-9sxm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 03:55:13.421: INFO: Pod daemon-set-9sxm7 is not available
Aug 19 03:55:13.430: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:55:14.420: INFO: Pod daemon-set-x54qr is not available
Aug 19 03:55:14.428: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Aug 19 03:55:14.436: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:55:14.440: INFO: Number of nodes with available pods: 1
Aug 19 03:55:14.440: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 19 03:55:15.497: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:55:15.503: INFO: Number of nodes with available pods: 1
Aug 19 03:55:15.503: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 19 03:55:16.454: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:55:16.462: INFO: Number of nodes with available pods: 1
Aug 19 03:55:16.462: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 19 03:55:17.450: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 03:55:17.466: INFO: Number of nodes with available pods: 2
Aug 19 03:55:17.466: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6031, will wait for the garbage collector to delete the pods
Aug 19 03:55:17.571: INFO: Deleting DaemonSet.extensions daemon-set took: 8.31194ms
Aug 19 03:55:17.872: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.797367ms
Aug 19 03:55:33.725: INFO: Number of nodes with available pods: 0
Aug 19 03:55:33.725: INFO: Number of running nodes: 0, number of available pods: 0
Aug 19 03:55:33.730: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6031/daemonsets","resourceVersion":"980207"},"items":null}

Aug 19 03:55:33.733: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6031/pods","resourceVersion":"980207"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:55:33.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6031" for this suite.
Aug 19 03:55:39.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:55:40.015: INFO: namespace daemonsets-6031 deletion completed in 6.258457986s

• [SLOW TEST:50.868 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
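Note: the update loop above is the DaemonSet controller's RollingUpdate strategy at work: it deletes one old pod per node (maxUnavailable defaults to 1) and waits for the replacement to become available before moving to the next node, which is why each node briefly logs "is not available". A hand-run sketch of the same flow, reusing the daemonsets-6031 namespace; the container name is an assumption:

    # Assuming a DaemonSet "daemon-set" whose spec sets:
    #   updateStrategy:
    #     type: RollingUpdate
    kubectl set image daemonset/daemon-set \
      app=gcr.io/kubernetes-e2e-test-images/redis:1.0 \
      --namespace=daemonsets-6031          # "app" container name is assumed

    # Watch old pods drain node by node and new ones come up:
    kubectl rollout status daemonset/daemon-set --namespace=daemonsets-6031
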
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:55:40.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Aug 19 03:55:40.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7385'
Aug 19 03:55:41.805: INFO: stderr: ""
Aug 19 03:55:41.805: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Aug 19 03:55:42.923: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 03:55:42.924: INFO: Found 0 / 1
Aug 19 03:55:43.941: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 03:55:43.942: INFO: Found 0 / 1
Aug 19 03:55:44.813: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 03:55:44.813: INFO: Found 0 / 1
Aug 19 03:55:45.889: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 03:55:45.889: INFO: Found 0 / 1
Aug 19 03:55:46.871: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 03:55:46.871: INFO: Found 0 / 1
Aug 19 03:55:47.856: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 03:55:47.856: INFO: Found 0 / 1
Aug 19 03:55:48.813: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 03:55:48.813: INFO: Found 1 / 1
Aug 19 03:55:48.814: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 19 03:55:48.819: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 03:55:48.819: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Aug 19 03:55:48.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2czxd redis-master --namespace=kubectl-7385'
Aug 19 03:55:50.012: INFO: stderr: ""
Aug 19 03:55:50.012: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 19 Aug 03:55:48.043 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 19 Aug 03:55:48.043 # Server started, Redis version 3.2.12\n1:M 19 Aug 03:55:48.043 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 19 Aug 03:55:48.043 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Aug 19 03:55:50.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2czxd redis-master --namespace=kubectl-7385 --tail=1'
Aug 19 03:55:51.215: INFO: stderr: ""
Aug 19 03:55:51.215: INFO: stdout: "1:M 19 Aug 03:55:48.043 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Aug 19 03:55:51.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2czxd redis-master --namespace=kubectl-7385 --limit-bytes=1'
Aug 19 03:55:52.414: INFO: stderr: ""
Aug 19 03:55:52.414: INFO: stdout: " "
STEP: exposing timestamps
Aug 19 03:55:52.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2czxd redis-master --namespace=kubectl-7385 --tail=1 --timestamps'
Aug 19 03:55:53.524: INFO: stderr: ""
Aug 19 03:55:53.524: INFO: stdout: "2020-08-19T03:55:48.284444411Z 1:M 19 Aug 03:55:48.043 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Aug 19 03:55:56.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2czxd redis-master --namespace=kubectl-7385 --since=1s'
Aug 19 03:55:57.247: INFO: stderr: ""
Aug 19 03:55:57.247: INFO: stdout: ""
Aug 19 03:55:57.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2czxd redis-master --namespace=kubectl-7385 --since=24h'
Aug 19 03:55:58.405: INFO: stderr: ""
Aug 19 03:55:58.405: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 19 Aug 03:55:48.043 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 19 Aug 03:55:48.043 # Server started, Redis version 3.2.12\n1:M 19 Aug 03:55:48.043 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 19 Aug 03:55:48.043 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Aug 19 03:55:58.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7385'
Aug 19 03:55:59.574: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 19 03:55:59.575: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Aug 19 03:55:59.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-7385'
Aug 19 03:56:00.734: INFO: stderr: "No resources found.\n"
Aug 19 03:56:00.734: INFO: stdout: ""
Aug 19 03:56:00.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-7385 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 19 03:56:01.877: INFO: stderr: ""
Aug 19 03:56:01.877: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:56:01.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7385" for this suite.
Aug 19 03:56:24.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:56:24.871: INFO: namespace kubectl-7385 deletion completed in 22.985353225s

• [SLOW TEST:44.855 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
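Note: the filters exercised above are all standard kubectl logs flags. A quick reference sketch using the pod name from the log (the pod is gone once the namespace is destroyed, so these are illustrative):

    kubectl logs redis-master-2czxd redis-master -n kubectl-7385                   # full log
    kubectl logs redis-master-2czxd redis-master -n kubectl-7385 --tail=1          # last line only
    kubectl logs redis-master-2czxd redis-master -n kubectl-7385 --limit-bytes=1   # first byte only
    kubectl logs redis-master-2czxd redis-master -n kubectl-7385 --tail=1 --timestamps  # prefix RFC3339 timestamps
    kubectl logs redis-master-2czxd redis-master -n kubectl-7385 --since=1s        # only entries from the last second
    kubectl logs redis-master-2czxd redis-master -n kubectl-7385 --since=24h       # everything from the last day
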
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:56:24.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-2fa0aed3-bfb9-42c7-8caf-15abc500723e
STEP: Creating a pod to test consume configMaps
Aug 19 03:56:25.658: INFO: Waiting up to 5m0s for pod "pod-configmaps-1fd33581-66ec-4ff9-8ecc-f244e4728ae0" in namespace "configmap-7203" to be "success or failure"
Aug 19 03:56:25.721: INFO: Pod "pod-configmaps-1fd33581-66ec-4ff9-8ecc-f244e4728ae0": Phase="Pending", Reason="", readiness=false. Elapsed: 62.42253ms
Aug 19 03:56:27.727: INFO: Pod "pod-configmaps-1fd33581-66ec-4ff9-8ecc-f244e4728ae0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068703744s
Aug 19 03:56:29.733: INFO: Pod "pod-configmaps-1fd33581-66ec-4ff9-8ecc-f244e4728ae0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074065309s
Aug 19 03:56:31.739: INFO: Pod "pod-configmaps-1fd33581-66ec-4ff9-8ecc-f244e4728ae0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.080660132s
STEP: Saw pod success
Aug 19 03:56:31.740: INFO: Pod "pod-configmaps-1fd33581-66ec-4ff9-8ecc-f244e4728ae0" satisfied condition "success or failure"
Aug 19 03:56:31.746: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-1fd33581-66ec-4ff9-8ecc-f244e4728ae0 container configmap-volume-test: 
STEP: delete the pod
Aug 19 03:56:31.805: INFO: Waiting for pod pod-configmaps-1fd33581-66ec-4ff9-8ecc-f244e4728ae0 to disappear
Aug 19 03:56:31.810: INFO: Pod pod-configmaps-1fd33581-66ec-4ff9-8ecc-f244e4728ae0 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:56:31.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7203" for this suite.
Aug 19 03:56:37.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:56:37.980: INFO: namespace configmap-7203 deletion completed in 6.160835723s

• [SLOW TEST:13.108 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
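Note: the non-root part of this test is what the [LinuxOnly] qualifier refers to: the kubelet must leave the ConfigMap volume files readable by the pod's non-root UID. A minimal sketch, with hypothetical names, key, and UID:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-configmap-nonroot     # hypothetical name
    spec:
      securityContext:
        runAsUser: 1000               # any non-root UID
        runAsNonRoot: true
      containers:
      - name: configmap-volume-test
        image: busybox
        command: ["sh", "-c", "cat /etc/config/data-1"]   # hypothetical key
        volumeMounts:
        - name: config
          mountPath: /etc/config
      volumes:
      - name: config
        configMap:
          name: my-configmap          # hypothetical
      restartPolicy: Never
    EOF
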
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:56:37.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-6ba193b0-97ab-4d66-8557-b5b7a1255364
STEP: Creating a pod to test consume secrets
Aug 19 03:56:38.127: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bb58a80c-09c3-4df5-90d9-bb3bd93156dd" in namespace "projected-1866" to be "success or failure"
Aug 19 03:56:38.174: INFO: Pod "pod-projected-secrets-bb58a80c-09c3-4df5-90d9-bb3bd93156dd": Phase="Pending", Reason="", readiness=false. Elapsed: 46.409551ms
Aug 19 03:56:40.278: INFO: Pod "pod-projected-secrets-bb58a80c-09c3-4df5-90d9-bb3bd93156dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150427115s
Aug 19 03:56:42.314: INFO: Pod "pod-projected-secrets-bb58a80c-09c3-4df5-90d9-bb3bd93156dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.185809536s
STEP: Saw pod success
Aug 19 03:56:42.314: INFO: Pod "pod-projected-secrets-bb58a80c-09c3-4df5-90d9-bb3bd93156dd" satisfied condition "success or failure"
Aug 19 03:56:42.319: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-bb58a80c-09c3-4df5-90d9-bb3bd93156dd container projected-secret-volume-test: 
STEP: delete the pod
Aug 19 03:56:42.344: INFO: Waiting for pod pod-projected-secrets-bb58a80c-09c3-4df5-90d9-bb3bd93156dd to disappear
Aug 19 03:56:42.365: INFO: Pod pod-projected-secrets-bb58a80c-09c3-4df5-90d9-bb3bd93156dd no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:56:42.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1866" for this suite.
Aug 19 03:56:48.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:56:48.553: INFO: namespace projected-1866 deletion completed in 6.177952017s

• [SLOW TEST:10.571 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
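Note: in a projected secret source, items remap secret keys onto file paths and a per-item mode sets the file permissions, which is the "mappings and Item Mode" being verified. A minimal sketch with hypothetical names:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-secret-demo   # hypothetical name
    spec:
      containers:
      - name: projected-secret-volume-test
        image: busybox
        command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/new-path"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/projected
      volumes:
      - name: secret-volume
        projected:
          sources:
          - secret:
              name: my-secret           # hypothetical
              items:
              - key: data-1             # hypothetical key
                path: new-path          # remapped file name
                mode: 0400              # per-item file mode
      restartPolicy: Never
    EOF
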
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:56:48.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-3825
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3825 to expose endpoints map[]
Aug 19 03:56:48.764: INFO: successfully validated that service endpoint-test2 in namespace services-3825 exposes endpoints map[] (13.192511ms elapsed)
STEP: Creating pod pod1 in namespace services-3825
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3825 to expose endpoints map[pod1:[80]]
Aug 19 03:56:53.511: INFO: successfully validated that service endpoint-test2 in namespace services-3825 exposes endpoints map[pod1:[80]] (4.67421172s elapsed)
STEP: Creating pod pod2 in namespace services-3825
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3825 to expose endpoints map[pod1:[80] pod2:[80]]
Aug 19 03:56:58.187: INFO: successfully validated that service endpoint-test2 in namespace services-3825 exposes endpoints map[pod1:[80] pod2:[80]] (4.668822772s elapsed)
STEP: Deleting pod pod1 in namespace services-3825
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3825 to expose endpoints map[pod2:[80]]
Aug 19 03:56:58.383: INFO: successfully validated that service endpoint-test2 in namespace services-3825 exposes endpoints map[pod2:[80]] (189.135051ms elapsed)
STEP: Deleting pod pod2 in namespace services-3825
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3825 to expose endpoints map[]
Aug 19 03:56:58.428: INFO: successfully validated that service endpoint-test2 in namespace services-3825 exposes endpoints map[] (39.206976ms elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:56:58.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3825" for this suite.
Aug 19 03:57:21.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:57:21.233: INFO: namespace services-3825 deletion completed in 22.364939082s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:32.679 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
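Note: the bookkeeping validated above can be watched by hand: a Service's Endpoints object tracks exactly the ready pods matching its selector, adding an address as each pod becomes ready and dropping it on deletion. An illustrative sketch using the names from the log (the namespace is destroyed by this point):

    # Watch the service's endpoints while pods come and go:
    kubectl get endpoints endpoint-test2 --namespace=services-3825 -w

    # Deleting a matching pod removes its address from the subsets:
    kubectl delete pod pod1 --namespace=services-3825
    kubectl get endpoints endpoint-test2 --namespace=services-3825 \
      -o jsonpath='{.subsets[*].addresses[*].targetRef.name}'
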
SSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:57:21.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug 19 03:57:21.442: INFO: Waiting up to 5m0s for pod "downward-api-1e1bbb8a-2fda-47ce-b73e-6404f168007b" in namespace "downward-api-4835" to be "success or failure"
Aug 19 03:57:21.451: INFO: Pod "downward-api-1e1bbb8a-2fda-47ce-b73e-6404f168007b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.278938ms
Aug 19 03:57:23.458: INFO: Pod "downward-api-1e1bbb8a-2fda-47ce-b73e-6404f168007b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015730246s
Aug 19 03:57:25.465: INFO: Pod "downward-api-1e1bbb8a-2fda-47ce-b73e-6404f168007b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023032569s
Aug 19 03:57:27.473: INFO: Pod "downward-api-1e1bbb8a-2fda-47ce-b73e-6404f168007b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030282919s
STEP: Saw pod success
Aug 19 03:57:27.473: INFO: Pod "downward-api-1e1bbb8a-2fda-47ce-b73e-6404f168007b" satisfied condition "success or failure"
Aug 19 03:57:27.478: INFO: Trying to get logs from node iruya-worker2 pod downward-api-1e1bbb8a-2fda-47ce-b73e-6404f168007b container dapi-container: 
STEP: delete the pod
Aug 19 03:57:27.548: INFO: Waiting for pod downward-api-1e1bbb8a-2fda-47ce-b73e-6404f168007b to disappear
Aug 19 03:57:27.565: INFO: Pod downward-api-1e1bbb8a-2fda-47ce-b73e-6404f168007b no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:57:27.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4835" for this suite.
Aug 19 03:57:33.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:57:33.734: INFO: namespace downward-api-4835 deletion completed in 6.161079719s

• [SLOW TEST:12.498 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
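Note: the env vars checked here come from resourceFieldRef, which exposes a container's own requests and limits to its environment. A minimal sketch with hypothetical names and resource values:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-api-env-demo     # hypothetical name
    spec:
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "env | grep -E 'CPU|MEMORY'"]
        resources:
          requests:
            cpu: 250m
            memory: 32Mi
          limits:
            cpu: 500m
            memory: 64Mi
        env:
        - name: CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu
        - name: MEMORY_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.memory
        - name: CPU_REQUEST
          valueFrom:
            resourceFieldRef:
              resource: requests.cpu
        - name: MEMORY_REQUEST
          valueFrom:
            resourceFieldRef:
              resource: requests.memory
      restartPolicy: Never
    EOF
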
SSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:57:33.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Aug 19 03:57:33.880: INFO: Waiting up to 5m0s for pod "client-containers-fcf54438-b2ff-40f9-a299-fc001af9e511" in namespace "containers-6924" to be "success or failure"
Aug 19 03:57:33.901: INFO: Pod "client-containers-fcf54438-b2ff-40f9-a299-fc001af9e511": Phase="Pending", Reason="", readiness=false. Elapsed: 21.551725ms
Aug 19 03:57:35.907: INFO: Pod "client-containers-fcf54438-b2ff-40f9-a299-fc001af9e511": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02703261s
Aug 19 03:57:37.912: INFO: Pod "client-containers-fcf54438-b2ff-40f9-a299-fc001af9e511": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032206115s
STEP: Saw pod success
Aug 19 03:57:37.912: INFO: Pod "client-containers-fcf54438-b2ff-40f9-a299-fc001af9e511" satisfied condition "success or failure"
Aug 19 03:57:37.916: INFO: Trying to get logs from node iruya-worker2 pod client-containers-fcf54438-b2ff-40f9-a299-fc001af9e511 container test-container: 
STEP: delete the pod
Aug 19 03:57:38.111: INFO: Waiting for pod client-containers-fcf54438-b2ff-40f9-a299-fc001af9e511 to disappear
Aug 19 03:57:38.122: INFO: Pod client-containers-fcf54438-b2ff-40f9-a299-fc001af9e511 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:57:38.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6924" for this suite.
Aug 19 03:57:44.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:57:44.320: INFO: namespace containers-6924 deletion completed in 6.189718864s

• [SLOW TEST:10.585 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
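[editor's note] A similar spec-only sketch for the Docker Containers test above: Command and Args are deliberately left blank, so the kubelet runs the image's own ENTRYPOINT/CMD unmodified. The pod name and image are illustrative; the suite uses its own e2e test image.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox:1.29", // illustrative
				// Command and Args are intentionally omitted: with both blank,
				// the image defaults (ENTRYPOINT/CMD) apply, which is exactly
				// what the test asserts by inspecting the container's output.
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}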
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:57:44.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-5ac6ecb9-79c0-4c96-b701-a0c24d516b35
STEP: Creating a pod to test consume secrets
Aug 19 03:57:44.637: INFO: Waiting up to 5m0s for pod "pod-secrets-bd426690-fa9f-43fd-9090-d4113d038986" in namespace "secrets-1" to be "success or failure"
Aug 19 03:57:44.660: INFO: Pod "pod-secrets-bd426690-fa9f-43fd-9090-d4113d038986": Phase="Pending", Reason="", readiness=false. Elapsed: 23.068536ms
Aug 19 03:57:46.703: INFO: Pod "pod-secrets-bd426690-fa9f-43fd-9090-d4113d038986": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065340347s
Aug 19 03:57:48.707: INFO: Pod "pod-secrets-bd426690-fa9f-43fd-9090-d4113d038986": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069661367s
Aug 19 03:57:50.712: INFO: Pod "pod-secrets-bd426690-fa9f-43fd-9090-d4113d038986": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.075007404s
STEP: Saw pod success
Aug 19 03:57:50.713: INFO: Pod "pod-secrets-bd426690-fa9f-43fd-9090-d4113d038986" satisfied condition "success or failure"
Aug 19 03:57:50.716: INFO: Trying to get logs from node iruya-worker pod pod-secrets-bd426690-fa9f-43fd-9090-d4113d038986 container secret-volume-test: 
STEP: delete the pod
Aug 19 03:57:50.733: INFO: Waiting for pod pod-secrets-bd426690-fa9f-43fd-9090-d4113d038986 to disappear
Aug 19 03:57:50.738: INFO: Pod pod-secrets-bd426690-fa9f-43fd-9090-d4113d038986 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:57:50.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1" for this suite.
Aug 19 03:57:56.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:57:56.872: INFO: namespace secrets-1 deletion completed in 6.124411056s

• [SLOW TEST:12.551 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
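[editor's note] The secret-volume test above boils down to objects like the following sketch: a Secret mounted as a volume with an explicit defaultMode, in a pod that runs as a non-root UID with an fsGroup so the projected files are readable through the supplementary group. Spec-only, printed as JSON; the UID, GID, file mode, and secret key/value are illustrative assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The secret the pod consumes; key and value are placeholders.
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-demo"},
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}

	mode := int32(0440)    // defaultMode for files projected from the secret
	uid := int64(1000)     // non-root user
	fsGroup := int64(1001) // supplementary group applied to the volume

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &uid,
				FSGroup:   &fsGroup,
			},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  secret.Name,
						DefaultMode: &mode,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox:1.29", // illustrative
				Command: []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}

	for _, obj := range []interface{}{secret, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}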
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:57:56.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:57:56.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7067" for this suite.
Aug 19 03:58:19.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:58:19.275: INFO: namespace kubelet-test-7067 deletion completed in 22.252305959s

• [SLOW TEST:22.402 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
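[editor's note] The kubelet test above shows no STEP lines because its setup happens in the nested BeforeEach blocks: it creates a busybox pod whose command always exits non-zero, then verifies that the pod can still be deleted cleanly. A hedged client-go sketch of that flow, assuming a reachable cluster and the client-go release that pairs with this 1.15 suite (newer client-go adds context.Context and options arguments to Create/Delete); the pod name, image, and namespace are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path mirrors the one this suite logs; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "busybox:1.29",         // illustrative image
				Command: []string{"/bin/false"}, // always exits non-zero
			}},
		},
	}

	// Create the pod, then delete it. The point of the test is that a pod
	// whose container never succeeds is still cleanly deletable.
	if _, err := client.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
	if err := client.CoreV1().Pods("default").Delete(pod.Name, &metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created and deleted", pod.Name)
}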
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 03:58:19.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0819 03:58:20.305566       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 19 03:58:20.305: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 03:58:20.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2793" for this suite.
Aug 19 03:58:30.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 03:58:30.450: INFO: namespace gc-2793 deletion completed in 10.13648543s

• [SLOW TEST:11.174 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
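[editor's note] What "not orphaning" means in the garbage-collector test above: the Deployment is deleted with a propagation policy other than Orphan, so the garbage collector must also remove the dependent ReplicaSet and its pods. The interim STEP lines ("expected 0 rs, got 1 rs") show the test polling while the collector catches up. A client-go sketch under the same 1.15-era signature assumption; names, image, and replica count are illustrative.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	labels := map[string]string{"app": "gc-demo"}
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "gc-demo"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(2),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "nginx:1.17", // illustrative image
					}},
				},
			},
		},
	}
	if _, err := client.AppsV1().Deployments("default").Create(dep); err != nil {
		panic(err)
	}

	// Deleting without orphaning: Background (or Foreground) propagation
	// tells the garbage collector to remove the dependent ReplicaSet and
	// its pods, which is the behaviour the test asserts. Passing
	// DeletePropagationOrphan instead would leave the RS behind.
	policy := metav1.DeletePropagationBackground
	err = client.AppsV1().Deployments("default").Delete(dep.Name, &metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
	fmt.Println("deployment deleted; RS and pods will be garbage collected")
}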
S
Aug 19 03:58:30.451: INFO: Running AfterSuite actions on all nodes
Aug 19 03:58:30.451: INFO: Running AfterSuite actions on node 1
Aug 19 03:58:30.451: INFO: Skipping dumping logs from cluster

Ran 215 of 4413 Specs in 7164.678 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4198 Skipped
PASS