I1221 12:56:08.522258 9 e2e.go:243] Starting e2e run "75c1bde1-df79-4ac6-8f79-c27ea85ea247" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1576932967 - Will randomize all specs
Will run 215 of 4412 specs

Dec 21 12:56:08.779: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 12:56:08.784: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 21 12:56:08.834: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 21 12:56:08.867: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 21 12:56:08.867: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 21 12:56:08.867: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 21 12:56:08.884: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 21 12:56:08.884: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Dec 21 12:56:08.884: INFO: e2e test version: v1.15.7
Dec 21 12:56:08.889: INFO: kube-apiserver version: v1.15.1
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 12:56:08.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
Dec 21 12:56:08.988: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-721bd90a-4575-4cad-96c3-58f08a776f3a
STEP: Creating a pod to test consume secrets
Dec 21 12:56:09.003: INFO: Waiting up to 5m0s for pod "pod-secrets-cc78ad87-c59f-4466-a990-ae120cd98226" in namespace "secrets-7230" to be "success or failure"
Dec 21 12:56:09.011: INFO: Pod "pod-secrets-cc78ad87-c59f-4466-a990-ae120cd98226": Phase="Pending", Reason="", readiness=false. Elapsed: 7.558457ms
Dec 21 12:56:11.021: INFO: Pod "pod-secrets-cc78ad87-c59f-4466-a990-ae120cd98226": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017204593s
Dec 21 12:56:13.034: INFO: Pod "pod-secrets-cc78ad87-c59f-4466-a990-ae120cd98226": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030427355s
Dec 21 12:56:15.041: INFO: Pod "pod-secrets-cc78ad87-c59f-4466-a990-ae120cd98226": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037461857s
Dec 21 12:56:17.047: INFO: Pod "pod-secrets-cc78ad87-c59f-4466-a990-ae120cd98226": Phase="Pending", Reason="", readiness=false. Elapsed: 8.043990943s
Dec 21 12:56:19.052: INFO: Pod "pod-secrets-cc78ad87-c59f-4466-a990-ae120cd98226": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.049024721s
STEP: Saw pod success
Dec 21 12:56:19.052: INFO: Pod "pod-secrets-cc78ad87-c59f-4466-a990-ae120cd98226" satisfied condition "success or failure"
Dec 21 12:56:19.055: INFO: Trying to get logs from node iruya-node pod pod-secrets-cc78ad87-c59f-4466-a990-ae120cd98226 container secret-volume-test: 
STEP: delete the pod
Dec 21 12:56:19.151: INFO: Waiting for pod pod-secrets-cc78ad87-c59f-4466-a990-ae120cd98226 to disappear
Dec 21 12:56:19.329: INFO: Pod pod-secrets-cc78ad87-c59f-4466-a990-ae120cd98226 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 12:56:19.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7230" for this suite.
Dec 21 12:56:25.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:56:25.540: INFO: namespace secrets-7230 deletion completed in 6.202861908s

• [SLOW TEST:16.651 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 12:56:25.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 21 12:56:25.646: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/:
alternatives.log alternatives.l... (200; 24.210415ms)
Dec 21 12:56:25.663: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 16.616704ms)
Dec 21 12:56:25.668: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.3238ms)
Dec 21 12:56:25.674: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.368401ms)
Dec 21 12:56:25.679: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.73039ms)
Dec 21 12:56:25.687: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.798541ms)
Dec 21 12:56:25.697: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 10.168098ms)
Dec 21 12:56:25.705: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.67634ms)
Dec 21 12:56:25.712: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.12579ms)
Dec 21 12:56:25.720: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.251136ms)
Dec 21 12:56:25.736: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 15.766222ms)
Dec 21 12:56:25.746: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 10.120411ms)
Dec 21 12:56:25.755: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 8.85001ms)
Dec 21 12:56:25.761: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.207237ms)
Dec 21 12:56:25.766: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.897756ms)
Dec 21 12:56:25.771: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.016273ms)
Dec 21 12:56:25.778: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.15221ms)
Dec 21 12:56:25.783: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.222226ms)
Dec 21 12:56:25.789: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.967109ms)
Dec 21 12:56:25.808: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 18.337822ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 12:56:25.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2450" for this suite.
Dec 21 12:56:31.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:56:31.954: INFO: namespace proxy-2450 deletion completed in 6.140644244s

• [SLOW TEST:6.413 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
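
As an aside for readers reproducing this by hand (an illustration, not output of the suite): the endpoint the test polls twenty times is the node proxy "logs" subresource, which can be queried directly with kubectl:

# List the kubelet's log directory through the API server's node proxy,
# i.e. the same path the test above requests ("iruya-node" is this run's node).
kubectl get --raw "/api/v1/nodes/iruya-node/proxy/logs/"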
------------------------------
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 12:56:31.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 12:56:42.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9994" for this suite.
Dec 21 12:57:28.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:57:28.342: INFO: namespace kubelet-test-9994 deletion completed in 46.166911261s

• [SLOW TEST:56.389 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
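
The manifest the suite generates is not echoed in the log; a minimal pod that reproduces the same check (all names here are hypothetical) would be:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: readonly-root-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "touch /should-fail"]   # expected to fail with EROFS
    securityContext:
      readOnlyRootFilesystem: true
EOF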
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 12:57:28.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Dec 21 12:57:28.482: INFO: Waiting up to 5m0s for pod "client-containers-f55518db-abf9-40eb-8bbc-bfc67276a1b5" in namespace "containers-983" to be "success or failure"
Dec 21 12:57:28.518: INFO: Pod "client-containers-f55518db-abf9-40eb-8bbc-bfc67276a1b5": Phase="Pending", Reason="", readiness=false. Elapsed: 35.710214ms
Dec 21 12:57:30.538: INFO: Pod "client-containers-f55518db-abf9-40eb-8bbc-bfc67276a1b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055103169s
Dec 21 12:57:32.557: INFO: Pod "client-containers-f55518db-abf9-40eb-8bbc-bfc67276a1b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073884268s
Dec 21 12:57:34.569: INFO: Pod "client-containers-f55518db-abf9-40eb-8bbc-bfc67276a1b5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086119444s
Dec 21 12:57:36.587: INFO: Pod "client-containers-f55518db-abf9-40eb-8bbc-bfc67276a1b5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.103961026s
Dec 21 12:57:38.610: INFO: Pod "client-containers-f55518db-abf9-40eb-8bbc-bfc67276a1b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.127269886s
STEP: Saw pod success
Dec 21 12:57:38.610: INFO: Pod "client-containers-f55518db-abf9-40eb-8bbc-bfc67276a1b5" satisfied condition "success or failure"
Dec 21 12:57:38.625: INFO: Trying to get logs from node iruya-node pod client-containers-f55518db-abf9-40eb-8bbc-bfc67276a1b5 container test-container: 
STEP: delete the pod
Dec 21 12:57:38.737: INFO: Waiting for pod client-containers-f55518db-abf9-40eb-8bbc-bfc67276a1b5 to disappear
Dec 21 12:57:38.743: INFO: Pod client-containers-f55518db-abf9-40eb-8bbc-bfc67276a1b5 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 12:57:38.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-983" for this suite.
Dec 21 12:57:44.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:57:44.914: INFO: namespace containers-983 deletion completed in 6.164184193s

• [SLOW TEST:16.571 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
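
What this test exercises is the rule that a pod's command replaces the image's ENTRYPOINT (while args would replace CMD). A hand-written equivalent, with hypothetical names:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-override-demo  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # "command" replaces the image's ENTRYPOINT; "args" would replace CMD.
    command: ["/bin/echo", "entrypoint overridden"]
EOF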
------------------------------
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 12:57:44.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 21 12:57:45.609: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 21 12:57:45.649: INFO: Waiting for terminating namespaces to be deleted...
Dec 21 12:57:45.652: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Dec 21 12:57:45.665: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 21 12:57:45.665: INFO: 	Container weave ready: true, restart count 0
Dec 21 12:57:45.665: INFO: 	Container weave-npc ready: true, restart count 0
Dec 21 12:57:45.665: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Dec 21 12:57:45.665: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 21 12:57:45.679: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Dec 21 12:57:45.679: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Dec 21 12:57:45.679: INFO: 	Container kube-scheduler ready: true, restart count 7
Dec 21 12:57:45.679: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 21 12:57:45.679: INFO: 	Container coredns ready: true, restart count 0
Dec 21 12:57:45.679: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Dec 21 12:57:45.679: INFO: 	Container etcd ready: true, restart count 0
Dec 21 12:57:45.679: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 21 12:57:45.679: INFO: 	Container weave ready: true, restart count 0
Dec 21 12:57:45.679: INFO: 	Container weave-npc ready: true, restart count 0
Dec 21 12:57:45.679: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 21 12:57:45.679: INFO: 	Container coredns ready: true, restart count 0
Dec 21 12:57:45.679: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Dec 21 12:57:45.679: INFO: 	Container kube-controller-manager ready: true, restart count 10
Dec 21 12:57:45.679: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Dec 21 12:57:45.679: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 21 12:57:45.679: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Dec 21 12:57:45.679: INFO: 	Container kube-apiserver ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e26444b01ecebe], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 12:57:46.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6214" for this suite.
Dec 21 12:57:52.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:57:52.865: INFO: namespace sched-pred-6214 deletion completed in 6.148890885s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.951 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
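
The FailedScheduling event above is the expected outcome whenever a pod's nodeSelector matches no node label. A sketch that provokes the same event (the name and label are made up):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod-demo       # hypothetical name
spec:
  nodeSelector:
    no-such-label: "42"           # matches neither of the two nodes
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
# The pod stays Pending; the event appears under:
kubectl describe pod restricted-pod-demo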
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 12:57:52.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 21 12:58:01.151: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 12:58:01.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2749" for this suite.
Dec 21 12:58:07.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:58:07.469: INFO: namespace container-runtime-2749 deletion completed in 6.154372839s

• [SLOW TEST:14.603 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
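
The "Expected: &{DONE}" line is the suite comparing the container's termination message against what the container wrote. A hand-rolled version of the same setup, assuming a custom path writable by a non-root user (all names hypothetical):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log   # non-default path
    securityContext:
      runAsUser: 1000             # non-root
EOF
# Read the message back from the container status:
kubectl get pod termination-demo -o \
  jsonpath='{.status.containerStatuses[0].state.terminated.message}'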
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 12:58:07.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Dec 21 12:58:07.583: INFO: Waiting up to 5m0s for pod "pod-80092fb4-4088-4735-ba31-93a564ba97e8" in namespace "emptydir-4708" to be "success or failure"
Dec 21 12:58:07.622: INFO: Pod "pod-80092fb4-4088-4735-ba31-93a564ba97e8": Phase="Pending", Reason="", readiness=false. Elapsed: 39.085893ms
Dec 21 12:58:09.632: INFO: Pod "pod-80092fb4-4088-4735-ba31-93a564ba97e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048429632s
Dec 21 12:58:11.640: INFO: Pod "pod-80092fb4-4088-4735-ba31-93a564ba97e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057154801s
Dec 21 12:58:13.658: INFO: Pod "pod-80092fb4-4088-4735-ba31-93a564ba97e8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074631154s
Dec 21 12:58:15.666: INFO: Pod "pod-80092fb4-4088-4735-ba31-93a564ba97e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.083105579s
STEP: Saw pod success
Dec 21 12:58:15.667: INFO: Pod "pod-80092fb4-4088-4735-ba31-93a564ba97e8" satisfied condition "success or failure"
Dec 21 12:58:15.670: INFO: Trying to get logs from node iruya-node pod pod-80092fb4-4088-4735-ba31-93a564ba97e8 container test-container: 
STEP: delete the pod
Dec 21 12:58:15.734: INFO: Waiting for pod pod-80092fb4-4088-4735-ba31-93a564ba97e8 to disappear
Dec 21 12:58:15.809: INFO: Pod pod-80092fb4-4088-4735-ba31-93a564ba97e8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 12:58:15.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4708" for this suite.
Dec 21 12:58:22.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:58:22.559: INFO: namespace emptydir-4708 deletion completed in 6.706785014s

• [SLOW TEST:15.090 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
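
The test mounts a disk-backed (default medium) emptyDir and asserts on the mount point's mode; roughly the equivalent by hand, with hypothetical names:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /mnt/volume"]   # prints the mount's mode
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir: {}                  # default medium = node disk
EOF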
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 12:58:22.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 21 12:58:22.760: INFO: Waiting up to 5m0s for pod "pod-4c2c4840-4788-4061-a0c6-86a957fa4821" in namespace "emptydir-2538" to be "success or failure"
Dec 21 12:58:22.803: INFO: Pod "pod-4c2c4840-4788-4061-a0c6-86a957fa4821": Phase="Pending", Reason="", readiness=false. Elapsed: 43.457709ms
Dec 21 12:58:24.861: INFO: Pod "pod-4c2c4840-4788-4061-a0c6-86a957fa4821": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101288036s
Dec 21 12:58:26.870: INFO: Pod "pod-4c2c4840-4788-4061-a0c6-86a957fa4821": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109898945s
Dec 21 12:58:28.885: INFO: Pod "pod-4c2c4840-4788-4061-a0c6-86a957fa4821": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12458235s
Dec 21 12:58:30.894: INFO: Pod "pod-4c2c4840-4788-4061-a0c6-86a957fa4821": Phase="Pending", Reason="", readiness=false. Elapsed: 8.133712874s
Dec 21 12:58:32.967: INFO: Pod "pod-4c2c4840-4788-4061-a0c6-86a957fa4821": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.207109624s
STEP: Saw pod success
Dec 21 12:58:32.967: INFO: Pod "pod-4c2c4840-4788-4061-a0c6-86a957fa4821" satisfied condition "success or failure"
Dec 21 12:58:32.975: INFO: Trying to get logs from node iruya-node pod pod-4c2c4840-4788-4061-a0c6-86a957fa4821 container test-container: 
STEP: delete the pod
Dec 21 12:58:33.189: INFO: Waiting for pod pod-4c2c4840-4788-4061-a0c6-86a957fa4821 to disappear
Dec 21 12:58:33.195: INFO: Pod pod-4c2c4840-4788-4061-a0c6-86a957fa4821 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 12:58:33.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2538" for this suite.
Dec 21 12:58:39.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:58:39.402: INFO: namespace emptydir-2538 deletion completed in 6.201574866s

• [SLOW TEST:16.842 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
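
The (root,0777,default) variant differs only in what the container does with the volume: running as root, it creates a 0777 file on the default medium and checks mode and content. A sketch with hypothetical names:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hi > /mnt/f && chmod 0777 /mnt/f && ls -l /mnt/f"]
    securityContext:
      runAsUser: 0                # root
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir: {}
EOF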
------------------------------
SSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 12:58:39.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Dec 21 12:58:51.554: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Dec 21 12:59:01.713: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 12:59:01.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8737" for this suite.
Dec 21 12:59:07.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:59:07.909: INFO: namespace pods-8737 deletion completed in 6.180055165s

• [SLOW TEST:28.507 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
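
Outside the suite, the same graceful-deletion flow can be observed with plain kubectl ("mypod" is a placeholder):

# Graceful delete: the API server sets deletionTimestamp, the kubelet sends
# SIGTERM, and the pod object disappears once termination is observed.
kubectl delete pod mypod --grace-period=30
kubectl get pod mypod --watch   # watch it move through Terminating until removed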
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 12:59:07.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 21 12:59:07.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-9372'
Dec 21 12:59:10.452: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 21 12:59:10.453: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Dec 21 12:59:14.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-9372'
Dec 21 12:59:14.678: INFO: stderr: ""
Dec 21 12:59:14.678: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 12:59:14.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9372" for this suite.
Dec 21 12:59:36.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:59:37.020: INFO: namespace kubectl-9372 deletion completed in 22.292677712s

• [SLOW TEST:29.111 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
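
As the stderr above notes, the deployment generator is deprecated; the non-generator equivalent of the command the test runs is:

kubectl create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-9372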
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 12:59:37.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 21 12:59:37.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8374'
Dec 21 12:59:37.341: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 21 12:59:37.341: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Dec 21 12:59:37.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-8374'
Dec 21 12:59:37.591: INFO: stderr: ""
Dec 21 12:59:37.591: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 12:59:37.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8374" for this suite.
Dec 21 12:59:43.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:59:43.788: INFO: namespace kubectl-8374 deletion completed in 6.193046991s

• [SLOW TEST:6.768 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
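
The job/v1 generator is likewise deprecated. A manifest matching what the test asks for, a Job whose pods restart OnFailure, would look like this (the name is hypothetical):

kubectl apply -f - <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: nginx-job-demo            # hypothetical name
spec:
  template:
    spec:
      restartPolicy: OnFailure    # the behaviour the test verifies
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF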
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 12:59:43.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-19661269-290b-40ae-a24b-9bb19c0adfca
STEP: Creating a pod to test consume configMaps
Dec 21 12:59:43.984: INFO: Waiting up to 5m0s for pod "pod-configmaps-1ab235eb-618f-4218-ade9-0a48d0454616" in namespace "configmap-9175" to be "success or failure"
Dec 21 12:59:44.123: INFO: Pod "pod-configmaps-1ab235eb-618f-4218-ade9-0a48d0454616": Phase="Pending", Reason="", readiness=false. Elapsed: 138.973506ms
Dec 21 12:59:46.132: INFO: Pod "pod-configmaps-1ab235eb-618f-4218-ade9-0a48d0454616": Phase="Pending", Reason="", readiness=false. Elapsed: 2.148109803s
Dec 21 12:59:48.153: INFO: Pod "pod-configmaps-1ab235eb-618f-4218-ade9-0a48d0454616": Phase="Pending", Reason="", readiness=false. Elapsed: 4.169096393s
Dec 21 12:59:50.160: INFO: Pod "pod-configmaps-1ab235eb-618f-4218-ade9-0a48d0454616": Phase="Pending", Reason="", readiness=false. Elapsed: 6.176398871s
Dec 21 12:59:52.169: INFO: Pod "pod-configmaps-1ab235eb-618f-4218-ade9-0a48d0454616": Phase="Pending", Reason="", readiness=false. Elapsed: 8.18474831s
Dec 21 12:59:54.175: INFO: Pod "pod-configmaps-1ab235eb-618f-4218-ade9-0a48d0454616": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.19086989s
STEP: Saw pod success
Dec 21 12:59:54.175: INFO: Pod "pod-configmaps-1ab235eb-618f-4218-ade9-0a48d0454616" satisfied condition "success or failure"
Dec 21 12:59:54.178: INFO: Trying to get logs from node iruya-node pod pod-configmaps-1ab235eb-618f-4218-ade9-0a48d0454616 container configmap-volume-test: 
STEP: delete the pod
Dec 21 12:59:54.312: INFO: Waiting for pod pod-configmaps-1ab235eb-618f-4218-ade9-0a48d0454616 to disappear
Dec 21 12:59:54.325: INFO: Pod pod-configmaps-1ab235eb-618f-4218-ade9-0a48d0454616 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 12:59:54.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9175" for this suite.
Dec 21 13:00:00.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:00:00.487: INFO: namespace configmap-9175 deletion completed in 6.148059601s

• [SLOW TEST:16.698 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
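
The "mappings as non-root" variant combines a configMap items key-to-path mapping with a non-root securityContext. A self-contained sketch (names and values invented):

kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config               # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mapping-demo    # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000               # the "as non-root" part
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/config/path/to/data-2"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: demo-config
      items:                      # key-to-path mapping
      - key: data-1
        path: path/to/data-2
EOF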
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:00:00.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Dec 21 13:00:01.135: INFO: created pod pod-service-account-defaultsa
Dec 21 13:00:01.135: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Dec 21 13:00:01.178: INFO: created pod pod-service-account-mountsa
Dec 21 13:00:01.179: INFO: pod pod-service-account-mountsa service account token volume mount: true
Dec 21 13:00:01.217: INFO: created pod pod-service-account-nomountsa
Dec 21 13:00:01.217: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Dec 21 13:00:01.289: INFO: created pod pod-service-account-defaultsa-mountspec
Dec 21 13:00:01.290: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Dec 21 13:00:01.314: INFO: created pod pod-service-account-mountsa-mountspec
Dec 21 13:00:01.314: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Dec 21 13:00:01.358: INFO: created pod pod-service-account-nomountsa-mountspec
Dec 21 13:00:01.358: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Dec 21 13:00:01.946: INFO: created pod pod-service-account-defaultsa-nomountspec
Dec 21 13:00:01.946: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Dec 21 13:00:02.304: INFO: created pod pod-service-account-mountsa-nomountspec
Dec 21 13:00:02.304: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Dec 21 13:00:02.854: INFO: created pod pod-service-account-nomountsa-nomountspec
Dec 21 13:00:02.855: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:00:02.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9023" for this suite.
Dec 21 13:00:41.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:00:41.506: INFO: namespace svcaccounts-9023 deletion completed in 38.569473901s

• [SLOW TEST:41.018 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
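
Opting out of automount, as the nomountsa/nomountspec pods above do, is a single field on the pod spec (or on the ServiceAccount; the pod-level setting wins when both are present). A sketch with a hypothetical name:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nomount-demo              # hypothetical name
spec:
  automountServiceAccountToken: false   # no token volume is injected
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF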
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:00:41.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 21 13:00:41.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:00:49.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9081" for this suite.
Dec 21 13:01:41.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:01:42.038: INFO: namespace pods-9081 deletion completed in 52.191854919s

• [SLOW TEST:60.532 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
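
Both kubectl logs and the websocket client this test uses read the same pod "log" subresource; by hand ("mypod" and the namespace are placeholders):

kubectl logs mypod
# or hit the subresource directly:
kubectl get --raw "/api/v1/namespaces/default/pods/mypod/log"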
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:01:42.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Dec 21 13:01:50.735: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6073 pod-service-account-bc05e138-5a8e-4249-a9e0-be298b9e6cf2 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Dec 21 13:01:51.200: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6073 pod-service-account-bc05e138-5a8e-4249-a9e0-be298b9e6cf2 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Dec 21 13:01:51.618: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6073 pod-service-account-bc05e138-5a8e-4249-a9e0-be298b9e6cf2 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:01:52.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6073" for this suite.
Dec 21 13:01:58.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:01:58.287: INFO: namespace svcaccounts-6073 deletion completed in 6.156642978s

• [SLOW TEST:16.249 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
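
The three files read above (token, ca.crt, namespace) are enough to talk to the API server from inside any pod with a mounted token; a common hand-run check:

SA=/var/run/secrets/kubernetes.io/serviceaccount
TOKEN=$(cat "$SA/token")
curl -sS --cacert "$SA/ca.crt" \
  -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api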
------------------------------
SSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:01:58.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 21 13:01:58.437: INFO: Waiting up to 5m0s for pod "downward-api-4eb84bb9-6fe6-4dd1-8186-403ada9fc208" in namespace "downward-api-3977" to be "success or failure"
Dec 21 13:01:58.466: INFO: Pod "downward-api-4eb84bb9-6fe6-4dd1-8186-403ada9fc208": Phase="Pending", Reason="", readiness=false. Elapsed: 28.411811ms
Dec 21 13:02:00.481: INFO: Pod "downward-api-4eb84bb9-6fe6-4dd1-8186-403ada9fc208": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043620377s
Dec 21 13:02:02.493: INFO: Pod "downward-api-4eb84bb9-6fe6-4dd1-8186-403ada9fc208": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056174951s
Dec 21 13:02:04.509: INFO: Pod "downward-api-4eb84bb9-6fe6-4dd1-8186-403ada9fc208": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07123222s
Dec 21 13:02:06.514: INFO: Pod "downward-api-4eb84bb9-6fe6-4dd1-8186-403ada9fc208": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076795127s
Dec 21 13:02:08.525: INFO: Pod "downward-api-4eb84bb9-6fe6-4dd1-8186-403ada9fc208": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.087843944s
STEP: Saw pod success
Dec 21 13:02:08.525: INFO: Pod "downward-api-4eb84bb9-6fe6-4dd1-8186-403ada9fc208" satisfied condition "success or failure"
Dec 21 13:02:08.531: INFO: Trying to get logs from node iruya-node pod downward-api-4eb84bb9-6fe6-4dd1-8186-403ada9fc208 container dapi-container: 
STEP: delete the pod
Dec 21 13:02:08.638: INFO: Waiting for pod downward-api-4eb84bb9-6fe6-4dd1-8186-403ada9fc208 to disappear
Dec 21 13:02:08.661: INFO: Pod downward-api-4eb84bb9-6fe6-4dd1-8186-403ada9fc208 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:02:08.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3977" for this suite.
Dec 21 13:02:14.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:02:14.942: INFO: namespace downward-api-3977 deletion completed in 6.27505404s

• [SLOW TEST:16.654 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
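
The env vars the test checks come from downward API fieldRefs; a minimal pod that exposes the same three fields (names hypothetical):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo         # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep ^POD_"]
    env:
    - name: POD_NAME
      valueFrom: {fieldRef: {fieldPath: metadata.name}}
    - name: POD_NAMESPACE
      valueFrom: {fieldRef: {fieldPath: metadata.namespace}}
    - name: POD_IP
      valueFrom: {fieldRef: {fieldPath: status.podIP}}
EOF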
------------------------------
S
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:02:14.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-c802fb89-e36d-435f-8858-9178d927ef6f
STEP: Creating secret with name secret-projected-all-test-volume-1885b84f-06a2-4716-bacc-a3581544251e
STEP: Creating a pod to test Check all projections for projected volume plugin
Dec 21 13:02:15.021: INFO: Waiting up to 5m0s for pod "projected-volume-8bb11d3d-bff0-4eff-be48-ed99ca47a1cc" in namespace "projected-1703" to be "success or failure"
Dec 21 13:02:15.029: INFO: Pod "projected-volume-8bb11d3d-bff0-4eff-be48-ed99ca47a1cc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.148867ms
Dec 21 13:02:17.037: INFO: Pod "projected-volume-8bb11d3d-bff0-4eff-be48-ed99ca47a1cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015780285s
Dec 21 13:02:19.054: INFO: Pod "projected-volume-8bb11d3d-bff0-4eff-be48-ed99ca47a1cc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033463728s
Dec 21 13:02:21.070: INFO: Pod "projected-volume-8bb11d3d-bff0-4eff-be48-ed99ca47a1cc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049001068s
Dec 21 13:02:23.076: INFO: Pod "projected-volume-8bb11d3d-bff0-4eff-be48-ed99ca47a1cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.054832468s
STEP: Saw pod success
Dec 21 13:02:23.076: INFO: Pod "projected-volume-8bb11d3d-bff0-4eff-be48-ed99ca47a1cc" satisfied condition "success or failure"
Dec 21 13:02:23.078: INFO: Trying to get logs from node iruya-node pod projected-volume-8bb11d3d-bff0-4eff-be48-ed99ca47a1cc container projected-all-volume-test: 
STEP: delete the pod
Dec 21 13:02:23.151: INFO: Waiting for pod projected-volume-8bb11d3d-bff0-4eff-be48-ed99ca47a1cc to disappear
Dec 21 13:02:23.258: INFO: Pod projected-volume-8bb11d3d-bff0-4eff-be48-ed99ca47a1cc no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:02:23.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1703" for this suite.
Dec 21 13:02:29.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:02:29.429: INFO: namespace projected-1703 deletion completed in 6.164867367s

• [SLOW TEST:14.487 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
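
A projected volume merges configMap, secret, and downwardAPI sources under one mount point, which is what this test asserts. A sketch, assuming the referenced configMap and secret already exist:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-all-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox
    command: ["sh", "-c", "ls -R /all-in-one"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all-in-one
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: demo-config       # hypothetical; must exist
      - secret:
          name: demo-secret       # hypothetical; must exist
      - downwardAPI:
          items:
          - path: labels
            fieldRef: {fieldPath: metadata.labels}
EOF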
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:02:29.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-7877
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-7877
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7877
Dec 21 13:02:29.719: INFO: Found 0 stateful pods, waiting for 1
Dec 21 13:02:39.889: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Dec 21 13:02:49.737: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Dec 21 13:02:49.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7877 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 21 13:02:50.492: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 21 13:02:50.492: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 21 13:02:50.492: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 21 13:02:50.506: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 21 13:03:00.520: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 21 13:03:00.520: INFO: Waiting for statefulset status.replicas updated to 0
Dec 21 13:03:00.549: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999764s
Dec 21 13:03:01.556: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.989448245s
Dec 21 13:03:02.565: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.982740917s
Dec 21 13:03:03.571: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.974448483s
Dec 21 13:03:04.578: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.967796016s
Dec 21 13:03:05.586: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.961235429s
Dec 21 13:03:06.593: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.952646255s
Dec 21 13:03:07.604: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.946261537s
Dec 21 13:03:08.620: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.934555785s
Dec 21 13:03:09.630: INFO: Verifying statefulset ss doesn't scale past 1 for another 918.508774ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7877
Dec 21 13:03:10.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7877 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 13:03:11.295: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 21 13:03:11.295: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 21 13:03:11.295: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 21 13:03:11.299: INFO: Found 2 stateful pods, waiting for 3
Dec 21 13:03:21.316: INFO: Found 2 stateful pods, waiting for 3
Dec 21 13:03:31.309: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 13:03:31.309: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 13:03:31.309: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Confirming that stateful set scale down will halt with unhealthy stateful pod
Dec 21 13:03:31.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7877 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 21 13:03:31.903: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 21 13:03:31.903: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 21 13:03:31.903: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 21 13:03:31.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7877 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 21 13:03:32.472: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 21 13:03:32.472: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 21 13:03:32.472: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 21 13:03:32.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7877 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 21 13:03:33.109: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 21 13:03:33.109: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 21 13:03:33.109: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 21 13:03:33.109: INFO: Waiting for statefulset status.replicas updated to 0
Dec 21 13:03:33.140: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Dec 21 13:03:43.157: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 21 13:03:43.157: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 21 13:03:43.157: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 21 13:03:43.190: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999696s
Dec 21 13:03:44.196: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993869954s
Dec 21 13:03:45.209: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.986879701s
Dec 21 13:03:46.226: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.974144277s
Dec 21 13:03:47.233: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.957367893s
Dec 21 13:03:48.244: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.950223607s
Dec 21 13:03:49.889: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.938893741s
Dec 21 13:03:50.919: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.293972419s
Dec 21 13:03:51.932: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.264499435s
Dec 21 13:03:52.944: INFO: Verifying statefulset ss doesn't scale past 3 for another 251.54611ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-7877
Dec 21 13:03:53.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7877 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 13:03:54.742: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 21 13:03:54.742: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 21 13:03:54.742: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 21 13:03:54.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7877 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 13:03:55.212: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 21 13:03:55.213: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 21 13:03:55.213: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 21 13:03:55.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7877 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 13:03:55.685: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 21 13:03:55.685: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 21 13:03:55.685: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 21 13:03:55.685: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 21 13:04:25.717: INFO: Deleting all statefulset in ns statefulset-7877
Dec 21 13:04:25.723: INFO: Scaling statefulset ss to 0
Dec 21 13:04:25.741: INFO: Waiting for statefulset status.replicas updated to 0
Dec 21 13:04:25.744: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:04:25.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7877" for this suite.
Dec 21 13:04:33.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:04:33.985: INFO: namespace statefulset-7877 deletion completed in 8.156771842s

• [SLOW TEST:124.556 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
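Note: the repeated mv commands above are how this test toggles readiness: the pods' httpGet probe serves /index.html, so moving the file out of the nginx webroot makes the probe fail and the pod go unready, and OrderedReady pod management then halts further scaling. A rough replay by hand, assuming a fresh StatefulSet ss with the same probe (the foo=bar label comes from the test's own selector):

# sabotage readiness on ss-0, then ask for more replicas
kubectl exec ss-0 -- /bin/sh -c 'mv /usr/share/nginx/html/index.html /tmp/ || true'
kubectl scale statefulset ss --replicas=3
kubectl get pods -l foo=bar        # expect ss-0 alone, Running but not Ready
# restore the probe; ss-1 and ss-2 should now come up strictly in ordinal order
kubectl exec ss-0 -- /bin/sh -c 'mv /tmp/index.html /usr/share/nginx/html/ || true'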
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:04:33.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 21 13:04:34.095: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Dec 21 13:04:34.184: INFO: Number of nodes with available pods: 0
Dec 21 13:04:34.184: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Dec 21 13:04:34.276: INFO: Number of nodes with available pods: 0
Dec 21 13:04:34.276: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:04:35.286: INFO: Number of nodes with available pods: 0
Dec 21 13:04:35.286: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:04:36.287: INFO: Number of nodes with available pods: 0
Dec 21 13:04:36.287: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:04:37.315: INFO: Number of nodes with available pods: 0
Dec 21 13:04:37.315: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:04:38.283: INFO: Number of nodes with available pods: 0
Dec 21 13:04:38.283: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:04:39.461: INFO: Number of nodes with available pods: 0
Dec 21 13:04:39.461: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:04:40.332: INFO: Number of nodes with available pods: 0
Dec 21 13:04:40.332: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:04:41.829: INFO: Number of nodes with available pods: 0
Dec 21 13:04:41.829: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:04:42.286: INFO: Number of nodes with available pods: 0
Dec 21 13:04:42.286: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:04:43.282: INFO: Number of nodes with available pods: 1
Dec 21 13:04:43.282: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Dec 21 13:04:43.434: INFO: Number of nodes with available pods: 1
Dec 21 13:04:43.434: INFO: Number of running nodes: 0, number of available pods: 1
Dec 21 13:04:44.442: INFO: Number of nodes with available pods: 0
Dec 21 13:04:44.442: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Dec 21 13:04:44.608: INFO: Number of nodes with available pods: 0
Dec 21 13:04:44.608: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:04:45.615: INFO: Number of nodes with available pods: 0
Dec 21 13:04:45.615: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:04:46.622: INFO: Number of nodes with available pods: 0
Dec 21 13:04:46.622: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:04:47.619: INFO: Number of nodes with available pods: 0
Dec 21 13:04:47.619: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:04:48.617: INFO: Number of nodes with available pods: 0
Dec 21 13:04:48.617: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:04:49.617: INFO: Number of nodes with available pods: 0
Dec 21 13:04:49.617: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:04:50.626: INFO: Number of nodes with available pods: 0
Dec 21 13:04:50.627: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:04:51.617: INFO: Number of nodes with available pods: 0
Dec 21 13:04:51.617: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:04:52.620: INFO: Number of nodes with available pods: 0
Dec 21 13:04:52.620: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:04:53.618: INFO: Number of nodes with available pods: 0
Dec 21 13:04:53.618: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:04:54.619: INFO: Number of nodes with available pods: 0
Dec 21 13:04:54.619: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:04:55.619: INFO: Number of nodes with available pods: 0
Dec 21 13:04:55.619: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:04:56.691: INFO: Number of nodes with available pods: 0
Dec 21 13:04:56.691: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:04:57.618: INFO: Number of nodes with available pods: 0
Dec 21 13:04:57.618: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:04:58.628: INFO: Number of nodes with available pods: 0
Dec 21 13:04:58.628: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:04:59.622: INFO: Number of nodes with available pods: 0
Dec 21 13:04:59.622: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:05:00.617: INFO: Number of nodes with available pods: 0
Dec 21 13:05:00.617: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:05:01.615: INFO: Number of nodes with available pods: 0
Dec 21 13:05:01.615: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:05:03.244: INFO: Number of nodes with available pods: 0
Dec 21 13:05:03.244: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:05:03.621: INFO: Number of nodes with available pods: 0
Dec 21 13:05:03.621: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:05:04.618: INFO: Number of nodes with available pods: 0
Dec 21 13:05:04.618: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:05:05.627: INFO: Number of nodes with available pods: 0
Dec 21 13:05:05.627: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:05:06.629: INFO: Number of nodes with available pods: 0
Dec 21 13:05:06.629: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:05:07.622: INFO: Number of nodes with available pods: 1
Dec 21 13:05:07.622: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4340, will wait for the garbage collector to delete the pods
Dec 21 13:05:07.700: INFO: Deleting DaemonSet.extensions daemon-set took: 9.297304ms
Dec 21 13:05:08.001: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.368012ms
Dec 21 13:05:15.657: INFO: Number of nodes with available pods: 0
Dec 21 13:05:15.657: INFO: Number of running nodes: 0, number of available pods: 0
Dec 21 13:05:15.664: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4340/daemonsets","resourceVersion":"17511363"},"items":null}

Dec 21 13:05:15.668: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4340/pods","resourceVersion":"17511363"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:05:15.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4340" for this suite.
Dec 21 13:05:21.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:05:21.995: INFO: namespace daemonsets-4340 deletion completed in 6.244364738s

• [SLOW TEST:48.009 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
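Note: the blue/green steps above are ordinary nodeSelector scheduling: the DaemonSet's pod template selects on a node label, so relabeling the node launches or evicts the daemon pod. A sketch, with color standing in for the label key (the log only shows the values blue and green):

kubectl label node iruya-node color=blue                 # daemon pod is launched on the node
kubectl label node iruya-node color=green --overwrite    # selector no longer matches; pod is evicted
# retarget the DaemonSet at green and switch to RollingUpdate, as the test does
kubectl patch ds daemon-set --type merge \
  -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"},"template":{"spec":{"nodeSelector":{"color":"green"}}}}}'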
SS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:05:21.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 21 13:05:22.110: INFO: Waiting up to 5m0s for pod "downward-api-a422bab5-a0a6-415a-aa9f-5b83bbd23a84" in namespace "downward-api-724" to be "success or failure"
Dec 21 13:05:22.136: INFO: Pod "downward-api-a422bab5-a0a6-415a-aa9f-5b83bbd23a84": Phase="Pending", Reason="", readiness=false. Elapsed: 25.868544ms
Dec 21 13:05:24.147: INFO: Pod "downward-api-a422bab5-a0a6-415a-aa9f-5b83bbd23a84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036539371s
Dec 21 13:05:26.157: INFO: Pod "downward-api-a422bab5-a0a6-415a-aa9f-5b83bbd23a84": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046545633s
Dec 21 13:05:28.162: INFO: Pod "downward-api-a422bab5-a0a6-415a-aa9f-5b83bbd23a84": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051799068s
Dec 21 13:05:30.168: INFO: Pod "downward-api-a422bab5-a0a6-415a-aa9f-5b83bbd23a84": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057606967s
Dec 21 13:05:32.206: INFO: Pod "downward-api-a422bab5-a0a6-415a-aa9f-5b83bbd23a84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.096512497s
STEP: Saw pod success
Dec 21 13:05:32.207: INFO: Pod "downward-api-a422bab5-a0a6-415a-aa9f-5b83bbd23a84" satisfied condition "success or failure"
Dec 21 13:05:32.213: INFO: Trying to get logs from node iruya-node pod downward-api-a422bab5-a0a6-415a-aa9f-5b83bbd23a84 container dapi-container: 
STEP: delete the pod
Dec 21 13:05:32.364: INFO: Waiting for pod downward-api-a422bab5-a0a6-415a-aa9f-5b83bbd23a84 to disappear
Dec 21 13:05:32.375: INFO: Pod downward-api-a422bab5-a0a6-415a-aa9f-5b83bbd23a84 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:05:32.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-724" for this suite.
Dec 21 13:05:38.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:05:38.543: INFO: namespace downward-api-724 deletion completed in 6.161915537s

• [SLOW TEST:16.548 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
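Note: the assertion above is that a resourceFieldRef env var whose container sets no corresponding limit resolves to the node's allocatable value instead of erroring. A minimal pod along the same lines (pod name is a placeholder):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-defaults-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo cpu=$CPU_LIMIT mem=$MEMORY_LIMIT"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF
kubectl logs dapi-defaults-demo   # once Succeeded: node-allocatable cpu and memory, since no limits were set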
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:05:38.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 21 13:05:39.564: INFO: Waiting up to 5m0s for pod "pod-c3bc2bbe-3154-4562-ba21-f7b4b035a751" in namespace "emptydir-220" to be "success or failure"
Dec 21 13:05:39.574: INFO: Pod "pod-c3bc2bbe-3154-4562-ba21-f7b4b035a751": Phase="Pending", Reason="", readiness=false. Elapsed: 9.701785ms
Dec 21 13:05:41.582: INFO: Pod "pod-c3bc2bbe-3154-4562-ba21-f7b4b035a751": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017121388s
Dec 21 13:05:43.610: INFO: Pod "pod-c3bc2bbe-3154-4562-ba21-f7b4b035a751": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045033792s
Dec 21 13:05:45.624: INFO: Pod "pod-c3bc2bbe-3154-4562-ba21-f7b4b035a751": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059246857s
Dec 21 13:05:47.630: INFO: Pod "pod-c3bc2bbe-3154-4562-ba21-f7b4b035a751": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065714779s
Dec 21 13:05:49.637: INFO: Pod "pod-c3bc2bbe-3154-4562-ba21-f7b4b035a751": Phase="Pending", Reason="", readiness=false. Elapsed: 10.072046876s
Dec 21 13:05:51.645: INFO: Pod "pod-c3bc2bbe-3154-4562-ba21-f7b4b035a751": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.080630987s
STEP: Saw pod success
Dec 21 13:05:51.645: INFO: Pod "pod-c3bc2bbe-3154-4562-ba21-f7b4b035a751" satisfied condition "success or failure"
Dec 21 13:05:51.650: INFO: Trying to get logs from node iruya-node pod pod-c3bc2bbe-3154-4562-ba21-f7b4b035a751 container test-container: 
STEP: delete the pod
Dec 21 13:05:51.926: INFO: Waiting for pod pod-c3bc2bbe-3154-4562-ba21-f7b4b035a751 to disappear
Dec 21 13:05:51.938: INFO: Pod pod-c3bc2bbe-3154-4562-ba21-f7b4b035a751 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:05:51.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-220" for this suite.
Dec 21 13:05:58.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:05:58.122: INFO: namespace emptydir-220 deletion completed in 6.178705538s

• [SLOW TEST:19.579 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
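Note: "(root,0777,tmpfs)" decodes as an emptyDir with medium Memory (tmpfs), written as root, with mode 0777 expected on the mount. The observable parts are easy to poke at by hand; a sketch:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep ' /ed ' ; ls -ld /ed"]
    volumeMounts:
    - name: ed
      mountPath: /ed
  volumes:
  - name: ed
    emptyDir:
      medium: Memory
EOF
kubectl logs emptydir-demo   # expect a tmpfs mount and drwxrwxrwx on /ed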
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:05:58.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 21 13:05:58.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-9646'
Dec 21 13:05:58.445: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 21 13:05:58.445: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Dec 21 13:05:58.491: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-n24rz]
Dec 21 13:05:58.491: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-n24rz" in namespace "kubectl-9646" to be "running and ready"
Dec 21 13:05:58.501: INFO: Pod "e2e-test-nginx-rc-n24rz": Phase="Pending", Reason="", readiness=false. Elapsed: 9.661773ms
Dec 21 13:06:00.508: INFO: Pod "e2e-test-nginx-rc-n24rz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017411111s
Dec 21 13:06:02.519: INFO: Pod "e2e-test-nginx-rc-n24rz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027779056s
Dec 21 13:06:04.529: INFO: Pod "e2e-test-nginx-rc-n24rz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037669747s
Dec 21 13:06:06.546: INFO: Pod "e2e-test-nginx-rc-n24rz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054920399s
Dec 21 13:06:08.559: INFO: Pod "e2e-test-nginx-rc-n24rz": Phase="Running", Reason="", readiness=true. Elapsed: 10.068083096s
Dec 21 13:06:08.559: INFO: Pod "e2e-test-nginx-rc-n24rz" satisfied condition "running and ready"
Dec 21 13:06:08.559: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-n24rz]
Dec 21 13:06:08.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-9646'
Dec 21 13:06:08.690: INFO: stderr: ""
Dec 21 13:06:08.690: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Dec 21 13:06:08.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-9646'
Dec 21 13:06:08.786: INFO: stderr: ""
Dec 21 13:06:08.786: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:06:08.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9646" for this suite.
Dec 21 13:06:30.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:06:30.947: INFO: namespace kubectl-9646 deletion completed in 22.15634877s

• [SLOW TEST:32.824 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
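Note: the stderr captured above is the detail worth heeding when replaying this: the run/v1 generator, which produced the ReplicationController here, was already deprecated and has since been removed from kubectl. Rough modern equivalents:

kubectl run e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine                  # bare pod (today's default)
kubectl create deployment e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine    # managed replicas via a Deployment
kubectl logs deployment/e2e-test-nginx                                                  # logs through the controller, as the test does with rc/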
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:06:30.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 21 13:06:42.242: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:06:42.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3215" for this suite.
Dec 21 13:06:48.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:06:48.517: INFO: namespace container-runtime-3215 deletion completed in 6.163768093s

• [SLOW TEST:17.570 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
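Note: the "Expected: &{OK}" line above is the test matching the container's termination message against the file the container wrote. With terminationMessagePolicy FallbackToLogsOnError, logs are consulted only when the file is empty and the container failed; here the pod succeeds, so the file wins. A minimal sketch (pod name is a placeholder):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termmsg-demo
spec:
  restartPolicy: Never
  containers:
  - name: term
    image: busybox
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# after the pod terminates:
kubectl get pod termmsg-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'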
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:06:48.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-30b6c80d-8b06-4a30-bed1-f8bcdcd00cfc
STEP: Creating a pod to test consume secrets
Dec 21 13:06:48.732: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9e4c4f1c-7ffb-486d-9b5f-0479a67beb89" in namespace "projected-5017" to be "success or failure"
Dec 21 13:06:48.739: INFO: Pod "pod-projected-secrets-9e4c4f1c-7ffb-486d-9b5f-0479a67beb89": Phase="Pending", Reason="", readiness=false. Elapsed: 6.754717ms
Dec 21 13:06:50.754: INFO: Pod "pod-projected-secrets-9e4c4f1c-7ffb-486d-9b5f-0479a67beb89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021268515s
Dec 21 13:06:52.760: INFO: Pod "pod-projected-secrets-9e4c4f1c-7ffb-486d-9b5f-0479a67beb89": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027459s
Dec 21 13:06:54.776: INFO: Pod "pod-projected-secrets-9e4c4f1c-7ffb-486d-9b5f-0479a67beb89": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043392495s
Dec 21 13:06:56.784: INFO: Pod "pod-projected-secrets-9e4c4f1c-7ffb-486d-9b5f-0479a67beb89": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052114271s
Dec 21 13:06:58.800: INFO: Pod "pod-projected-secrets-9e4c4f1c-7ffb-486d-9b5f-0479a67beb89": Phase="Pending", Reason="", readiness=false. Elapsed: 10.067451632s
Dec 21 13:07:00.810: INFO: Pod "pod-projected-secrets-9e4c4f1c-7ffb-486d-9b5f-0479a67beb89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.077421684s
STEP: Saw pod success
Dec 21 13:07:00.810: INFO: Pod "pod-projected-secrets-9e4c4f1c-7ffb-486d-9b5f-0479a67beb89" satisfied condition "success or failure"
Dec 21 13:07:00.813: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-9e4c4f1c-7ffb-486d-9b5f-0479a67beb89 container projected-secret-volume-test: 
STEP: delete the pod
Dec 21 13:07:00.874: INFO: Waiting for pod pod-projected-secrets-9e4c4f1c-7ffb-486d-9b5f-0479a67beb89 to disappear
Dec 21 13:07:00.893: INFO: Pod pod-projected-secrets-9e4c4f1c-7ffb-486d-9b5f-0479a67beb89 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:07:00.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5017" for this suite.
Dec 21 13:07:06.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:07:07.100: INFO: namespace projected-5017 deletion completed in 6.130252253s

• [SLOW TEST:18.583 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
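Note: the secret consumption above can be reproduced with a throwaway secret and a secret-only projected volume (names and keys below are placeholders):

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: secret-vol
    projected:
      sources:
      - secret:
          name: demo-secret
EOF
kubectl logs projected-secret-demo   # prints value-1 once the pod has Succeeded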
SSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:07:07.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Dec 21 13:07:35.381: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5423 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 21 13:07:35.381: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 13:07:35.799: INFO: Exec stderr: ""
Dec 21 13:07:35.799: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5423 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 21 13:07:35.799: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 13:07:36.128: INFO: Exec stderr: ""
Dec 21 13:07:36.128: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5423 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 21 13:07:36.128: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 13:07:36.546: INFO: Exec stderr: ""
Dec 21 13:07:36.547: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5423 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 21 13:07:36.547: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 13:07:37.044: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Dec 21 13:07:37.044: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5423 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 21 13:07:37.044: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 13:07:37.335: INFO: Exec stderr: ""
Dec 21 13:07:37.335: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5423 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 21 13:07:37.335: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 13:07:37.547: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Dec 21 13:07:37.547: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5423 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 21 13:07:37.547: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 13:07:37.809: INFO: Exec stderr: ""
Dec 21 13:07:37.809: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5423 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 21 13:07:37.809: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 13:07:38.217: INFO: Exec stderr: ""
Dec 21 13:07:38.217: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5423 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 21 13:07:38.217: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 13:07:38.578: INFO: Exec stderr: ""
Dec 21 13:07:38.579: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5423 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 21 13:07:38.579: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 13:07:38.825: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:07:38.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-5423" for this suite.
Dec 21 13:08:26.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:08:26.974: INFO: namespace e2e-kubelet-etc-hosts-5423 deletion completed in 48.14004079s

• [SLOW TEST:79.874 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
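Note: the three verifications above hinge on one kubelet behavior: for hostNetwork=false pods the kubelet generates /etc/hosts (the file opens with a "# Kubernetes-managed hosts file" banner), while hostNetwork=true pods, and containers that mount something over /etc/hosts themselves, keep the unmanaged copy. Quick spot checks against the test's pods, run in the test namespace while it still exists:

kubectl exec test-pod -c busybox-1 -- head -1 /etc/hosts                 # expect the kubelet banner
kubectl exec test-pod -c busybox-3 -- head -1 /etc/hosts                 # explicit /etc/hosts mount: no banner
kubectl exec test-host-network-pod -c busybox-1 -- head -1 /etc/hosts    # node's own file: no banner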
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:08:26.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 21 13:08:27.240: INFO: Create a RollingUpdate DaemonSet
Dec 21 13:08:27.246: INFO: Check that daemon pods launch on every node of the cluster
Dec 21 13:08:27.255: INFO: Number of nodes with available pods: 0
Dec 21 13:08:27.255: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:08:28.275: INFO: Number of nodes with available pods: 0
Dec 21 13:08:28.275: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:08:29.826: INFO: Number of nodes with available pods: 0
Dec 21 13:08:29.826: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:08:30.271: INFO: Number of nodes with available pods: 0
Dec 21 13:08:30.271: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:08:31.303: INFO: Number of nodes with available pods: 0
Dec 21 13:08:31.304: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:08:32.284: INFO: Number of nodes with available pods: 0
Dec 21 13:08:32.284: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:08:35.415: INFO: Number of nodes with available pods: 0
Dec 21 13:08:35.415: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:08:36.271: INFO: Number of nodes with available pods: 0
Dec 21 13:08:36.271: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:08:37.535: INFO: Number of nodes with available pods: 0
Dec 21 13:08:37.535: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:08:38.265: INFO: Number of nodes with available pods: 0
Dec 21 13:08:38.265: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:08:39.269: INFO: Number of nodes with available pods: 1
Dec 21 13:08:39.269: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 21 13:08:40.270: INFO: Number of nodes with available pods: 2
Dec 21 13:08:40.270: INFO: Number of running nodes: 2, number of available pods: 2
Dec 21 13:08:40.270: INFO: Update the DaemonSet to trigger a rollout
Dec 21 13:08:40.282: INFO: Updating DaemonSet daemon-set
Dec 21 13:08:47.315: INFO: Roll back the DaemonSet before rollout is complete
Dec 21 13:08:47.323: INFO: Updating DaemonSet daemon-set
Dec 21 13:08:47.323: INFO: Make sure DaemonSet rollback is complete
Dec 21 13:08:47.336: INFO: Wrong image for pod: daemon-set-crpjh. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 21 13:08:47.336: INFO: Pod daemon-set-crpjh is not available
Dec 21 13:08:48.452: INFO: Wrong image for pod: daemon-set-crpjh. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 21 13:08:48.452: INFO: Pod daemon-set-crpjh is not available
Dec 21 13:08:49.455: INFO: Wrong image for pod: daemon-set-crpjh. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 21 13:08:49.455: INFO: Pod daemon-set-crpjh is not available
Dec 21 13:08:50.458: INFO: Wrong image for pod: daemon-set-crpjh. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 21 13:08:50.458: INFO: Pod daemon-set-crpjh is not available
Dec 21 13:08:51.450: INFO: Wrong image for pod: daemon-set-crpjh. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 21 13:08:51.450: INFO: Pod daemon-set-crpjh is not available
Dec 21 13:08:52.499: INFO: Pod daemon-set-4s552 is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5215, will wait for the garbage collector to delete the pods
Dec 21 13:08:52.581: INFO: Deleting DaemonSet.extensions daemon-set took: 12.314663ms
Dec 21 13:08:52.881: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.463291ms
Dec 21 13:09:00.788: INFO: Number of nodes with available pods: 0
Dec 21 13:09:00.788: INFO: Number of running nodes: 0, number of available pods: 0
Dec 21 13:09:00.794: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5215/daemonsets","resourceVersion":"17511933"},"items":null}

Dec 21 13:09:00.798: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5215/pods","resourceVersion":"17511933"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:09:00.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5215" for this suite.
Dec 21 13:09:06.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:09:06.997: INFO: namespace daemonsets-5215 deletion completed in 6.124863394s

• [SLOW TEST:40.023 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
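Note: the sequence above is an in-flight rollback: the image is flipped to an unpullable tag, the RollingUpdate stalls on the first replaced pod, and the undo must restore the old spec without restarting pods the rollout never touched. Driven by hand it looks roughly like this (the container name matching the DaemonSet's is an assumption):

kubectl set image ds/daemon-set daemon-set=foo:non-existent     # trigger a rollout that cannot pull
kubectl rollout status ds/daemon-set --timeout=30s || true      # stalls: first replaced pod is ImagePullBackOff
kubectl rollout undo ds/daemon-set                              # roll back before the rollout completes
kubectl rollout status ds/daemon-set                            # converges; untouched pods keep running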
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:09:06.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 21 13:09:07.084: INFO: Waiting up to 5m0s for pod "downwardapi-volume-41cb8854-672f-4da7-ad91-9dbfbaf4325d" in namespace "downward-api-6329" to be "success or failure"
Dec 21 13:09:07.171: INFO: Pod "downwardapi-volume-41cb8854-672f-4da7-ad91-9dbfbaf4325d": Phase="Pending", Reason="", readiness=false. Elapsed: 86.800863ms
Dec 21 13:09:09.179: INFO: Pod "downwardapi-volume-41cb8854-672f-4da7-ad91-9dbfbaf4325d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094801045s
Dec 21 13:09:11.192: INFO: Pod "downwardapi-volume-41cb8854-672f-4da7-ad91-9dbfbaf4325d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107369079s
Dec 21 13:09:13.198: INFO: Pod "downwardapi-volume-41cb8854-672f-4da7-ad91-9dbfbaf4325d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113603296s
Dec 21 13:09:15.208: INFO: Pod "downwardapi-volume-41cb8854-672f-4da7-ad91-9dbfbaf4325d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.123740016s
STEP: Saw pod success
Dec 21 13:09:15.208: INFO: Pod "downwardapi-volume-41cb8854-672f-4da7-ad91-9dbfbaf4325d" satisfied condition "success or failure"
Dec 21 13:09:15.213: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-41cb8854-672f-4da7-ad91-9dbfbaf4325d container client-container: 
STEP: delete the pod
Dec 21 13:09:15.281: INFO: Waiting for pod downwardapi-volume-41cb8854-672f-4da7-ad91-9dbfbaf4325d to disappear
Dec 21 13:09:15.284: INFO: Pod downwardapi-volume-41cb8854-672f-4da7-ad91-9dbfbaf4325d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:09:15.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6329" for this suite.
Dec 21 13:09:21.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:09:21.557: INFO: namespace downward-api-6329 deletion completed in 6.194230678s

• [SLOW TEST:14.560 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
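Note: this test and the next are the volume-shaped twins of the earlier env-var test: a downwardAPI volume file backed by a resourceFieldRef also falls back to node allocatable when no limit is set. Unlike env vars, volume items must name the container. One sketch covering both the cpu case here and the memory case that follows (pod name is a placeholder):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF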
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:09:21.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 21 13:09:21.686: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c96eb2c5-3219-4b39-a765-29e6da90929f" in namespace "downward-api-6842" to be "success or failure"
Dec 21 13:09:21.732: INFO: Pod "downwardapi-volume-c96eb2c5-3219-4b39-a765-29e6da90929f": Phase="Pending", Reason="", readiness=false. Elapsed: 46.310734ms
Dec 21 13:09:23.741: INFO: Pod "downwardapi-volume-c96eb2c5-3219-4b39-a765-29e6da90929f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055257421s
Dec 21 13:09:25.749: INFO: Pod "downwardapi-volume-c96eb2c5-3219-4b39-a765-29e6da90929f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063201706s
Dec 21 13:09:27.762: INFO: Pod "downwardapi-volume-c96eb2c5-3219-4b39-a765-29e6da90929f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076345297s
Dec 21 13:09:29.812: INFO: Pod "downwardapi-volume-c96eb2c5-3219-4b39-a765-29e6da90929f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.126117966s
Dec 21 13:09:31.836: INFO: Pod "downwardapi-volume-c96eb2c5-3219-4b39-a765-29e6da90929f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.150516587s
Dec 21 13:09:33.843: INFO: Pod "downwardapi-volume-c96eb2c5-3219-4b39-a765-29e6da90929f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.15689265s
STEP: Saw pod success
Dec 21 13:09:33.843: INFO: Pod "downwardapi-volume-c96eb2c5-3219-4b39-a765-29e6da90929f" satisfied condition "success or failure"
Dec 21 13:09:33.846: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c96eb2c5-3219-4b39-a765-29e6da90929f container client-container: 
STEP: delete the pod
Dec 21 13:09:34.189: INFO: Waiting for pod downwardapi-volume-c96eb2c5-3219-4b39-a765-29e6da90929f to disappear
Dec 21 13:09:34.201: INFO: Pod downwardapi-volume-c96eb2c5-3219-4b39-a765-29e6da90929f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:09:34.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6842" for this suite.
Dec 21 13:09:40.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:09:40.366: INFO: namespace downward-api-6842 deletion completed in 6.15427629s

• [SLOW TEST:18.808 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:09:40.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Dec 21 13:09:40.541: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9540,SelfLink:/api/v1/namespaces/watch-9540/configmaps/e2e-watch-test-configmap-a,UID:e89b372e-d66f-48e1-9900-ad7fec501ae6,ResourceVersion:17512061,Generation:0,CreationTimestamp:2019-12-21 13:09:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 21 13:09:40.542: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9540,SelfLink:/api/v1/namespaces/watch-9540/configmaps/e2e-watch-test-configmap-a,UID:e89b372e-d66f-48e1-9900-ad7fec501ae6,ResourceVersion:17512061,Generation:0,CreationTimestamp:2019-12-21 13:09:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Dec 21 13:09:50.568: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9540,SelfLink:/api/v1/namespaces/watch-9540/configmaps/e2e-watch-test-configmap-a,UID:e89b372e-d66f-48e1-9900-ad7fec501ae6,ResourceVersion:17512075,Generation:0,CreationTimestamp:2019-12-21 13:09:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 21 13:09:50.568: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9540,SelfLink:/api/v1/namespaces/watch-9540/configmaps/e2e-watch-test-configmap-a,UID:e89b372e-d66f-48e1-9900-ad7fec501ae6,ResourceVersion:17512075,Generation:0,CreationTimestamp:2019-12-21 13:09:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Dec 21 13:10:00.761: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9540,SelfLink:/api/v1/namespaces/watch-9540/configmaps/e2e-watch-test-configmap-a,UID:e89b372e-d66f-48e1-9900-ad7fec501ae6,ResourceVersion:17512089,Generation:0,CreationTimestamp:2019-12-21 13:09:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 21 13:10:00.761: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9540,SelfLink:/api/v1/namespaces/watch-9540/configmaps/e2e-watch-test-configmap-a,UID:e89b372e-d66f-48e1-9900-ad7fec501ae6,ResourceVersion:17512089,Generation:0,CreationTimestamp:2019-12-21 13:09:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Dec 21 13:10:10.772: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9540,SelfLink:/api/v1/namespaces/watch-9540/configmaps/e2e-watch-test-configmap-a,UID:e89b372e-d66f-48e1-9900-ad7fec501ae6,ResourceVersion:17512103,Generation:0,CreationTimestamp:2019-12-21 13:09:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 21 13:10:10.773: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9540,SelfLink:/api/v1/namespaces/watch-9540/configmaps/e2e-watch-test-configmap-a,UID:e89b372e-d66f-48e1-9900-ad7fec501ae6,ResourceVersion:17512103,Generation:0,CreationTimestamp:2019-12-21 13:09:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Dec 21 13:10:20.801: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9540,SelfLink:/api/v1/namespaces/watch-9540/configmaps/e2e-watch-test-configmap-b,UID:a2ad3d81-7398-4e56-ba5b-55c8d3f81d33,ResourceVersion:17512117,Generation:0,CreationTimestamp:2019-12-21 13:10:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 21 13:10:20.801: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9540,SelfLink:/api/v1/namespaces/watch-9540/configmaps/e2e-watch-test-configmap-b,UID:a2ad3d81-7398-4e56-ba5b-55c8d3f81d33,ResourceVersion:17512117,Generation:0,CreationTimestamp:2019-12-21 13:10:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Dec 21 13:10:30.813: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9540,SelfLink:/api/v1/namespaces/watch-9540/configmaps/e2e-watch-test-configmap-b,UID:a2ad3d81-7398-4e56-ba5b-55c8d3f81d33,ResourceVersion:17512132,Generation:0,CreationTimestamp:2019-12-21 13:10:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 21 13:10:30.813: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9540,SelfLink:/api/v1/namespaces/watch-9540/configmaps/e2e-watch-test-configmap-b,UID:a2ad3d81-7398-4e56-ba5b-55c8d3f81d33,ResourceVersion:17512132,Generation:0,CreationTimestamp:2019-12-21 13:10:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:10:40.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9540" for this suite.
Dec 21 13:10:46.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:10:47.093: INFO: namespace watch-9540 deletion completed in 6.219133497s

• [SLOW TEST:66.727 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
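This spec drives three concurrent watch streams (label A, label B, and A-or-B) and asserts each ADDED/MODIFIED/DELETED event reaches exactly the right watchers; the duplicated "Got :" lines above are expected, since both the single-label watcher and the A-or-B watcher observe every event on configmap A. The same streams can be approximated from the CLI; a sketch using the label values and namespace from the log (everything else assumed):

    # watch configmaps carrying label A only
    kubectl get configmaps -n watch-9540 \
      -l watch-this-configmap=multiple-watchers-A --watch
    # watch configmaps carrying label A or label B (set-based selector)
    kubectl get configmaps -n watch-9540 \
      -l 'watch-this-configmap in (multiple-watchers-A,multiple-watchers-B)' --watch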
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:10:47.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 21 13:11:07.310: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 21 13:11:07.315: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 21 13:11:09.315: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 21 13:11:09.340: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 21 13:11:11.315: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 21 13:11:11.320: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 21 13:11:13.315: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 21 13:11:13.328: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 21 13:11:15.315: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 21 13:11:15.323: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 21 13:11:17.315: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 21 13:11:17.320: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 21 13:11:19.315: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 21 13:11:19.325: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 21 13:11:21.315: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 21 13:11:21.329: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 21 13:11:23.315: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 21 13:11:23.327: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 21 13:11:25.315: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 21 13:11:25.326: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 21 13:11:27.315: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 21 13:11:27.332: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 21 13:11:29.315: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 21 13:11:29.421: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 21 13:11:31.315: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 21 13:11:31.323: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 21 13:11:33.315: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 21 13:11:33.327: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 21 13:11:35.315: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 21 13:11:35.327: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 21 13:11:37.315: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 21 13:11:37.326: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:11:37.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3185" for this suite.
Dec 21 13:11:59.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:11:59.643: INFO: namespace container-lifecycle-hook-3185 deletion completed in 22.24332798s

• [SLOW TEST:72.550 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
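Deletion is not immediate above because the kubelet runs the preStop exec hook inside the container before killing it, and the test then queries its HTTP handler pod to confirm the hook fired. A minimal sketch of the manifest shape involved (the real test reports to the handler pod; this version just writes a marker file, and all names are assumed):

    kubectl create -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: prestop-exec-demo    # hypothetical name
    spec:
      terminationGracePeriodSeconds: 30
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
        lifecycle:
          preStop:
            exec:
              # runs inside the container before SIGTERM is delivered
              command: ["sh", "-c", "echo prestop-fired > /tmp/prestop"]
    EOF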
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:11:59.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-d1bc210f-91df-45c4-b91a-71b02921f35b
STEP: Creating a pod to test consume configMaps
Dec 21 13:11:59.990: INFO: Waiting up to 5m0s for pod "pod-configmaps-fb18a6d5-18c7-4c7f-b8a5-b13a66dd05b1" in namespace "configmap-9896" to be "success or failure"
Dec 21 13:12:00.006: INFO: Pod "pod-configmaps-fb18a6d5-18c7-4c7f-b8a5-b13a66dd05b1": Phase="Pending", Reason="", readiness=false. Elapsed: 16.296188ms
Dec 21 13:12:02.014: INFO: Pod "pod-configmaps-fb18a6d5-18c7-4c7f-b8a5-b13a66dd05b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024134402s
Dec 21 13:12:04.026: INFO: Pod "pod-configmaps-fb18a6d5-18c7-4c7f-b8a5-b13a66dd05b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036571032s
Dec 21 13:12:06.035: INFO: Pod "pod-configmaps-fb18a6d5-18c7-4c7f-b8a5-b13a66dd05b1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045540524s
Dec 21 13:12:08.042: INFO: Pod "pod-configmaps-fb18a6d5-18c7-4c7f-b8a5-b13a66dd05b1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052767396s
Dec 21 13:12:10.049: INFO: Pod "pod-configmaps-fb18a6d5-18c7-4c7f-b8a5-b13a66dd05b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059152883s
STEP: Saw pod success
Dec 21 13:12:10.049: INFO: Pod "pod-configmaps-fb18a6d5-18c7-4c7f-b8a5-b13a66dd05b1" satisfied condition "success or failure"
Dec 21 13:12:10.051: INFO: Trying to get logs from node iruya-node pod pod-configmaps-fb18a6d5-18c7-4c7f-b8a5-b13a66dd05b1 container configmap-volume-test: 
STEP: delete the pod
Dec 21 13:12:10.113: INFO: Waiting for pod pod-configmaps-fb18a6d5-18c7-4c7f-b8a5-b13a66dd05b1 to disappear
Dec 21 13:12:10.234: INFO: Pod pod-configmaps-fb18a6d5-18c7-4c7f-b8a5-b13a66dd05b1 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:12:10.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9896" for this suite.
Dec 21 13:12:16.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:12:16.372: INFO: namespace configmap-9896 deletion completed in 6.128313118s

• [SLOW TEST:16.728 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
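"With mappings" means the configMap keys are remapped to custom file paths via items, rather than mounted under their own key names. A minimal sketch of the shape this spec exercises (key, path, and names all hypothetical):

    kubectl create -f - <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: configmap-test-volume-map
    data:
      data-1: value-1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-configmaps-demo
    spec:
      restartPolicy: Never
      containers:
      - name: configmap-volume-test
        image: busybox
        # the key data-1 is visible only under its remapped path
        command: ["cat", "/etc/configmap-volume/path/to/data-2"]
        volumeMounts:
        - name: configmap-volume
          mountPath: /etc/configmap-volume
      volumes:
      - name: configmap-volume
        configMap:
          name: configmap-test-volume-map
          items:
          - key: data-1
            path: path/to/data-2
    EOF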
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:12:16.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 21 13:12:16.490: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:12:38.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7531" for this suite.
Dec 21 13:12:44.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:12:44.672: INFO: namespace init-container-7531 deletion completed in 6.320679646s

• [SLOW TEST:28.300 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
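On a RestartNever pod the init containers must each run to completion, in order, before the app container starts; the single INFO line above ("PodSpec: initContainers in spec.initContainers") is the framework echoing the spec it submitted. A minimal sketch of such a pod (images and names assumed):

    kubectl create -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-demo
    spec:
      restartPolicy: Never
      initContainers:
      - name: init1
        image: busybox
        command: ["/bin/true"]    # must exit 0 before init2 starts
      - name: init2
        image: busybox
        command: ["/bin/true"]    # must exit 0 before the app container starts
      containers:
      - name: run1
        image: busybox
        command: ["/bin/true"]
    EOF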
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:12:44.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Dec 21 13:12:52.568: INFO: 0 pods remaining
Dec 21 13:12:52.568: INFO: 0 pods have nil DeletionTimestamp
Dec 21 13:12:52.568: INFO: 
STEP: Gathering metrics
W1221 13:12:53.419210       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 21 13:12:53.419: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:12:53.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6352" for this suite.
Dec 21 13:13:03.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:13:03.639: INFO: namespace gc-6352 deletion completed in 10.213271324s

• [SLOW TEST:18.966 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
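The deleteOptions in question is propagationPolicy=Foreground: the rc receives a deletionTimestamp plus a foregroundDeletion finalizer, and only disappears once the garbage collector has removed all its pods, which is what the "0 pods remaining" lines above confirm. A sketch of issuing such a delete against the raw API (rc name assumed; requires a running kubectl proxy):

    kubectl proxy --port=8001 &
    curl -X DELETE \
      -H "Content-Type: application/json" \
      -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
      http://127.0.0.1:8001/api/v1/namespaces/gc-6352/replicationcontrollers/simpletest-rc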
SSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:13:03.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 21 13:13:03.842: INFO: Number of nodes with available pods: 0
Dec 21 13:13:03.842: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:13:05.678: INFO: Number of nodes with available pods: 0
Dec 21 13:13:05.678: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:13:06.289: INFO: Number of nodes with available pods: 0
Dec 21 13:13:06.289: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:13:08.297: INFO: Number of nodes with available pods: 0
Dec 21 13:13:08.297: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:13:08.879: INFO: Number of nodes with available pods: 0
Dec 21 13:13:08.880: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:13:09.928: INFO: Number of nodes with available pods: 0
Dec 21 13:13:09.928: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:13:11.294: INFO: Number of nodes with available pods: 0
Dec 21 13:13:11.294: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:13:12.599: INFO: Number of nodes with available pods: 0
Dec 21 13:13:12.599: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:13:13.167: INFO: Number of nodes with available pods: 0
Dec 21 13:13:13.167: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:13:13.937: INFO: Number of nodes with available pods: 0
Dec 21 13:13:13.937: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:13:14.874: INFO: Number of nodes with available pods: 0
Dec 21 13:13:14.874: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:13:15.913: INFO: Number of nodes with available pods: 1
Dec 21 13:13:15.913: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 21 13:13:16.863: INFO: Number of nodes with available pods: 2
Dec 21 13:13:16.863: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Dec 21 13:13:16.934: INFO: Number of nodes with available pods: 1
Dec 21 13:13:16.934: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 21 13:13:17.945: INFO: Number of nodes with available pods: 1
Dec 21 13:13:17.945: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 21 13:13:18.954: INFO: Number of nodes with available pods: 1
Dec 21 13:13:18.954: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 21 13:13:19.956: INFO: Number of nodes with available pods: 1
Dec 21 13:13:19.956: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 21 13:13:21.160: INFO: Number of nodes with available pods: 1
Dec 21 13:13:21.160: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 21 13:13:21.950: INFO: Number of nodes with available pods: 1
Dec 21 13:13:21.950: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 21 13:13:22.949: INFO: Number of nodes with available pods: 1
Dec 21 13:13:22.949: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 21 13:13:23.951: INFO: Number of nodes with available pods: 1
Dec 21 13:13:23.951: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 21 13:13:25.004: INFO: Number of nodes with available pods: 1
Dec 21 13:13:25.004: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 21 13:13:26.079: INFO: Number of nodes with available pods: 1
Dec 21 13:13:26.079: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 21 13:13:26.965: INFO: Number of nodes with available pods: 2
Dec 21 13:13:26.965: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-495, will wait for the garbage collector to delete the pods
Dec 21 13:13:27.052: INFO: Deleting DaemonSet.extensions daemon-set took: 17.255708ms
Dec 21 13:13:27.453: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.462473ms
Dec 21 13:13:37.858: INFO: Number of nodes with available pods: 0
Dec 21 13:13:37.858: INFO: Number of running nodes: 0, number of available pods: 0
Dec 21 13:13:37.961: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-495/daemonsets","resourceVersion":"17512647"},"items":null}

Dec 21 13:13:37.965: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-495/pods","resourceVersion":"17512647"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:13:37.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-495" for this suite.
Dec 21 13:13:44.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:13:44.167: INFO: namespace daemonsets-495 deletion completed in 6.188078961s

• [SLOW TEST:40.529 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
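The "revived" step above works because the DaemonSet controller treats a Failed daemon pod as missing: the test forces one pod's phase to Failed, and the controller deletes it and creates a replacement until every node is covered again. A minimal DaemonSet to observe this against (image assumed, borrowed from the nginx image other specs in this run use):

    kubectl create -f - <<EOF
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-set
    spec:
      selector:
        matchLabels:
          app: daemon-set
      template:
        metadata:
          labels:
            app: daemon-set
        spec:
          containers:
          - name: app
            image: docker.io/library/nginx:1.14-alpine
    EOF
    # watch the controller replace any pod whose phase goes Failed
    kubectl get pods -l app=daemon-set -o wide --watch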
SSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:13:44.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:13:44.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3732" for this suite.
Dec 21 13:14:06.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:14:06.622: INFO: namespace pods-3732 deletion completed in 22.217781871s

• [SLOW TEST:22.454 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
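"Verifying QOS class is set" refers to status.qosClass, which the apiserver derives from the resource spec: equal requests and limits on every container yields Guaranteed, requests below limits yields Burstable, and no requests or limits at all yields BestEffort. A sketch (pod name and values assumed):

    kubectl create -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: qos-demo
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:             # limits == requests => Guaranteed
            cpu: 100m
            memory: 100Mi
    EOF
    kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # prints: Guaranteed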
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:14:06.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Dec 21 13:14:06.794: INFO: Waiting up to 5m0s for pod "var-expansion-77b975f2-99b7-436d-893e-2c7e0e93838c" in namespace "var-expansion-2879" to be "success or failure"
Dec 21 13:14:06.805: INFO: Pod "var-expansion-77b975f2-99b7-436d-893e-2c7e0e93838c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.155335ms
Dec 21 13:14:08.815: INFO: Pod "var-expansion-77b975f2-99b7-436d-893e-2c7e0e93838c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020732214s
Dec 21 13:14:10.834: INFO: Pod "var-expansion-77b975f2-99b7-436d-893e-2c7e0e93838c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039882325s
Dec 21 13:14:12.870: INFO: Pod "var-expansion-77b975f2-99b7-436d-893e-2c7e0e93838c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07552174s
Dec 21 13:14:14.903: INFO: Pod "var-expansion-77b975f2-99b7-436d-893e-2c7e0e93838c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.108340963s
STEP: Saw pod success
Dec 21 13:14:14.903: INFO: Pod "var-expansion-77b975f2-99b7-436d-893e-2c7e0e93838c" satisfied condition "success or failure"
Dec 21 13:14:14.919: INFO: Trying to get logs from node iruya-node pod var-expansion-77b975f2-99b7-436d-893e-2c7e0e93838c container dapi-container: 
STEP: delete the pod
Dec 21 13:14:15.061: INFO: Waiting for pod var-expansion-77b975f2-99b7-436d-893e-2c7e0e93838c to disappear
Dec 21 13:14:15.065: INFO: Pod var-expansion-77b975f2-99b7-436d-893e-2c7e0e93838c no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:14:15.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2879" for this suite.
Dec 21 13:14:21.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:14:21.174: INFO: namespace var-expansion-2879 deletion completed in 6.102002358s

• [SLOW TEST:14.551 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
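Env composition relies on $(VAR) expansion: a later entry in env may reference any variable defined earlier in the same list, and the kubelet substitutes the value before starting the container. A minimal sketch of the shape this spec exercises (names and values assumed):

    kubectl create -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: var-expansion-env-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "env"]
        env:
        - name: FOO
          value: foo-value
        - name: COMPOSED           # expands to "prefix-foo-value-suffix"
          value: "prefix-$(FOO)-suffix"
    EOF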
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:14:21.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Dec 21 13:14:21.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3007'
Dec 21 13:14:23.735: INFO: stderr: ""
Dec 21 13:14:23.735: INFO: stdout: "pod/pause created\n"
Dec 21 13:14:23.735: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Dec 21 13:14:23.735: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3007" to be "running and ready"
Dec 21 13:14:23.748: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 13.33709ms
Dec 21 13:14:25.758: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022580541s
Dec 21 13:14:27.774: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039162036s
Dec 21 13:14:30.011: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.275570515s
Dec 21 13:14:32.019: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.284303178s
Dec 21 13:14:34.036: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.300982535s
Dec 21 13:14:34.036: INFO: Pod "pause" satisfied condition "running and ready"
Dec 21 13:14:34.036: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Dec 21 13:14:34.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-3007'
Dec 21 13:14:34.169: INFO: stderr: ""
Dec 21 13:14:34.169: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Dec 21 13:14:34.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3007'
Dec 21 13:14:34.292: INFO: stderr: ""
Dec 21 13:14:34.293: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Dec 21 13:14:34.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-3007'
Dec 21 13:14:34.444: INFO: stderr: ""
Dec 21 13:14:34.444: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Dec 21 13:14:34.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3007'
Dec 21 13:14:34.565: INFO: stderr: ""
Dec 21 13:14:34.565: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Dec 21 13:14:34.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3007'
Dec 21 13:14:34.837: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 21 13:14:34.837: INFO: stdout: "pod \"pause\" force deleted\n"
Dec 21 13:14:34.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-3007'
Dec 21 13:14:34.979: INFO: stderr: "No resources found.\n"
Dec 21 13:14:34.979: INFO: stdout: ""
Dec 21 13:14:34.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-3007 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 21 13:14:35.094: INFO: stderr: ""
Dec 21 13:14:35.094: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:14:35.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3007" for this suite.
Dec 21 13:14:41.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:14:41.862: INFO: namespace kubectl-3007 deletion completed in 6.763235404s

• [SLOW TEST:20.688 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:14:41.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-2d4286bb-8519-46c5-9c23-9991cb3c2da3
STEP: Creating configMap with name cm-test-opt-upd-8080fcc4-6a21-4ca9-acb8-0abc9039eac4
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-2d4286bb-8519-46c5-9c23-9991cb3c2da3
STEP: Updating configmap cm-test-opt-upd-8080fcc4-6a21-4ca9-acb8-0abc9039eac4
STEP: Creating configMap with name cm-test-opt-create-5f265755-03d8-4d15-852c-a11409806b26
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:15:00.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3834" for this suite.
Dec 21 13:15:24.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:15:24.645: INFO: namespace projected-3834 deletion completed in 24.141576733s

• [SLOW TEST:42.782 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
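The "optional" here is the optional: true flag on each projected source: a missing configMap does not block pod start, and the kubelet rewrites the mounted files on its sync loop when a referenced map is created, updated, or deleted, which is the update the test waits to observe above. A sketch of the volume shape (names assumed):

    kubectl create -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-cm-demo
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
        volumeMounts:
        - name: projected-cm
          mountPath: /etc/projected
      volumes:
      - name: projected-cm
        projected:
          sources:
          - configMap:
              name: cm-test-opt-del
              optional: true     # pod starts even if this map is absent
          - configMap:
              name: cm-test-opt-upd
              optional: true
    EOF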
S
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:15:24.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-08f23f5d-641e-48d7-9fe0-16e3caf76b6a
STEP: Creating secret with name s-test-opt-upd-60203cce-27b9-4935-8ac9-3186642fb483
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-08f23f5d-641e-48d7-9fe0-16e3caf76b6a
STEP: Updating secret s-test-opt-upd-60203cce-27b9-4935-8ac9-3186642fb483
STEP: Creating secret with name s-test-opt-create-d6491a1c-d7cb-44a4-9ea9-8b55f894286e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:17:05.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7097" for this suite.
Dec 21 13:17:27.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:17:27.489: INFO: namespace secrets-7097 deletion completed in 22.134621438s

• [SLOW TEST:122.845 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
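The secret variant behaves the same way, just with a plain secret volume source instead of a projected one; a sketch (names assumed):

    kubectl create -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: optional-secret-demo
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
        volumeMounts:
        - name: secret-vol
          mountPath: /etc/secret
      volumes:
      - name: secret-vol
        secret:
          secretName: s-test-opt-del
          optional: true     # tolerate the secret being deleted or not yet created
    EOF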
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:17:27.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-0a5d79dc-87d8-44c2-af19-2d0935869441
STEP: Creating a pod to test consume configMaps
Dec 21 13:17:27.638: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cc9774c7-2ebc-450a-b985-9763ce1f602b" in namespace "projected-6245" to be "success or failure"
Dec 21 13:17:27.648: INFO: Pod "pod-projected-configmaps-cc9774c7-2ebc-450a-b985-9763ce1f602b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.983858ms
Dec 21 13:17:29.661: INFO: Pod "pod-projected-configmaps-cc9774c7-2ebc-450a-b985-9763ce1f602b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023522259s
Dec 21 13:17:31.673: INFO: Pod "pod-projected-configmaps-cc9774c7-2ebc-450a-b985-9763ce1f602b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034710008s
Dec 21 13:17:33.686: INFO: Pod "pod-projected-configmaps-cc9774c7-2ebc-450a-b985-9763ce1f602b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047685684s
Dec 21 13:17:35.693: INFO: Pod "pod-projected-configmaps-cc9774c7-2ebc-450a-b985-9763ce1f602b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055440722s
Dec 21 13:17:37.699: INFO: Pod "pod-projected-configmaps-cc9774c7-2ebc-450a-b985-9763ce1f602b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.06155133s
STEP: Saw pod success
Dec 21 13:17:37.699: INFO: Pod "pod-projected-configmaps-cc9774c7-2ebc-450a-b985-9763ce1f602b" satisfied condition "success or failure"
Dec 21 13:17:37.702: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-cc9774c7-2ebc-450a-b985-9763ce1f602b container projected-configmap-volume-test: 
STEP: delete the pod
Dec 21 13:17:37.856: INFO: Waiting for pod pod-projected-configmaps-cc9774c7-2ebc-450a-b985-9763ce1f602b to disappear
Dec 21 13:17:37.870: INFO: Pod pod-projected-configmaps-cc9774c7-2ebc-450a-b985-9763ce1f602b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:17:37.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6245" for this suite.
Dec 21 13:17:43.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:17:44.152: INFO: namespace projected-6245 deletion completed in 6.25871779s

• [SLOW TEST:16.663 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
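defaultMode sets the permission bits the kubelet applies to every file the projected volume writes; the spec asserts the mounted file carries exactly that mode. A sketch using 0400, i.e. read-only for the owner (all names assumed):

    kubectl create -f - <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: projected-configmap-test-volume
    data:
      data-1: value-1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-mode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: projected-configmap-volume-test
        image: busybox
        command: ["sh", "-c", "ls -l /etc/projected"]   # should show -r--------
        volumeMounts:
        - name: projected-cm
          mountPath: /etc/projected
      volumes:
      - name: projected-cm
        projected:
          defaultMode: 0400
          sources:
          - configMap:
              name: projected-configmap-test-volume
    EOF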
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:17:44.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 21 13:17:44.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3247'
Dec 21 13:17:44.883: INFO: stderr: ""
Dec 21 13:17:44.883: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Dec 21 13:17:44.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-3247'
Dec 21 13:17:56.624: INFO: stderr: ""
Dec 21 13:17:56.624: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:17:56.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3247" for this suite.
Dec 21 13:18:02.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:18:02.762: INFO: namespace kubectl-3247 deletion completed in 6.100689821s

• [SLOW TEST:18.610 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:18:02.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Dec 21 13:18:02.885: INFO: Waiting up to 5m0s for pod "var-expansion-d9227a19-7bb6-4407-8a97-d80bee53ea40" in namespace "var-expansion-7044" to be "success or failure"
Dec 21 13:18:02.919: INFO: Pod "var-expansion-d9227a19-7bb6-4407-8a97-d80bee53ea40": Phase="Pending", Reason="", readiness=false. Elapsed: 33.704784ms
Dec 21 13:18:04.933: INFO: Pod "var-expansion-d9227a19-7bb6-4407-8a97-d80bee53ea40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047359234s
Dec 21 13:18:06.939: INFO: Pod "var-expansion-d9227a19-7bb6-4407-8a97-d80bee53ea40": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054292105s
Dec 21 13:18:08.946: INFO: Pod "var-expansion-d9227a19-7bb6-4407-8a97-d80bee53ea40": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060330653s
Dec 21 13:18:10.954: INFO: Pod "var-expansion-d9227a19-7bb6-4407-8a97-d80bee53ea40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.068822288s
STEP: Saw pod success
Dec 21 13:18:10.954: INFO: Pod "var-expansion-d9227a19-7bb6-4407-8a97-d80bee53ea40" satisfied condition "success or failure"
Dec 21 13:18:10.959: INFO: Trying to get logs from node iruya-node pod var-expansion-d9227a19-7bb6-4407-8a97-d80bee53ea40 container dapi-container: 
STEP: delete the pod
Dec 21 13:18:11.072: INFO: Waiting for pod var-expansion-d9227a19-7bb6-4407-8a97-d80bee53ea40 to disappear
Dec 21 13:18:11.087: INFO: Pod var-expansion-d9227a19-7bb6-4407-8a97-d80bee53ea40 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:18:11.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7044" for this suite.
Dec 21 13:18:17.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:18:17.415: INFO: namespace var-expansion-7044 deletion completed in 6.297671076s

• [SLOW TEST:14.652 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
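Substitution in command and args uses the same $(VAR) syntax: the kubelet expands references to env vars before exec, with no shell involved (a literal $(...) can be kept by escaping it as $$(...)). A minimal sketch (names and values assumed):

    kubectl create -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: var-expansion-cmd-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        env:
        - name: MESSAGE
          value: test-value
        command: ["/bin/echo"]
        args: ["$(MESSAGE)"]    # expanded by the kubelet to "test-value"
    EOF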
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:18:17.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-7771
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Dec 21 13:18:17.570: INFO: Found 0 stateful pods, waiting for 3
Dec 21 13:18:27.580: INFO: Found 1 stateful pods, waiting for 3
Dec 21 13:18:37.581: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 13:18:37.581: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 13:18:37.581: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 21 13:18:47.593: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 13:18:47.593: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 13:18:47.593: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 21 13:18:47.641: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Dec 21 13:18:57.765: INFO: Updating stateful set ss2
Dec 21 13:18:57.936: INFO: Waiting for Pod statefulset-7771/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Dec 21 13:19:08.435: INFO: Found 2 stateful pods, waiting for 3
Dec 21 13:19:18.453: INFO: Found 2 stateful pods, waiting for 3
Dec 21 13:19:28.449: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 13:19:28.449: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 13:19:28.449: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Dec 21 13:19:28.515: INFO: Updating stateful set ss2
Dec 21 13:19:28.534: INFO: Waiting for Pod statefulset-7771/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 21 13:19:38.602: INFO: Updating stateful set ss2
Dec 21 13:19:39.252: INFO: Waiting for StatefulSet statefulset-7771/ss2 to complete update
Dec 21 13:19:39.252: INFO: Waiting for Pod statefulset-7771/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 21 13:19:49.260: INFO: Waiting for StatefulSet statefulset-7771/ss2 to complete update
Dec 21 13:19:49.260: INFO: Waiting for Pod statefulset-7771/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 21 13:19:59.272: INFO: Waiting for StatefulSet statefulset-7771/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 21 13:20:09.270: INFO: Deleting all statefulset in ns statefulset-7771
Dec 21 13:20:09.273: INFO: Scaling statefulset ss2 to 0
Dec 21 13:20:39.397: INFO: Waiting for statefulset status.replicas updated to 0
Dec 21 13:20:39.403: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:20:39.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7771" for this suite.
Dec 21 13:20:47.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:20:47.613: INFO: namespace statefulset-7771 deletion completed in 8.162964655s

• [SLOW TEST:150.198 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
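
For reference, the partition mechanism this test exercises looks roughly like the following manifest fragment. This is a sketch, not the test's generated object: the e2e framework builds the StatefulSet in Go, and only the ss2 name, the replica count, and the nginx image tags are taken from the log above.

  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: ss2
  spec:
    replicas: 3
    serviceName: test                  # headless Service created in "Creating service test" above
    selector:
      matchLabels:
        app: ss2                       # assumed label; the real selector is set by the framework
    updateStrategy:
      type: RollingUpdate
      rollingUpdate:
        partition: 2                   # only pods with ordinal >= 2 (here: ss2-2) move to the new revision
    template:
      metadata:
        labels:
          app: ss2
      spec:
        containers:
        - name: nginx
          image: docker.io/library/nginx:1.15-alpine   # the update target seen in the log

Setting partition above the replica count parks the update entirely ("Not applying an update" above), partition: 2 updates only the canary pod ss2-2, and lowering it toward 0 produces the phased rolling update verified pod by pod in the log.
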
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:20:47.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:20:58.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5923" for this suite.
Dec 21 13:21:48.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:21:48.401: INFO: namespace kubelet-test-5923 deletion completed in 50.244094316s

• [SLOW TEST:60.788 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
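
The hostAliases feature under test adds static entries to the container's /etc/hosts, written by the kubelet alongside the pod's own entry. A minimal sketch, with illustrative IPs and hostnames (the test's actual values are not shown in the log):

  apiVersion: v1
  kind: Pod
  metadata:
    name: busybox-host-aliases         # illustrative name
  spec:
    restartPolicy: Never
    hostAliases:
    - ip: "123.45.67.89"
      hostnames:
      - "foo.local"
      - "bar.local"
    containers:
    - name: busybox
      image: busybox
      command: ["cat", "/etc/hosts"]   # the test asserts the alias entries appear here
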
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:21:48.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-c32b1f0f-f7cf-441a-9983-b85b1afe1d1e
STEP: Creating a pod to test consume configMaps
Dec 21 13:21:48.490: INFO: Waiting up to 5m0s for pod "pod-configmaps-828f2e88-8c19-4714-b0e7-bd6598f47375" in namespace "configmap-5980" to be "success or failure"
Dec 21 13:21:48.536: INFO: Pod "pod-configmaps-828f2e88-8c19-4714-b0e7-bd6598f47375": Phase="Pending", Reason="", readiness=false. Elapsed: 45.504755ms
Dec 21 13:21:50.552: INFO: Pod "pod-configmaps-828f2e88-8c19-4714-b0e7-bd6598f47375": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061965999s
Dec 21 13:21:52.560: INFO: Pod "pod-configmaps-828f2e88-8c19-4714-b0e7-bd6598f47375": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069413061s
Dec 21 13:21:54.599: INFO: Pod "pod-configmaps-828f2e88-8c19-4714-b0e7-bd6598f47375": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108776462s
Dec 21 13:21:56.620: INFO: Pod "pod-configmaps-828f2e88-8c19-4714-b0e7-bd6598f47375": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.129642105s
STEP: Saw pod success
Dec 21 13:21:56.620: INFO: Pod "pod-configmaps-828f2e88-8c19-4714-b0e7-bd6598f47375" satisfied condition "success or failure"
Dec 21 13:21:56.629: INFO: Trying to get logs from node iruya-node pod pod-configmaps-828f2e88-8c19-4714-b0e7-bd6598f47375 container configmap-volume-test: 
STEP: delete the pod
Dec 21 13:21:56.756: INFO: Waiting for pod pod-configmaps-828f2e88-8c19-4714-b0e7-bd6598f47375 to disappear
Dec 21 13:21:56.773: INFO: Pod pod-configmaps-828f2e88-8c19-4714-b0e7-bd6598f47375 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:21:56.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5980" for this suite.
Dec 21 13:22:02.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:22:03.269: INFO: namespace configmap-5980 deletion completed in 6.486764517s

• [SLOW TEST:14.867 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
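
What "consumable from pods in volume as non-root" amounts to: the ConfigMap is mounted as a volume and read by a container running under a non-root UID. A sketch under assumed names (only the ConfigMap name comes from the log; the key, mount path, and UID are illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmap-nonroot        # illustrative; the log's pod name is generated
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000                  # non-root, per the test name
    containers:
    - name: configmap-volume-test
      image: busybox
      command: ["cat", "/etc/configmap-volume/data-1"]   # assumed key/path
      volumeMounts:
      - name: configmap-volume
        mountPath: /etc/configmap-volume
    volumes:
    - name: configmap-volume
      configMap:
        name: configmap-test-volume-c32b1f0f-f7cf-441a-9983-b85b1afe1d1e
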
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:22:03.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Dec 21 13:22:03.368: INFO: namespace kubectl-1493
Dec 21 13:22:03.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1493'
Dec 21 13:22:03.954: INFO: stderr: ""
Dec 21 13:22:03.954: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 21 13:22:04.962: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 13:22:04.962: INFO: Found 0 / 1
Dec 21 13:22:05.964: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 13:22:05.965: INFO: Found 0 / 1
Dec 21 13:22:06.967: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 13:22:06.967: INFO: Found 0 / 1
Dec 21 13:22:07.964: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 13:22:07.964: INFO: Found 0 / 1
Dec 21 13:22:08.963: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 13:22:08.963: INFO: Found 0 / 1
Dec 21 13:22:09.966: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 13:22:09.966: INFO: Found 0 / 1
Dec 21 13:22:10.961: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 13:22:10.961: INFO: Found 0 / 1
Dec 21 13:22:11.961: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 13:22:11.961: INFO: Found 1 / 1
Dec 21 13:22:11.961: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 21 13:22:11.965: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 13:22:11.965: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 21 13:22:11.965: INFO: wait on redis-master startup in kubectl-1493 
Dec 21 13:22:11.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qk6rt redis-master --namespace=kubectl-1493'
Dec 21 13:22:12.144: INFO: stderr: ""
Dec 21 13:22:12.144: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 21 Dec 13:22:10.629 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 21 Dec 13:22:10.629 # Server started, Redis version 3.2.12\n1:M 21 Dec 13:22:10.629 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 21 Dec 13:22:10.629 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Dec 21 13:22:12.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1493'
Dec 21 13:22:12.306: INFO: stderr: ""
Dec 21 13:22:12.306: INFO: stdout: "service/rm2 exposed\n"
Dec 21 13:22:12.311: INFO: Service rm2 in namespace kubectl-1493 found.
STEP: exposing service
Dec 21 13:22:14.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1493'
Dec 21 13:22:14.589: INFO: stderr: ""
Dec 21 13:22:14.589: INFO: stdout: "service/rm3 exposed\n"
Dec 21 13:22:14.602: INFO: Service rm3 in namespace kubectl-1493 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:22:16.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1493" for this suite.
Dec 21 13:22:38.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:22:38.901: INFO: namespace kubectl-1493 deletion completed in 22.259308305s

• [SLOW TEST:35.632 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
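
Each kubectl expose call above is roughly equivalent to creating a Service by hand. For the first command, the manifest would look like this (the selector is assumed from the "map[app:redis]" selector lines in the log; kubectl expose actually copies whatever selector the RC carries):

  apiVersion: v1
  kind: Service
  metadata:
    name: rm2
    namespace: kubectl-1493
  spec:
    selector:
      app: redis          # assumed from the RC's pod selector seen above
    ports:
    - port: 1234          # --port
      targetPort: 6379    # --target-port

The second command (expose service rm2 --name=rm3) does the same starting from rm2's selector, so rm3 ends up pointing at the same redis-master pods on a different port.
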
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:22:38.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-2464/secret-test-23cffd3c-4234-4d73-bddb-65b2a82e2a40
STEP: Creating a pod to test consume secrets
Dec 21 13:22:39.153: INFO: Waiting up to 5m0s for pod "pod-configmaps-caa31fac-1454-47ed-9307-cfc2678ee18f" in namespace "secrets-2464" to be "success or failure"
Dec 21 13:22:39.160: INFO: Pod "pod-configmaps-caa31fac-1454-47ed-9307-cfc2678ee18f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.77746ms
Dec 21 13:22:41.169: INFO: Pod "pod-configmaps-caa31fac-1454-47ed-9307-cfc2678ee18f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01662119s
Dec 21 13:22:43.175: INFO: Pod "pod-configmaps-caa31fac-1454-47ed-9307-cfc2678ee18f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022805136s
Dec 21 13:22:45.191: INFO: Pod "pod-configmaps-caa31fac-1454-47ed-9307-cfc2678ee18f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038155556s
Dec 21 13:22:47.208: INFO: Pod "pod-configmaps-caa31fac-1454-47ed-9307-cfc2678ee18f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.054962176s
STEP: Saw pod success
Dec 21 13:22:47.208: INFO: Pod "pod-configmaps-caa31fac-1454-47ed-9307-cfc2678ee18f" satisfied condition "success or failure"
Dec 21 13:22:47.215: INFO: Trying to get logs from node iruya-node pod pod-configmaps-caa31fac-1454-47ed-9307-cfc2678ee18f container env-test: 
STEP: delete the pod
Dec 21 13:22:47.431: INFO: Waiting for pod pod-configmaps-caa31fac-1454-47ed-9307-cfc2678ee18f to disappear
Dec 21 13:22:47.456: INFO: Pod pod-configmaps-caa31fac-1454-47ed-9307-cfc2678ee18f no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:22:47.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2464" for this suite.
Dec 21 13:22:53.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:22:53.625: INFO: namespace secrets-2464 deletion completed in 6.160627448s

• [SLOW TEST:14.724 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
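
"Consumable via the environment" means the secret's keys are injected as environment variables rather than mounted as files. A sketch with assumed key and variable names (only the secret name is taken from the log):

  apiVersion: v1
  kind: Pod
  metadata:
    name: env-test-pod                 # illustrative; the log's pod name is generated
  spec:
    restartPolicy: Never
    containers:
    - name: env-test
      image: busybox
      command: ["sh", "-c", "env"]     # the test checks the variable appears in the output
      env:
      - name: SECRET_DATA              # illustrative variable name
        valueFrom:
          secretKeyRef:
            name: secret-test-23cffd3c-4234-4d73-bddb-65b2a82e2a40
            key: data-1                # assumed key
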
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:22:53.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 21 13:22:53.751: INFO: Waiting up to 5m0s for pod "pod-35dd4b38-e022-45b2-a89f-0d03743d3852" in namespace "emptydir-1862" to be "success or failure"
Dec 21 13:22:53.761: INFO: Pod "pod-35dd4b38-e022-45b2-a89f-0d03743d3852": Phase="Pending", Reason="", readiness=false. Elapsed: 10.001145ms
Dec 21 13:22:55.771: INFO: Pod "pod-35dd4b38-e022-45b2-a89f-0d03743d3852": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020684206s
Dec 21 13:22:57.779: INFO: Pod "pod-35dd4b38-e022-45b2-a89f-0d03743d3852": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028623582s
Dec 21 13:22:59.792: INFO: Pod "pod-35dd4b38-e022-45b2-a89f-0d03743d3852": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041579181s
Dec 21 13:23:01.806: INFO: Pod "pod-35dd4b38-e022-45b2-a89f-0d03743d3852": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.054955285s
STEP: Saw pod success
Dec 21 13:23:01.806: INFO: Pod "pod-35dd4b38-e022-45b2-a89f-0d03743d3852" satisfied condition "success or failure"
Dec 21 13:23:01.811: INFO: Trying to get logs from node iruya-node pod pod-35dd4b38-e022-45b2-a89f-0d03743d3852 container test-container: 
STEP: delete the pod
Dec 21 13:23:01.987: INFO: Waiting for pod pod-35dd4b38-e022-45b2-a89f-0d03743d3852 to disappear
Dec 21 13:23:01.992: INFO: Pod pod-35dd4b38-e022-45b2-a89f-0d03743d3852 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:23:01.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1862" for this suite.
Dec 21 13:23:08.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:23:08.171: INFO: namespace emptydir-1862 deletion completed in 6.151474631s

• [SLOW TEST:14.545 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
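
This test and the (non-root,0777,default) and (root,0666,default) variants that follow all exercise the same pattern: mount an emptyDir on the default medium (node disk), have the test container create a file with the requested mode, and verify ownership and permissions. A sketch with assumed names; the real tests use the e2e mounttest image rather than a shell one-liner:

  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-mode-test           # illustrative
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000                  # the "non-root" variants; the (root,...) variant omits this
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && ls -ln /test-volume/f"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}                     # default medium; medium: Memory would use tmpfs instead
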
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:23:08.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 21 13:23:08.298: INFO: Waiting up to 5m0s for pod "pod-e2a834b3-5582-4620-8c52-cc37c36b1ed9" in namespace "emptydir-2457" to be "success or failure"
Dec 21 13:23:08.328: INFO: Pod "pod-e2a834b3-5582-4620-8c52-cc37c36b1ed9": Phase="Pending", Reason="", readiness=false. Elapsed: 29.712784ms
Dec 21 13:23:10.338: INFO: Pod "pod-e2a834b3-5582-4620-8c52-cc37c36b1ed9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039692325s
Dec 21 13:23:12.343: INFO: Pod "pod-e2a834b3-5582-4620-8c52-cc37c36b1ed9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044950149s
Dec 21 13:23:14.361: INFO: Pod "pod-e2a834b3-5582-4620-8c52-cc37c36b1ed9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062540721s
Dec 21 13:23:16.564: INFO: Pod "pod-e2a834b3-5582-4620-8c52-cc37c36b1ed9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.266048562s
STEP: Saw pod success
Dec 21 13:23:16.564: INFO: Pod "pod-e2a834b3-5582-4620-8c52-cc37c36b1ed9" satisfied condition "success or failure"
Dec 21 13:23:16.571: INFO: Trying to get logs from node iruya-node pod pod-e2a834b3-5582-4620-8c52-cc37c36b1ed9 container test-container: 
STEP: delete the pod
Dec 21 13:23:16.646: INFO: Waiting for pod pod-e2a834b3-5582-4620-8c52-cc37c36b1ed9 to disappear
Dec 21 13:23:16.650: INFO: Pod pod-e2a834b3-5582-4620-8c52-cc37c36b1ed9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:23:16.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2457" for this suite.
Dec 21 13:23:22.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:23:22.878: INFO: namespace emptydir-2457 deletion completed in 6.224529262s

• [SLOW TEST:14.707 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:23:22.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 21 13:23:23.028: INFO: Waiting up to 5m0s for pod "pod-afad1222-3705-4d6c-b1db-14b6d8addbf5" in namespace "emptydir-7357" to be "success or failure"
Dec 21 13:23:23.051: INFO: Pod "pod-afad1222-3705-4d6c-b1db-14b6d8addbf5": Phase="Pending", Reason="", readiness=false. Elapsed: 23.170753ms
Dec 21 13:23:25.058: INFO: Pod "pod-afad1222-3705-4d6c-b1db-14b6d8addbf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029320255s
Dec 21 13:23:27.064: INFO: Pod "pod-afad1222-3705-4d6c-b1db-14b6d8addbf5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03586982s
Dec 21 13:23:29.071: INFO: Pod "pod-afad1222-3705-4d6c-b1db-14b6d8addbf5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043242418s
Dec 21 13:23:31.083: INFO: Pod "pod-afad1222-3705-4d6c-b1db-14b6d8addbf5": Phase="Running", Reason="", readiness=true. Elapsed: 8.054931424s
Dec 21 13:23:33.089: INFO: Pod "pod-afad1222-3705-4d6c-b1db-14b6d8addbf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.06111249s
STEP: Saw pod success
Dec 21 13:23:33.089: INFO: Pod "pod-afad1222-3705-4d6c-b1db-14b6d8addbf5" satisfied condition "success or failure"
Dec 21 13:23:33.093: INFO: Trying to get logs from node iruya-node pod pod-afad1222-3705-4d6c-b1db-14b6d8addbf5 container test-container: 
STEP: delete the pod
Dec 21 13:23:33.208: INFO: Waiting for pod pod-afad1222-3705-4d6c-b1db-14b6d8addbf5 to disappear
Dec 21 13:23:33.213: INFO: Pod pod-afad1222-3705-4d6c-b1db-14b6d8addbf5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:23:33.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7357" for this suite.
Dec 21 13:23:39.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:23:39.483: INFO: namespace emptydir-7357 deletion completed in 6.261430072s

• [SLOW TEST:16.604 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:23:39.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 21 13:26:38.888: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:26:38.955: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:26:40.956: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:26:40.961: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:26:42.956: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:26:42.963: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:26:44.956: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:26:44.964: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:26:46.956: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:26:46.980: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:26:48.956: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:26:48.962: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:26:50.956: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:26:50.962: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:26:52.956: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:26:52.966: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:26:54.956: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:26:54.966: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:26:56.955: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:26:56.962: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:26:58.955: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:26:58.966: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:27:00.956: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:27:00.978: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:27:02.956: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:27:02.967: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:27:04.955: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:27:04.964: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:27:06.956: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:27:06.964: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:27:08.955: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:27:08.962: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:27:10.955: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:27:10.975: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:27:12.955: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:27:12.962: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:27:14.955: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:27:14.963: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:27:16.955: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:27:16.959: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:27:18.955: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:27:18.962: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:27:20.956: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:27:20.968: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:27:22.956: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:27:22.970: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:27:24.955: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:27:24.963: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:27:26.955: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:27:26.961: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:27:28.956: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:27:28.975: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:27:30.955: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:27:30.962: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:27:32.956: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:27:32.967: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:27:34.956: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:27:34.968: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:27:36.955: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:27:36.965: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:27:38.956: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:27:38.969: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:27:40.956: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:27:40.976: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:27:42.955: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:27:42.967: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:27:44.955: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:27:44.965: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:27:46.955: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:27:46.963: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:27:48.955: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:27:48.969: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:27:50.956: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:27:50.970: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:27:52.956: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:27:52.967: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:27:54.956: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:27:54.965: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:27:56.956: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:27:56.983: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:27:58.956: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:27:58.965: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:28:00.956: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:28:00.963: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:28:02.955: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:28:02.971: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:28:04.956: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:28:04.968: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 13:28:06.955: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 13:28:06.962: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:28:06.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7284" for this suite.
Dec 21 13:28:28.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:28:29.118: INFO: namespace container-lifecycle-hook-7284 deletion completed in 22.149900271s

• [SLOW TEST:289.634 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
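
The shape of the pod under test: a postStart exec hook runs right after the container starts, and the container is not marked Running until the hook returns. The hook body below is illustrative; the real test's hook calls out to the HTTPGet handler pod created in BeforeEach:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-poststart-exec-hook
  spec:
    containers:
    - name: pod-with-poststart-exec-hook
      image: busybox
      command: ["sh", "-c", "sleep 600"]
      lifecycle:
        postStart:
          exec:
            command: ["sh", "-c", "echo poststart > /tmp/poststart"]   # illustrative hook body
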
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:28:29.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-188796de-a488-4e11-a12d-47825a21f044
STEP: Creating a pod to test consume secrets
Dec 21 13:28:29.211: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-66e735f1-54b7-4d9e-8dc7-2ca883572806" in namespace "projected-5794" to be "success or failure"
Dec 21 13:28:29.244: INFO: Pod "pod-projected-secrets-66e735f1-54b7-4d9e-8dc7-2ca883572806": Phase="Pending", Reason="", readiness=false. Elapsed: 33.452746ms
Dec 21 13:28:31.253: INFO: Pod "pod-projected-secrets-66e735f1-54b7-4d9e-8dc7-2ca883572806": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042044641s
Dec 21 13:28:33.260: INFO: Pod "pod-projected-secrets-66e735f1-54b7-4d9e-8dc7-2ca883572806": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048750284s
Dec 21 13:28:35.270: INFO: Pod "pod-projected-secrets-66e735f1-54b7-4d9e-8dc7-2ca883572806": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058855337s
Dec 21 13:28:37.296: INFO: Pod "pod-projected-secrets-66e735f1-54b7-4d9e-8dc7-2ca883572806": Phase="Running", Reason="", readiness=true. Elapsed: 8.085592883s
Dec 21 13:28:39.304: INFO: Pod "pod-projected-secrets-66e735f1-54b7-4d9e-8dc7-2ca883572806": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.093122276s
STEP: Saw pod success
Dec 21 13:28:39.304: INFO: Pod "pod-projected-secrets-66e735f1-54b7-4d9e-8dc7-2ca883572806" satisfied condition "success or failure"
Dec 21 13:28:39.314: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-66e735f1-54b7-4d9e-8dc7-2ca883572806 container projected-secret-volume-test: 
STEP: delete the pod
Dec 21 13:28:39.422: INFO: Waiting for pod pod-projected-secrets-66e735f1-54b7-4d9e-8dc7-2ca883572806 to disappear
Dec 21 13:28:39.444: INFO: Pod pod-projected-secrets-66e735f1-54b7-4d9e-8dc7-2ca883572806 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:28:39.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5794" for this suite.
Dec 21 13:28:45.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:28:45.613: INFO: namespace projected-5794 deletion completed in 6.161186493s

• [SLOW TEST:16.495 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
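
Here the secret is delivered through a projected volume, with defaultMode controlling the file permissions and fsGroup the group ownership of the projected files. A sketch under assumed values (only the secret name comes from the log):

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-secret         # illustrative; the real name is generated
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000                  # non-root, per the test name
      fsGroup: 1001                    # assumed gid; applied to the projected files
    containers:
    - name: projected-secret-volume-test
      image: busybox
      command: ["ls", "-ln", "/etc/projected-secret-volume"]
      volumeMounts:
      - name: projected-secret-volume
        mountPath: /etc/projected-secret-volume
    volumes:
    - name: projected-secret-volume
      projected:
        defaultMode: 0400              # assumed mode; the test asserts the files carry it
        sources:
        - secret:
            name: projected-secret-test-188796de-a488-4e11-a12d-47825a21f044
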
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:28:45.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 21 13:28:45.685: INFO: Creating deployment "nginx-deployment"
Dec 21 13:28:45.732: INFO: Waiting for observed generation 1
Dec 21 13:28:48.134: INFO: Waiting for all required pods to come up
Dec 21 13:28:48.147: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Dec 21 13:29:15.512: INFO: Waiting for deployment "nginx-deployment" to complete
Dec 21 13:29:15.521: INFO: Updating deployment "nginx-deployment" with a non-existent image
Dec 21 13:29:15.540: INFO: Updating deployment nginx-deployment
Dec 21 13:29:15.540: INFO: Waiting for observed generation 2
Dec 21 13:29:17.838: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Dec 21 13:29:17.864: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Dec 21 13:29:18.341: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 21 13:29:18.622: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Dec 21 13:29:18.623: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Dec 21 13:29:18.672: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 21 13:29:18.679: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Dec 21 13:29:18.679: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Dec 21 13:29:18.690: INFO: Updating deployment nginx-deployment
Dec 21 13:29:18.690: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Dec 21 13:29:20.874: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Dec 21 13:29:20.904: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
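
The two .spec.replicas values just verified are what proportional scaling predicts from the numbers in this log. At scale-up time the deployment holds 13 pods (old ReplicaSet 8, new ReplicaSet 5) and is being scaled from 10 to 30 with maxSurge: 3, so roughly:

  total allowed      = 30 desired + 3 maxSurge   = 33
  replicas to add    = 33 - (8 old + 5 new)      = 20
  old RS share       = 20 * 8/13, rounded        = 12   ->  8 + 12 = 20
  new RS share       = 20 - 12                   =  8   ->  5 +  8 = 13

The exact rounding and leftover rules are the controller's; the point is that the extra replicas are split in proportion to each ReplicaSet's current size, so even the rollout stuck on the non-existent nginx:404 image receives its share.
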
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 21 13:29:22.548: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-6592,SelfLink:/apis/apps/v1/namespaces/deployment-6592/deployments/nginx-deployment,UID:ce269778-d4c6-453e-ba84-f7e9bff3d791,ResourceVersion:17514835,Generation:3,CreationTimestamp:2019-12-21 13:28:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2019-12-21 13:29:18 +0000 UTC 2019-12-21 13:28:45 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2019-12-21 13:29:20 +0000 UTC 2019-12-21 13:29:20 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Dec 21 13:29:24.745: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-6592,SelfLink:/apis/apps/v1/namespaces/deployment-6592/replicasets/nginx-deployment-55fb7cb77f,UID:fc80bc58-3917-4600-852c-e32249591b72,ResourceVersion:17514824,Generation:3,CreationTimestamp:2019-12-21 13:29:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment ce269778-d4c6-453e-ba84-f7e9bff3d791 0xc003156237 0xc003156238}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 21 13:29:24.745: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Dec 21 13:29:24.745: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-6592,SelfLink:/apis/apps/v1/namespaces/deployment-6592/replicasets/nginx-deployment-7b8c6f4498,UID:1c94ff73-20cb-4a16-8411-0a22f71a23d4,ResourceVersion:17514868,Generation:3,CreationTimestamp:2019-12-21 13:28:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment ce269778-d4c6-453e-ba84-f7e9bff3d791 0xc003156307 0xc003156308}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Dec 21 13:29:26.132: INFO: Pod "nginx-deployment-55fb7cb77f-6tnqk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6tnqk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6592,SelfLink:/api/v1/namespaces/deployment-6592/pods/nginx-deployment-55fb7cb77f-6tnqk,UID:82b547dd-0847-48bb-9794-1d5ef415c8d7,ResourceVersion:17514876,Generation:0,CreationTimestamp:2019-12-21 13:29:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f fc80bc58-3917-4600-852c-e32249591b72 0xc002645ea7 0xc002645ea8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rvgn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvgn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rvgn7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002645f20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002645f40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:23 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:29:26.133: INFO: Pod "nginx-deployment-55fb7cb77f-88swj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-88swj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6592,SelfLink:/api/v1/namespaces/deployment-6592/pods/nginx-deployment-55fb7cb77f-88swj,UID:aa8c5603-0efe-4100-8af7-a5b4b513ed56,ResourceVersion:17514882,Generation:0,CreationTimestamp:2019-12-21 13:29:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f fc80bc58-3917-4600-852c-e32249591b72 0xc002645fc7 0xc002645fc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rvgn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvgn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rvgn7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002348100} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002348190}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:29:26.133: INFO: Pod "nginx-deployment-55fb7cb77f-blvsm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-blvsm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6592,SelfLink:/api/v1/namespaces/deployment-6592/pods/nginx-deployment-55fb7cb77f-blvsm,UID:48fa2952-7ae1-4620-a932-e87366f56605,ResourceVersion:17514861,Generation:0,CreationTimestamp:2019-12-21 13:29:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f fc80bc58-3917-4600-852c-e32249591b72 0xc0023483f7 0xc0023483f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rvgn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvgn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rvgn7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002348470} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002348490}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:22 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:29:26.133: INFO: Pod "nginx-deployment-55fb7cb77f-c95n7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-c95n7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6592,SelfLink:/api/v1/namespaces/deployment-6592/pods/nginx-deployment-55fb7cb77f-c95n7,UID:a36e1b5e-b42d-4291-805c-2cbdf5ed9bb3,ResourceVersion:17514874,Generation:0,CreationTimestamp:2019-12-21 13:29:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f fc80bc58-3917-4600-852c-e32249591b72 0xc002348517 0xc002348518}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rvgn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvgn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rvgn7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002348590} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023485b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:23 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:29:26.133: INFO: Pod "nginx-deployment-55fb7cb77f-c9m26" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-c9m26,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6592,SelfLink:/api/v1/namespaces/deployment-6592/pods/nginx-deployment-55fb7cb77f-c9m26,UID:8512d9d9-e5e2-47b1-8cc7-b094a5f9f7a8,ResourceVersion:17514801,Generation:0,CreationTimestamp:2019-12-21 13:29:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f fc80bc58-3917-4600-852c-e32249591b72 0xc002348637 0xc002348638}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rvgn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvgn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rvgn7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0023486b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023486d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:15 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-21 13:29:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:29:26.133: INFO: Pod "nginx-deployment-55fb7cb77f-dmqkm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-dmqkm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6592,SelfLink:/api/v1/namespaces/deployment-6592/pods/nginx-deployment-55fb7cb77f-dmqkm,UID:526a0a95-eae2-42ec-b470-b539e4c8a2e0,ResourceVersion:17514789,Generation:0,CreationTimestamp:2019-12-21 13:29:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f fc80bc58-3917-4600-852c-e32249591b72 0xc0023487b7 0xc0023487b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rvgn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvgn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rvgn7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002348830} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002348850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:15 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-21 13:29:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:29:26.134: INFO: Pod "nginx-deployment-55fb7cb77f-gtzlp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gtzlp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6592,SelfLink:/api/v1/namespaces/deployment-6592/pods/nginx-deployment-55fb7cb77f-gtzlp,UID:2d2e1419-e8e2-4aee-8fdc-a6f3f52b1270,ResourceVersion:17514838,Generation:0,CreationTimestamp:2019-12-21 13:29:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f fc80bc58-3917-4600-852c-e32249591b72 0xc002348927 0xc002348928}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rvgn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvgn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rvgn7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002348990} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023489b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:29:26.134: INFO: Pod "nginx-deployment-55fb7cb77f-mspl7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mspl7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6592,SelfLink:/api/v1/namespaces/deployment-6592/pods/nginx-deployment-55fb7cb77f-mspl7,UID:db085d16-e038-4970-b891-a36f1ee83598,ResourceVersion:17514863,Generation:0,CreationTimestamp:2019-12-21 13:29:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f fc80bc58-3917-4600-852c-e32249591b72 0xc002348a37 0xc002348a38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rvgn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvgn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rvgn7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002348ab0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002348ad0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:22 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:29:26.134: INFO: Pod "nginx-deployment-55fb7cb77f-qr2zn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-qr2zn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6592,SelfLink:/api/v1/namespaces/deployment-6592/pods/nginx-deployment-55fb7cb77f-qr2zn,UID:f5571936-7495-4808-8e74-c8907f5f7e14,ResourceVersion:17514878,Generation:0,CreationTimestamp:2019-12-21 13:29:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f fc80bc58-3917-4600-852c-e32249591b72 0xc002348b57 0xc002348b58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rvgn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvgn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rvgn7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002348bd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002348bf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:23 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:29:26.134: INFO: Pod "nginx-deployment-55fb7cb77f-qs5b9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-qs5b9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6592,SelfLink:/api/v1/namespaces/deployment-6592/pods/nginx-deployment-55fb7cb77f-qs5b9,UID:93a34ffa-263b-4925-a660-9a7400bf09d0,ResourceVersion:17514796,Generation:0,CreationTimestamp:2019-12-21 13:29:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f fc80bc58-3917-4600-852c-e32249591b72 0xc002348c77 0xc002348c78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rvgn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvgn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rvgn7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002348ce0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002348d00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:15 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-21 13:29:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:29:26.135: INFO: Pod "nginx-deployment-55fb7cb77f-rhl4j" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rhl4j,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6592,SelfLink:/api/v1/namespaces/deployment-6592/pods/nginx-deployment-55fb7cb77f-rhl4j,UID:e38284c2-96a5-4212-9b84-7f5f5c190796,ResourceVersion:17514873,Generation:0,CreationTimestamp:2019-12-21 13:29:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f fc80bc58-3917-4600-852c-e32249591b72 0xc002348dd7 0xc002348dd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rvgn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvgn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rvgn7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002348e40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002348e60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:23 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:29:26.135: INFO: Pod "nginx-deployment-55fb7cb77f-s2hmp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-s2hmp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6592,SelfLink:/api/v1/namespaces/deployment-6592/pods/nginx-deployment-55fb7cb77f-s2hmp,UID:237a1a2a-04cc-4852-91a9-3925dbf3bcaf,ResourceVersion:17514819,Generation:0,CreationTimestamp:2019-12-21 13:29:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f fc80bc58-3917-4600-852c-e32249591b72 0xc002348ee7 0xc002348ee8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rvgn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvgn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rvgn7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002348f50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002348f70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:16 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-21 13:29:16 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:29:26.135: INFO: Pod "nginx-deployment-55fb7cb77f-tw54j" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-tw54j,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6592,SelfLink:/api/v1/namespaces/deployment-6592/pods/nginx-deployment-55fb7cb77f-tw54j,UID:3ddfc2d3-b906-4a26-84ed-48805da61b85,ResourceVersion:17514834,Generation:0,CreationTimestamp:2019-12-21 13:29:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f fc80bc58-3917-4600-852c-e32249591b72 0xc002349047 0xc002349048}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rvgn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvgn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rvgn7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0023490c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023490e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:17 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-21 13:29:19 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:29:26.136: INFO: Pod "nginx-deployment-7b8c6f4498-4njmw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4njmw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6592,SelfLink:/api/v1/namespaces/deployment-6592/pods/nginx-deployment-7b8c6f4498-4njmw,UID:cafa6051-ab5c-4ae3-9b53-dedc4b54c0d8,ResourceVersion:17514872,Generation:0,CreationTimestamp:2019-12-21 13:29:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1c94ff73-20cb-4a16-8411-0a22f71a23d4 0xc0023491b7 0xc0023491b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rvgn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvgn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rvgn7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002349230} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002349250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:23 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:29:26.136: INFO: Pod "nginx-deployment-7b8c6f4498-4rz47" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4rz47,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6592,SelfLink:/api/v1/namespaces/deployment-6592/pods/nginx-deployment-7b8c6f4498-4rz47,UID:3c9d4420-39c9-43f0-95ea-aab2359ec859,ResourceVersion:17514886,Generation:0,CreationTimestamp:2019-12-21 13:29:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1c94ff73-20cb-4a16-8411-0a22f71a23d4 0xc0023492d7 0xc0023492d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rvgn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvgn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rvgn7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002349350} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002349370}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-21 13:29:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:29:26.136: INFO: Pod "nginx-deployment-7b8c6f4498-5khns" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5khns,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6592,SelfLink:/api/v1/namespaces/deployment-6592/pods/nginx-deployment-7b8c6f4498-5khns,UID:fa3a7e8a-44ad-4f7b-8ee6-de58814ebccd,ResourceVersion:17514730,Generation:0,CreationTimestamp:2019-12-21 13:28:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1c94ff73-20cb-4a16-8411-0a22f71a23d4 0xc002349447 0xc002349448}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rvgn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvgn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rvgn7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0023494c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023494e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:28:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:28:45 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2019-12-21 13:28:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-21 13:29:09 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://63617216fec99482af23a2b0f65414b0e35243e8f288665444c08deed92ec322}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:29:26.136: INFO: Pod "nginx-deployment-7b8c6f4498-5xmtr" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5xmtr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6592,SelfLink:/api/v1/namespaces/deployment-6592/pods/nginx-deployment-7b8c6f4498-5xmtr,UID:4e15f94a-fa8c-4986-96b6-da7a0e3b9cae,ResourceVersion:17514756,Generation:0,CreationTimestamp:2019-12-21 13:28:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1c94ff73-20cb-4a16-8411-0a22f71a23d4 0xc0023495b7 0xc0023495b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rvgn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvgn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rvgn7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002349620} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002349640}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:28:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:13 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:28:45 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2019-12-21 13:28:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-21 13:29:13 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://241d4b651d8aae506fa2d1fc7807eb211a15d624cc83c2fe37cafb51166bb2f0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:29:26.137: INFO: Pod "nginx-deployment-7b8c6f4498-67srh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-67srh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6592,SelfLink:/api/v1/namespaces/deployment-6592/pods/nginx-deployment-7b8c6f4498-67srh,UID:15fc146b-317a-42c0-8aa6-96a84a6677e2,ResourceVersion:17514877,Generation:0,CreationTimestamp:2019-12-21 13:29:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1c94ff73-20cb-4a16-8411-0a22f71a23d4 0xc002349737 0xc002349738}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rvgn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvgn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rvgn7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0023497a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023497c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:23 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:29:26.137: INFO: Pod "nginx-deployment-7b8c6f4498-72tkt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-72tkt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6592,SelfLink:/api/v1/namespaces/deployment-6592/pods/nginx-deployment-7b8c6f4498-72tkt,UID:091e3c7a-36eb-4959-ace6-2adbe9c646b7,ResourceVersion:17514864,Generation:0,CreationTimestamp:2019-12-21 13:29:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1c94ff73-20cb-4a16-8411-0a22f71a23d4 0xc002349847 0xc002349848}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rvgn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvgn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rvgn7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0023498c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023498e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:22 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:29:26.137: INFO: Pod "nginx-deployment-7b8c6f4498-9cvt4" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9cvt4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6592,SelfLink:/api/v1/namespaces/deployment-6592/pods/nginx-deployment-7b8c6f4498-9cvt4,UID:9df75f6f-4d57-4784-8bc3-d65b29ac7cd7,ResourceVersion:17514753,Generation:0,CreationTimestamp:2019-12-21 13:28:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1c94ff73-20cb-4a16-8411-0a22f71a23d4 0xc002349967 0xc002349968}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rvgn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvgn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rvgn7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0023499e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002349a00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:28:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:13 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:28:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2019-12-21 13:28:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-21 13:29:12 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://fc337125437d7830881a00c93c929e1879b3725ec9b540d4f63e4f9fad7974ac}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:29:26.137: INFO: Pod "nginx-deployment-7b8c6f4498-bcn8w" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bcn8w,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6592,SelfLink:/api/v1/namespaces/deployment-6592/pods/nginx-deployment-7b8c6f4498-bcn8w,UID:8cbf6497-758c-4990-bd9b-73fd8377f869,ResourceVersion:17514715,Generation:0,CreationTimestamp:2019-12-21 13:28:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1c94ff73-20cb-4a16-8411-0a22f71a23d4 0xc002349ad7 0xc002349ad8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rvgn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvgn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rvgn7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002349b50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002349b70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:28:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:28:45 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-21 13:28:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-21 13:29:09 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://3f7fbdbeee9c1dc9d04c351a1b368382fa04df2843c13c858ab9278e31344fd0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:29:26.137: INFO: Pod "nginx-deployment-7b8c6f4498-blvxm" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-blvxm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6592,SelfLink:/api/v1/namespaces/deployment-6592/pods/nginx-deployment-7b8c6f4498-blvxm,UID:c1e1d613-40b3-4872-89e5-e6fa7351a1e4,ResourceVersion:17514746,Generation:0,CreationTimestamp:2019-12-21 13:28:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1c94ff73-20cb-4a16-8411-0a22f71a23d4 0xc002349c47 0xc002349c48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rvgn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvgn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rvgn7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002349cc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002349ce0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:28:45 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:28:45 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2019-12-21 13:28:45 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-21 13:29:10 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ea265d828a6b93230847db3fa1d7e80f0a8564e2d61ededdde4007aa5dd31f9e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:29:26.137: INFO: Pod "nginx-deployment-7b8c6f4498-dxw7j" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dxw7j,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6592,SelfLink:/api/v1/namespaces/deployment-6592/pods/nginx-deployment-7b8c6f4498-dxw7j,UID:da88bdac-6953-4c20-9e85-6a1d75ec6caa,ResourceVersion:17514889,Generation:0,CreationTimestamp:2019-12-21 13:29:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1c94ff73-20cb-4a16-8411-0a22f71a23d4 0xc002349dd7 0xc002349dd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rvgn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvgn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rvgn7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002349e40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002349e60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:20 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-21 13:29:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:29:26.138: INFO: Pod "nginx-deployment-7b8c6f4498-f7hvf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-f7hvf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6592,SelfLink:/api/v1/namespaces/deployment-6592/pods/nginx-deployment-7b8c6f4498-f7hvf,UID:2db40ac2-c895-490c-8920-28fd21f90574,ResourceVersion:17514836,Generation:0,CreationTimestamp:2019-12-21 13:29:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1c94ff73-20cb-4a16-8411-0a22f71a23d4 0xc002349f27 0xc002349f28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rvgn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvgn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rvgn7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002349f90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002349fb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:29:26.138: INFO: Pod "nginx-deployment-7b8c6f4498-jlk8j" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jlk8j,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6592,SelfLink:/api/v1/namespaces/deployment-6592/pods/nginx-deployment-7b8c6f4498-jlk8j,UID:564b49d8-a912-42ef-991c-f798cd21be76,ResourceVersion:17514731,Generation:0,CreationTimestamp:2019-12-21 13:28:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1c94ff73-20cb-4a16-8411-0a22f71a23d4 0xc001f02037 0xc001f02038}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rvgn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvgn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rvgn7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f020a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f020c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:28:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:28:45 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2019-12-21 13:28:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-21 13:29:10 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e6c8f9ea7c32b275a4ee91e9e0267d86f02c8c7a82b1d67630a9405be9351c59}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:29:26.138: INFO: Pod "nginx-deployment-7b8c6f4498-lhd2m" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lhd2m,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6592,SelfLink:/api/v1/namespaces/deployment-6592/pods/nginx-deployment-7b8c6f4498-lhd2m,UID:57f2a473-4075-4dba-bd5e-02b7b4f6d68b,ResourceVersion:17514860,Generation:0,CreationTimestamp:2019-12-21 13:29:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1c94ff73-20cb-4a16-8411-0a22f71a23d4 0xc001f02197 0xc001f02198}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rvgn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvgn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rvgn7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f02210} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f02230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:22 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:29:26.138: INFO: Pod "nginx-deployment-7b8c6f4498-q4ksp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-q4ksp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6592,SelfLink:/api/v1/namespaces/deployment-6592/pods/nginx-deployment-7b8c6f4498-q4ksp,UID:218d03c1-0c74-47b5-8af5-e537c1988038,ResourceVersion:17514859,Generation:0,CreationTimestamp:2019-12-21 13:29:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1c94ff73-20cb-4a16-8411-0a22f71a23d4 0xc001f022b7 0xc001f022b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rvgn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvgn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rvgn7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f02320} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f02340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:22 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:29:26.138: INFO: Pod "nginx-deployment-7b8c6f4498-q968g" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-q968g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6592,SelfLink:/api/v1/namespaces/deployment-6592/pods/nginx-deployment-7b8c6f4498-q968g,UID:6999a036-1145-4196-a917-74d1d5b8aef3,ResourceVersion:17514870,Generation:0,CreationTimestamp:2019-12-21 13:29:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1c94ff73-20cb-4a16-8411-0a22f71a23d4 0xc001f023c7 0xc001f023c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rvgn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvgn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rvgn7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f02430} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f02450}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:23 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:29:26.138: INFO: Pod "nginx-deployment-7b8c6f4498-r58qg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-r58qg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6592,SelfLink:/api/v1/namespaces/deployment-6592/pods/nginx-deployment-7b8c6f4498-r58qg,UID:70bd956d-9c6c-42f5-85c1-78e7fe1226d7,ResourceVersion:17514879,Generation:0,CreationTimestamp:2019-12-21 13:29:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1c94ff73-20cb-4a16-8411-0a22f71a23d4 0xc001f024d7 0xc001f024d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rvgn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvgn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rvgn7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f02540} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f02560}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:23 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:29:26.138: INFO: Pod "nginx-deployment-7b8c6f4498-r5p9x" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-r5p9x,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6592,SelfLink:/api/v1/namespaces/deployment-6592/pods/nginx-deployment-7b8c6f4498-r5p9x,UID:ab254084-e421-4943-b54c-b818e4c9427b,ResourceVersion:17514875,Generation:0,CreationTimestamp:2019-12-21 13:29:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1c94ff73-20cb-4a16-8411-0a22f71a23d4 0xc001f025e7 0xc001f025e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rvgn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvgn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rvgn7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f02660} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f02680}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:23 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:29:26.138: INFO: Pod "nginx-deployment-7b8c6f4498-rctw8" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rctw8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6592,SelfLink:/api/v1/namespaces/deployment-6592/pods/nginx-deployment-7b8c6f4498-rctw8,UID:171fc33d-657b-407f-b6c0-821b74221a71,ResourceVersion:17514741,Generation:0,CreationTimestamp:2019-12-21 13:28:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1c94ff73-20cb-4a16-8411-0a22f71a23d4 0xc001f02717 0xc001f02718}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rvgn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvgn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rvgn7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f02790} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f027b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:28:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:28:45 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2019-12-21 13:28:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-21 13:29:10 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8016160edf9c9e983d22251fe24c455751cddde11fc88b3a7984195a7beaa44b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:29:26.138: INFO: Pod "nginx-deployment-7b8c6f4498-t5f6n" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-t5f6n,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6592,SelfLink:/api/v1/namespaces/deployment-6592/pods/nginx-deployment-7b8c6f4498-t5f6n,UID:8f003ed0-8733-465d-a129-f99ba95fe34a,ResourceVersion:17514737,Generation:0,CreationTimestamp:2019-12-21 13:28:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1c94ff73-20cb-4a16-8411-0a22f71a23d4 0xc001f02887 0xc001f02888}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rvgn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvgn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rvgn7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f02900} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f02920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:28:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:28:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-21 13:28:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-21 13:29:08 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0ebde2388532289c05906c5157ca12e70fbfd4299f49f1d4cc88d65a2707155f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:29:26.138: INFO: Pod "nginx-deployment-7b8c6f4498-z2qkf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-z2qkf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6592,SelfLink:/api/v1/namespaces/deployment-6592/pods/nginx-deployment-7b8c6f4498-z2qkf,UID:6ac635bb-109a-4a92-8756-66cd1a5ce156,ResourceVersion:17514862,Generation:0,CreationTimestamp:2019-12-21 13:29:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1c94ff73-20cb-4a16-8411-0a22f71a23d4 0xc001f029f7 0xc001f029f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rvgn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvgn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rvgn7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f02a60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f02a80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:29:22 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:29:26.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6592" for this suite.
Dec 21 13:30:23.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:30:23.677: INFO: namespace deployment-6592 deletion completed in 54.93846632s

• [SLOW TEST:98.063 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
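The proportional-scaling behaviour exercised above is a property of the deployment's rollingUpdate strategy: when a deployment is scaled while a rollout is still in flight, the controller splits the replica delta across the old and new ReplicaSets in proportion to their current sizes, which is why the dump shows both running and still-pending pods for ReplicaSet nginx-deployment-7b8c6f4498. A minimal client-go sketch of that setup follows; it assumes the pre-1.18 method signatures (no context argument) matching the v1.15 libraries in this run, and the function name, replica count and surge/unavailable values are illustrative, not the test's exact numbers.

    package sketches

    import (
        appsv1 "k8s.io/api/apps/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
        "k8s.io/client-go/kubernetes"
    )

    // demoProportionalScaling scales a deployment up while a rolling update is
    // in progress; with maxSurge/maxUnavailable set, the extra replicas are
    // distributed across the old and new ReplicaSets proportionally.
    func demoProportionalScaling(cs kubernetes.Interface, ns, name string) error {
        d, err := cs.AppsV1().Deployments(ns).Get(name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        surge := intstr.FromInt(3)       // extra pods allowed above the desired count
        unavailable := intstr.FromInt(2) // pods allowed to be down during the rollout
        d.Spec.Strategy = appsv1.DeploymentStrategy{
            Type: appsv1.RollingUpdateDeploymentStrategyType,
            RollingUpdate: &appsv1.RollingUpdateDeployment{
                MaxSurge:       &surge,
                MaxUnavailable: &unavailable,
            },
        }
        replicas := int32(30) // scale up mid-rollout to trigger proportional scaling
        d.Spec.Replicas = &replicas
        _, err = cs.AppsV1().Deployments(ns).Update(d)
        return err
    }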
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:30:23.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 21 13:30:33.001: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:30:33.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8574" for this suite.
Dec 21 13:30:39.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:30:39.171: INFO: namespace container-runtime-8574 deletion completed in 6.091256599s

• [SLOW TEST:15.493 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
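Note on the "Expected: &{} to match Container's Termination Message:  --" line above: TerminationMessagePolicy FallbackToLogsOnError only substitutes the tail of the container's logs when the container fails; this container exited 0 and wrote nothing to /dev/termination-log, so an empty message is the expected result. A minimal sketch of such a pod, assuming pre-1.18 client-go signatures; the pod name, image and command are illustrative stand-ins for the framework's own values.

    package sketches

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // demoTerminationMessage creates a pod that succeeds without writing to
    // /dev/termination-log; with FallbackToLogsOnError the reported message
    // stays empty, because the log fallback applies only to failed containers.
    func demoTerminationMessage(cs kubernetes.Interface, ns string) (*corev1.Pod, error) {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:                     "main",
                    Image:                    "busybox",
                    Command:                  []string{"sh", "-c", "exit 0"}, // succeed, write nothing
                    TerminationMessagePath:   "/dev/termination-log",
                    TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
                }},
            },
        }
        return cs.CoreV1().Pods(ns).Create(pod) // pre-1.18 signature
    }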
SSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:30:39.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Dec 21 13:30:39.317: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8332,SelfLink:/api/v1/namespaces/watch-8332/configmaps/e2e-watch-test-watch-closed,UID:eef387e0-a48e-49d0-a35e-7768290af883,ResourceVersion:17515189,Generation:0,CreationTimestamp:2019-12-21 13:30:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 21 13:30:39.318: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8332,SelfLink:/api/v1/namespaces/watch-8332/configmaps/e2e-watch-test-watch-closed,UID:eef387e0-a48e-49d0-a35e-7768290af883,ResourceVersion:17515190,Generation:0,CreationTimestamp:2019-12-21 13:30:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Dec 21 13:30:39.340: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8332,SelfLink:/api/v1/namespaces/watch-8332/configmaps/e2e-watch-test-watch-closed,UID:eef387e0-a48e-49d0-a35e-7768290af883,ResourceVersion:17515191,Generation:0,CreationTimestamp:2019-12-21 13:30:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 21 13:30:39.341: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8332,SelfLink:/api/v1/namespaces/watch-8332/configmaps/e2e-watch-test-watch-closed,UID:eef387e0-a48e-49d0-a35e-7768290af883,ResourceVersion:17515192,Generation:0,CreationTimestamp:2019-12-21 13:30:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:30:39.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8332" for this suite.
Dec 21 13:30:45.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:30:45.507: INFO: namespace watch-8332 deletion completed in 6.148702214s

• [SLOW TEST:6.337 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
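The restart trick above hinges on passing the last observed resourceVersion back in ListOptions: the new watch then replays every event after that version (subject to how much history the server still retains), which is why the MODIFIED event with mutation: 2 and the DELETED event both arrive even though they fired while no watch was open. A minimal sketch, assuming the pre-1.18 Watch(opts) signature; the label selector value comes from the test's own labels, the rest is illustrative.

    package sketches

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // demoWatchResume re-opens a watch on configmaps from a previously observed
    // resourceVersion, replaying the events that fired while the first watch
    // was closed.
    func demoWatchResume(cs kubernetes.Interface, ns, lastRV string) error {
        w, err := cs.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{
            LabelSelector:   "watch-this-configmap=watch-closed-and-restarted",
            ResourceVersion: lastRV, // resume point recorded from the previous watch
        })
        if err != nil {
            return err
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
        }
        return nil
    }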
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:30:45.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-8f9a97cb-d16d-4836-ac93-374a8a4979cd
STEP: Creating a pod to test consume secrets
Dec 21 13:30:45.665: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e13544ee-ceb3-4a92-a9f0-59f340c62593" in namespace "projected-6611" to be "success or failure"
Dec 21 13:30:45.674: INFO: Pod "pod-projected-secrets-e13544ee-ceb3-4a92-a9f0-59f340c62593": Phase="Pending", Reason="", readiness=false. Elapsed: 8.677883ms
Dec 21 13:30:47.742: INFO: Pod "pod-projected-secrets-e13544ee-ceb3-4a92-a9f0-59f340c62593": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076738116s
Dec 21 13:30:49.751: INFO: Pod "pod-projected-secrets-e13544ee-ceb3-4a92-a9f0-59f340c62593": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08621788s
Dec 21 13:30:51.786: INFO: Pod "pod-projected-secrets-e13544ee-ceb3-4a92-a9f0-59f340c62593": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121334741s
Dec 21 13:30:54.118: INFO: Pod "pod-projected-secrets-e13544ee-ceb3-4a92-a9f0-59f340c62593": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.452766214s
STEP: Saw pod success
Dec 21 13:30:54.118: INFO: Pod "pod-projected-secrets-e13544ee-ceb3-4a92-a9f0-59f340c62593" satisfied condition "success or failure"
Dec 21 13:30:54.122: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-e13544ee-ceb3-4a92-a9f0-59f340c62593 container secret-volume-test: 
STEP: delete the pod
Dec 21 13:30:54.280: INFO: Waiting for pod pod-projected-secrets-e13544ee-ceb3-4a92-a9f0-59f340c62593 to disappear
Dec 21 13:30:54.293: INFO: Pod pod-projected-secrets-e13544ee-ceb3-4a92-a9f0-59f340c62593 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:30:54.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6611" for this suite.
Dec 21 13:31:00.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:31:00.514: INFO: namespace projected-6611 deletion completed in 6.205857853s

• [SLOW TEST:15.007 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
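For reference, the "multiple volumes" shape being tested: the same secret is exposed through two independent projected volumes mounted at two paths in one container. A hedged sketch of that pod spec, assuming pre-1.18 client-go signatures; volume names, mount paths, image and command are illustrative.

    package sketches

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // projectedSecretVolume builds one projected volume exposing the secret.
    func projectedSecretVolume(volName, secretName string) corev1.Volume {
        return corev1.Volume{
            Name: volName,
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
                        },
                    }},
                },
            },
        }
    }

    // demoTwoProjectedMounts mounts the same secret at two paths in one pod.
    func demoTwoProjectedMounts(cs kubernetes.Interface, ns, secretName string) (*corev1.Pod, error) {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-two-volumes"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{
                    projectedSecretVolume("secret-vol-1", secretName),
                    projectedSecretVolume("secret-vol-2", secretName),
                },
                Containers: []corev1.Container{{
                    Name:    "secret-volume-test",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "ls /etc/secret-1 /etc/secret-2"},
                    VolumeMounts: []corev1.VolumeMount{
                        {Name: "secret-vol-1", MountPath: "/etc/secret-1", ReadOnly: true},
                        {Name: "secret-vol-2", MountPath: "/etc/secret-2", ReadOnly: true},
                    },
                }},
            },
        }
        return cs.CoreV1().Pods(ns).Create(pod)
    }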
SSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:31:00.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 21 13:31:09.344: INFO: Successfully updated pod "annotationupdate1ac62624-4643-4a81-b96e-fcea54566cb1"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:31:13.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7261" for this suite.
Dec 21 13:31:35.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:31:35.599: INFO: namespace downward-api-7261 deletion completed in 22.138587189s

• [SLOW TEST:35.084 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
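The "Successfully updated pod" line above is the interesting half of this test: a downwardAPI volume file that projects metadata.annotations is rewritten by the kubelet after the pod's annotations change, without restarting the container. A sketch of the two pieces, assuming pre-1.18 client-go signatures; the volume name, file path and annotation key are illustrative.

    package sketches

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // annotationsVolume exposes the pod's annotations as a file. FieldPath
    // "metadata.annotations" is valid for downward API volumes (not env vars).
    func annotationsVolume() corev1.Volume {
        return corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                DownwardAPI: &corev1.DownwardAPIVolumeSource{
                    Items: []corev1.DownwardAPIVolumeFile{{
                        Path:     "annotations",
                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
                    }},
                },
            },
        }
    }

    // demoAnnotationUpdate mutates an annotation on the running pod; the
    // kubelet refreshes the mounted file on a later sync, which is what the
    // test polls for.
    func demoAnnotationUpdate(cs kubernetes.Interface, ns, podName string) error {
        pod, err := cs.CoreV1().Pods(ns).Get(podName, metav1.GetOptions{})
        if err != nil {
            return err
        }
        if pod.Annotations == nil {
            pod.Annotations = map[string]string{}
        }
        pod.Annotations["builder"] = "foo" // new value should appear in the file
        _, err = cs.CoreV1().Pods(ns).Update(pod)
        return err
    }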
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:31:35.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-06aa4ca1-84e7-42fe-bdaf-e47b36c9c0ac
STEP: Creating a pod to test consume configMaps
Dec 21 13:31:35.738: INFO: Waiting up to 5m0s for pod "pod-configmaps-30262266-0e1e-4581-b4ea-61f7ec685823" in namespace "configmap-576" to be "success or failure"
Dec 21 13:31:35.757: INFO: Pod "pod-configmaps-30262266-0e1e-4581-b4ea-61f7ec685823": Phase="Pending", Reason="", readiness=false. Elapsed: 18.493933ms
Dec 21 13:31:37.765: INFO: Pod "pod-configmaps-30262266-0e1e-4581-b4ea-61f7ec685823": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02684791s
Dec 21 13:31:39.781: INFO: Pod "pod-configmaps-30262266-0e1e-4581-b4ea-61f7ec685823": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042105369s
Dec 21 13:31:41.807: INFO: Pod "pod-configmaps-30262266-0e1e-4581-b4ea-61f7ec685823": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067995277s
Dec 21 13:31:43.821: INFO: Pod "pod-configmaps-30262266-0e1e-4581-b4ea-61f7ec685823": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.082520706s
STEP: Saw pod success
Dec 21 13:31:43.821: INFO: Pod "pod-configmaps-30262266-0e1e-4581-b4ea-61f7ec685823" satisfied condition "success or failure"
Dec 21 13:31:43.829: INFO: Trying to get logs from node iruya-node pod pod-configmaps-30262266-0e1e-4581-b4ea-61f7ec685823 container configmap-volume-test: 
STEP: delete the pod
Dec 21 13:31:43.941: INFO: Waiting for pod pod-configmaps-30262266-0e1e-4581-b4ea-61f7ec685823 to disappear
Dec 21 13:31:43.947: INFO: Pod pod-configmaps-30262266-0e1e-4581-b4ea-61f7ec685823 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:31:43.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-576" for this suite.
Dec 21 13:31:50.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:31:50.107: INFO: namespace configmap-576 deletion completed in 6.128799995s

• [SLOW TEST:14.508 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
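Same pattern as the projected-secret case above, but with a plain configMap volume source; sketched here end to end (create the configMap, mount it twice, read the key through both paths), again with pre-1.18 client-go signatures and illustrative names.

    package sketches

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // configMapVolume builds one volume backed by the named configMap.
    func configMapVolume(volName, cmName string) corev1.Volume {
        return corev1.Volume{
            Name: volName,
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
                },
            },
        }
    }

    // demoConfigMapTwoMounts mounts one configMap at two paths in the same pod.
    func demoConfigMapTwoMounts(cs kubernetes.Interface, ns string) error {
        cm := &corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "configmap-demo"},
            Data:       map[string]string{"data-1": "value-1"},
        }
        if _, err := cs.CoreV1().ConfigMaps(ns).Create(cm); err != nil {
            return err
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{
                    configMapVolume("cm-vol-1", cm.Name),
                    configMapVolume("cm-vol-2", cm.Name),
                },
                Containers: []corev1.Container{{
                    Name:    "configmap-volume-test",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "cat /etc/cm-1/data-1 /etc/cm-2/data-1"},
                    VolumeMounts: []corev1.VolumeMount{
                        {Name: "cm-vol-1", MountPath: "/etc/cm-1", ReadOnly: true},
                        {Name: "cm-vol-2", MountPath: "/etc/cm-2", ReadOnly: true},
                    },
                }},
            },
        }
        _, err := cs.CoreV1().Pods(ns).Create(pod)
        return err
    }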
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:31:50.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-5763ea8a-9c6b-4c45-b483-9826ef01723a
STEP: Creating a pod to test consume secrets
Dec 21 13:31:50.245: INFO: Waiting up to 5m0s for pod "pod-secrets-bc564e78-00ad-46ea-a989-6860cb17b712" in namespace "secrets-185" to be "success or failure"
Dec 21 13:31:50.256: INFO: Pod "pod-secrets-bc564e78-00ad-46ea-a989-6860cb17b712": Phase="Pending", Reason="", readiness=false. Elapsed: 11.604852ms
Dec 21 13:31:52.265: INFO: Pod "pod-secrets-bc564e78-00ad-46ea-a989-6860cb17b712": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02014374s
Dec 21 13:31:54.272: INFO: Pod "pod-secrets-bc564e78-00ad-46ea-a989-6860cb17b712": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026959268s
Dec 21 13:31:56.288: INFO: Pod "pod-secrets-bc564e78-00ad-46ea-a989-6860cb17b712": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043528033s
Dec 21 13:31:58.297: INFO: Pod "pod-secrets-bc564e78-00ad-46ea-a989-6860cb17b712": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052024383s
Dec 21 13:32:00.306: INFO: Pod "pod-secrets-bc564e78-00ad-46ea-a989-6860cb17b712": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.061360473s
STEP: Saw pod success
Dec 21 13:32:00.306: INFO: Pod "pod-secrets-bc564e78-00ad-46ea-a989-6860cb17b712" satisfied condition "success or failure"
Dec 21 13:32:00.313: INFO: Trying to get logs from node iruya-node pod pod-secrets-bc564e78-00ad-46ea-a989-6860cb17b712 container secret-volume-test: 
STEP: delete the pod
Dec 21 13:32:00.393: INFO: Waiting for pod pod-secrets-bc564e78-00ad-46ea-a989-6860cb17b712 to disappear
Dec 21 13:32:00.462: INFO: Pod pod-secrets-bc564e78-00ad-46ea-a989-6860cb17b712 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:32:00.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-185" for this suite.
Dec 21 13:32:06.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:32:06.713: INFO: namespace secrets-185 deletion completed in 6.237260261s

• [SLOW TEST:16.605 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
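The "success or failure" pattern repeated throughout this log is: create a secret, run a pod whose container cats the mounted key and then exits, wait for phase Succeeded, and verify the expected content via the pod's logs. A compact sketch of that loop minus the polling, assuming pre-1.18 client-go signatures; names, key and value are illustrative.

    package sketches

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // demoSecretVolume creates a secret and a pod that prints the mounted key,
    // then reads the pod's logs to check the content (call this only after the
    // pod has reached phase Succeeded).
    func demoSecretVolume(cs kubernetes.Interface, ns string) (string, error) {
        sec := &corev1.Secret{
            ObjectMeta: metav1.ObjectMeta{Name: "secret-demo"},
            Data:       map[string][]byte{"data-1": []byte("value-1")},
        }
        if _, err := cs.CoreV1().Secrets(ns).Create(sec); err != nil {
            return "", err
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{SecretName: sec.Name},
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "secret-volume-test",
                    Image:   "busybox",
                    Command: []string{"cat", "/etc/secret-volume/data-1"},
                    VolumeMounts: []corev1.VolumeMount{
                        {Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true},
                    },
                }},
            },
        }
        if _, err := cs.CoreV1().Pods(ns).Create(pod); err != nil {
            return "", err
        }
        raw, err := cs.CoreV1().Pods(ns).GetLogs(pod.Name, &corev1.PodLogOptions{}).DoRaw()
        return string(raw), err // expected stdout: "value-1"
    }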
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:32:06.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-c1335895-b345-4410-89cd-e277c8188336
STEP: Creating a pod to test consume secrets
Dec 21 13:32:06.921: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c3d50d50-b23a-434f-baf6-35c20c779108" in namespace "projected-4974" to be "success or failure"
Dec 21 13:32:06.966: INFO: Pod "pod-projected-secrets-c3d50d50-b23a-434f-baf6-35c20c779108": Phase="Pending", Reason="", readiness=false. Elapsed: 45.21723ms
Dec 21 13:32:08.979: INFO: Pod "pod-projected-secrets-c3d50d50-b23a-434f-baf6-35c20c779108": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058250809s
Dec 21 13:32:10.987: INFO: Pod "pod-projected-secrets-c3d50d50-b23a-434f-baf6-35c20c779108": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066352162s
Dec 21 13:32:12.998: INFO: Pod "pod-projected-secrets-c3d50d50-b23a-434f-baf6-35c20c779108": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077691905s
Dec 21 13:32:15.004: INFO: Pod "pod-projected-secrets-c3d50d50-b23a-434f-baf6-35c20c779108": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.083642454s
STEP: Saw pod success
Dec 21 13:32:15.004: INFO: Pod "pod-projected-secrets-c3d50d50-b23a-434f-baf6-35c20c779108" satisfied condition "success or failure"
Dec 21 13:32:15.008: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-c3d50d50-b23a-434f-baf6-35c20c779108 container projected-secret-volume-test: 
STEP: delete the pod
Dec 21 13:32:15.062: INFO: Waiting for pod pod-projected-secrets-c3d50d50-b23a-434f-baf6-35c20c779108 to disappear
Dec 21 13:32:15.070: INFO: Pod pod-projected-secrets-c3d50d50-b23a-434f-baf6-35c20c779108 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:32:15.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4974" for this suite.
Dec 21 13:32:21.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:32:21.206: INFO: namespace projected-4974 deletion completed in 6.132707883s

• [SLOW TEST:14.494 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
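A note on modes: the pod dumps earlier in this log print DefaultMode:*420, which is the decimal rendering of octal 0644, the default for secret volumes. The defaultMode test mounts the projected secret with a stricter mode instead (the upstream test uses 0400). A sketch of just the volume, with illustrative names:

    package sketches

    import corev1 "k8s.io/api/core/v1"

    // projectedSecretWithMode mounts a secret through a projected volume with
    // defaultMode 0400, so each projected file ends up as -r-------- unless an
    // individual item overrides it.
    func projectedSecretWithMode(secretName string) corev1.Volume {
        mode := int32(0400) // printed as 256 in decimal dumps, just as 0644 prints as 420
        return corev1.Volume{
            Name: "projected-secret-volume",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    DefaultMode: &mode,
                    Sources: []corev1.VolumeProjection{{
                        Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
                        },
                    }},
                },
            },
        }
    }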
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:32:21.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Dec 21 13:32:21.288: INFO: Waiting up to 5m0s for pod "var-expansion-80912f0b-9807-4947-8657-7cdc9f39775a" in namespace "var-expansion-9380" to be "success or failure"
Dec 21 13:32:21.329: INFO: Pod "var-expansion-80912f0b-9807-4947-8657-7cdc9f39775a": Phase="Pending", Reason="", readiness=false. Elapsed: 40.412054ms
Dec 21 13:32:23.340: INFO: Pod "var-expansion-80912f0b-9807-4947-8657-7cdc9f39775a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051863487s
Dec 21 13:32:25.348: INFO: Pod "var-expansion-80912f0b-9807-4947-8657-7cdc9f39775a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059525494s
Dec 21 13:32:27.356: INFO: Pod "var-expansion-80912f0b-9807-4947-8657-7cdc9f39775a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067190052s
Dec 21 13:32:29.362: INFO: Pod "var-expansion-80912f0b-9807-4947-8657-7cdc9f39775a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.073029321s
STEP: Saw pod success
Dec 21 13:32:29.362: INFO: Pod "var-expansion-80912f0b-9807-4947-8657-7cdc9f39775a" satisfied condition "success or failure"
Dec 21 13:32:29.365: INFO: Trying to get logs from node iruya-node pod var-expansion-80912f0b-9807-4947-8657-7cdc9f39775a container dapi-container: 
STEP: delete the pod
Dec 21 13:32:29.515: INFO: Waiting for pod var-expansion-80912f0b-9807-4947-8657-7cdc9f39775a to disappear
Dec 21 13:32:29.555: INFO: Pod var-expansion-80912f0b-9807-4947-8657-7cdc9f39775a no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:32:29.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9380" for this suite.
Dec 21 13:32:35.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:32:35.747: INFO: namespace var-expansion-9380 deletion completed in 6.15461829s

• [SLOW TEST:14.540 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
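The substitution being tested is Kubernetes' own $(VAR) expansion in command and args: the kubelet resolves references to env vars declared earlier in the container spec before starting the process, with no shell involved ($$ escapes a literal $). A sketch of such a container, with illustrative names and values:

    package sketches

    import corev1 "k8s.io/api/core/v1"

    // argExpansionContainer demonstrates $(VAR) substitution in args; the
    // kubelet rewrites the args to "echo test-value" before the container runs.
    func argExpansionContainer() corev1.Container {
        return corev1.Container{
            Name:    "dapi-container",
            Image:   "busybox",
            Env:     []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
            Command: []string{"sh", "-c"},
            Args:    []string{"echo $(TEST_VAR)"},
        }
    }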
SSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:32:35.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 21 13:32:35.834: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 10.771862ms)
Dec 21 13:32:35.885: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 51.383611ms)
Dec 21 13:32:35.909: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 23.528378ms)
Dec 21 13:32:35.918: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.115942ms)
Dec 21 13:32:35.929: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 11.136597ms)
Dec 21 13:32:35.944: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 14.743774ms)
Dec 21 13:32:35.952: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.844573ms)
Dec 21 13:32:35.959: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.646477ms)
Dec 21 13:32:35.969: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 10.719608ms)
Dec 21 13:32:35.985: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 15.453559ms)
Dec 21 13:32:35.995: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 10.414633ms)
Dec 21 13:32:36.001: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.895085ms)
Dec 21 13:32:36.006: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.820644ms)
Dec 21 13:32:36.013: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.154305ms)
Dec 21 13:32:36.020: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.496191ms)
Dec 21 13:32:36.035: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 14.936326ms)
Dec 21 13:32:36.044: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.83837ms)
Dec 21 13:32:36.048: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.568086ms)
Dec 21 13:32:36.056: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.262979ms)
Dec 21 13:32:36.062: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.289207ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:32:36.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-1336" for this suite.
Dec 21 13:32:42.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:32:42.312: INFO: namespace proxy-1336 deletion completed in 6.246643377s

• [SLOW TEST:6.565 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
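
Each numbered record above is one of 20 repeated GETs the spec issues against the node proxy subresource, logging the HTTP status and latency; the response body is the kubelet's /logs/ directory listing, truncated by the test to a short prefix. The same endpoint can be queried by hand (node name taken from this run):

  # Node log listing through the API server's node proxy subresource,
  # with the kubelet port pinned explicitly:
  kubectl get --raw "/api/v1/nodes/iruya-node:10250/proxy/logs/"
  # Equivalent form without the explicit port:
  kubectl get --raw "/api/v1/nodes/iruya-node/proxy/logs/"
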
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:32:42.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 21 13:32:42.374: INFO: Waiting up to 5m0s for pod "downwardapi-volume-58c150a2-2b2c-42b4-bd4f-71b75e381877" in namespace "downward-api-4738" to be "success or failure"
Dec 21 13:32:42.437: INFO: Pod "downwardapi-volume-58c150a2-2b2c-42b4-bd4f-71b75e381877": Phase="Pending", Reason="", readiness=false. Elapsed: 62.782308ms
Dec 21 13:32:44.445: INFO: Pod "downwardapi-volume-58c150a2-2b2c-42b4-bd4f-71b75e381877": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070765266s
Dec 21 13:32:46.460: INFO: Pod "downwardapi-volume-58c150a2-2b2c-42b4-bd4f-71b75e381877": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085406299s
Dec 21 13:32:48.469: INFO: Pod "downwardapi-volume-58c150a2-2b2c-42b4-bd4f-71b75e381877": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093990339s
Dec 21 13:32:50.478: INFO: Pod "downwardapi-volume-58c150a2-2b2c-42b4-bd4f-71b75e381877": Phase="Pending", Reason="", readiness=false. Elapsed: 8.103212326s
Dec 21 13:32:52.490: INFO: Pod "downwardapi-volume-58c150a2-2b2c-42b4-bd4f-71b75e381877": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.115878804s
STEP: Saw pod success
Dec 21 13:32:52.491: INFO: Pod "downwardapi-volume-58c150a2-2b2c-42b4-bd4f-71b75e381877" satisfied condition "success or failure"
Dec 21 13:32:52.496: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-58c150a2-2b2c-42b4-bd4f-71b75e381877 container client-container: 
STEP: delete the pod
Dec 21 13:32:52.593: INFO: Waiting for pod downwardapi-volume-58c150a2-2b2c-42b4-bd4f-71b75e381877 to disappear
Dec 21 13:32:52.600: INFO: Pod downwardapi-volume-58c150a2-2b2c-42b4-bd4f-71b75e381877 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:32:52.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4738" for this suite.
Dec 21 13:32:58.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:32:58.780: INFO: namespace downward-api-4738 deletion completed in 6.173395385s

• [SLOW TEST:16.468 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
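
The downward API volume plugin materializes a container's resource limits as files. A minimal sketch of what this spec verifies (illustrative names; the divisor controls the reported unit):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-mem-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
      resources:
        limits:
          memory: 64Mi
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: mem_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.memory
            divisor: 1Mi   # report the limit in MiB
  EOF
  kubectl logs downward-mem-demo   # expect: 64
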
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:32:58.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-49bg
STEP: Creating a pod to test atomic-volume-subpath
Dec 21 13:32:58.965: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-49bg" in namespace "subpath-568" to be "success or failure"
Dec 21 13:32:58.978: INFO: Pod "pod-subpath-test-projected-49bg": Phase="Pending", Reason="", readiness=false. Elapsed: 13.138642ms
Dec 21 13:33:00.991: INFO: Pod "pod-subpath-test-projected-49bg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026070568s
Dec 21 13:33:03.000: INFO: Pod "pod-subpath-test-projected-49bg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035392465s
Dec 21 13:33:05.006: INFO: Pod "pod-subpath-test-projected-49bg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040823312s
Dec 21 13:33:07.013: INFO: Pod "pod-subpath-test-projected-49bg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047891411s
Dec 21 13:33:09.020: INFO: Pod "pod-subpath-test-projected-49bg": Phase="Running", Reason="", readiness=true. Elapsed: 10.054829023s
Dec 21 13:33:11.027: INFO: Pod "pod-subpath-test-projected-49bg": Phase="Running", Reason="", readiness=true. Elapsed: 12.062142776s
Dec 21 13:33:13.035: INFO: Pod "pod-subpath-test-projected-49bg": Phase="Running", Reason="", readiness=true. Elapsed: 14.069857109s
Dec 21 13:33:15.041: INFO: Pod "pod-subpath-test-projected-49bg": Phase="Running", Reason="", readiness=true. Elapsed: 16.076524225s
Dec 21 13:33:17.052: INFO: Pod "pod-subpath-test-projected-49bg": Phase="Running", Reason="", readiness=true. Elapsed: 18.08669714s
Dec 21 13:33:19.059: INFO: Pod "pod-subpath-test-projected-49bg": Phase="Running", Reason="", readiness=true. Elapsed: 20.094449506s
Dec 21 13:33:21.066: INFO: Pod "pod-subpath-test-projected-49bg": Phase="Running", Reason="", readiness=true. Elapsed: 22.101149465s
Dec 21 13:33:23.075: INFO: Pod "pod-subpath-test-projected-49bg": Phase="Running", Reason="", readiness=true. Elapsed: 24.109574227s
Dec 21 13:33:25.081: INFO: Pod "pod-subpath-test-projected-49bg": Phase="Running", Reason="", readiness=true. Elapsed: 26.116153771s
Dec 21 13:33:27.087: INFO: Pod "pod-subpath-test-projected-49bg": Phase="Running", Reason="", readiness=true. Elapsed: 28.121754564s
Dec 21 13:33:29.094: INFO: Pod "pod-subpath-test-projected-49bg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.129439374s
STEP: Saw pod success
Dec 21 13:33:29.094: INFO: Pod "pod-subpath-test-projected-49bg" satisfied condition "success or failure"
Dec 21 13:33:29.100: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-49bg container test-container-subpath-projected-49bg: 
STEP: delete the pod
Dec 21 13:33:29.387: INFO: Waiting for pod pod-subpath-test-projected-49bg to disappear
Dec 21 13:33:29.428: INFO: Pod pod-subpath-test-projected-49bg no longer exists
STEP: Deleting pod pod-subpath-test-projected-49bg
Dec 21 13:33:29.428: INFO: Deleting pod "pod-subpath-test-projected-49bg" in namespace "subpath-568"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:33:29.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-568" for this suite.
Dec 21 13:33:35.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:33:35.647: INFO: namespace subpath-568 deletion completed in 6.190725951s

• [SLOW TEST:36.867 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
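
The spec keeps its pod Running for ~20 s because the container spends that time polling a file mounted via subPath out of a projected (atomic-writer) volume. A cut-down sketch of the mount arrangement, without the polling loop (illustrative names):

  kubectl create configmap subpath-cm --from-literal=file.txt=hello
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: subpath-projected-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "cat /probe/file.txt"]
      volumeMounts:
      - name: proj
        mountPath: /probe/file.txt
        subPath: file.txt   # mount a single file out of the volume
    volumes:
    - name: proj
      projected:
        sources:
        - configMap:
            name: subpath-cm
  EOF
  kubectl logs subpath-projected-demo   # expect: hello
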
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:33:35.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 21 13:33:35.722: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:33:51.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3653" for this suite.
Dec 21 13:33:57.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:33:57.454: INFO: namespace init-container-3653 deletion completed in 6.164714609s

• [SLOW TEST:21.805 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
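
With restartPolicy: Never, a failing init container is terminal: the pod goes straight to Failed and the app containers are never started. A sketch with illustrative names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: init-fail-demo
  spec:
    restartPolicy: Never
    initContainers:
    - name: init1
      image: busybox
      command: ["sh", "-c", "exit 1"]   # always fails
    containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo should-not-run"]
  EOF
  kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'   # eventually: Failed
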
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:33:57.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-3337
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 21 13:33:57.554: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 21 13:34:31.761: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3337 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 21 13:34:31.761: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 13:34:32.233: INFO: Found all expected endpoints: [netserver-0]
Dec 21 13:34:32.245: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3337 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 21 13:34:32.245: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 13:34:32.706: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:34:32.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3337" for this suite.
Dec 21 13:34:56.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:34:56.905: INFO: namespace pod-network-test-3337 deletion completed in 24.185308668s

• [SLOW TEST:59.451 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
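
The check curls each netserver pod's /hostName endpoint from a host-network helper pod, proving node-to-pod HTTP connectivity. The exec lines in the log can be replayed by hand while the test namespace still exists (it is destroyed when the spec ends); pod and namespace names below are the ones from this run:

  POD_IP=$(kubectl -n pod-network-test-3337 get pod netserver-0 \
    -o jsonpath='{.status.podIP}')
  kubectl -n pod-network-test-3337 exec host-test-container-pod -- \
    curl -g -q -s --max-time 15 --connect-timeout 1 "http://${POD_IP}:8080/hostName"
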
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:34:56.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:34:57.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2503" for this suite.
Dec 21 13:35:03.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:35:03.455: INFO: namespace kubelet-test-2503 deletion completed in 6.134730666s

• [SLOW TEST:6.548 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
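
The spec schedules a pod whose command always exits non-zero, so it crash-loops under the default restartPolicy: Always, and then verifies that deleting it still works. Sketch:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: bin-false-demo
  spec:
    containers:
    - name: bin-false
      image: busybox
      command: ["/bin/false"]   # exits 1 on every restart
  EOF
  # The pod never becomes ready, but deletion must still succeed:
  kubectl delete pod bin-false-demo
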
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:35:03.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 21 13:35:03.557: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2a076042-a3fe-4977-aeed-ce23a2051766" in namespace "downward-api-3634" to be "success or failure"
Dec 21 13:35:03.563: INFO: Pod "downwardapi-volume-2a076042-a3fe-4977-aeed-ce23a2051766": Phase="Pending", Reason="", readiness=false. Elapsed: 5.641014ms
Dec 21 13:35:05.570: INFO: Pod "downwardapi-volume-2a076042-a3fe-4977-aeed-ce23a2051766": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013321271s
Dec 21 13:35:07.578: INFO: Pod "downwardapi-volume-2a076042-a3fe-4977-aeed-ce23a2051766": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021341012s
Dec 21 13:35:09.586: INFO: Pod "downwardapi-volume-2a076042-a3fe-4977-aeed-ce23a2051766": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028570244s
Dec 21 13:35:11.742: INFO: Pod "downwardapi-volume-2a076042-a3fe-4977-aeed-ce23a2051766": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.185300453s
STEP: Saw pod success
Dec 21 13:35:11.742: INFO: Pod "downwardapi-volume-2a076042-a3fe-4977-aeed-ce23a2051766" satisfied condition "success or failure"
Dec 21 13:35:12.230: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-2a076042-a3fe-4977-aeed-ce23a2051766 container client-container: 
STEP: delete the pod
Dec 21 13:35:12.399: INFO: Waiting for pod downwardapi-volume-2a076042-a3fe-4977-aeed-ce23a2051766 to disappear
Dec 21 13:35:12.411: INFO: Pod downwardapi-volume-2a076042-a3fe-4977-aeed-ce23a2051766 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:35:12.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3634" for this suite.
Dec 21 13:35:18.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:35:18.559: INFO: namespace downward-api-3634 deletion completed in 6.142459392s

• [SLOW TEST:15.104 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
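
Same plugin as the memory-limit spec above, pointed at limits.cpu; with a divisor of 1m the file reports the limit in millicores:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-cpu-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
      resources:
        limits:
          cpu: 500m
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: cpu_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.cpu
            divisor: 1m   # report the limit in millicores
  EOF
  kubectl logs downward-cpu-demo   # expect: 500
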
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:35:18.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-d0731cee-f11a-48ba-b4e2-077fc309d185
STEP: Creating a pod to test consume configMaps
Dec 21 13:35:18.717: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8e5a1d9c-c995-4a9d-b601-e174101d22fa" in namespace "projected-6631" to be "success or failure"
Dec 21 13:35:18.731: INFO: Pod "pod-projected-configmaps-8e5a1d9c-c995-4a9d-b601-e174101d22fa": Phase="Pending", Reason="", readiness=false. Elapsed: 14.229429ms
Dec 21 13:35:20.746: INFO: Pod "pod-projected-configmaps-8e5a1d9c-c995-4a9d-b601-e174101d22fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028567184s
Dec 21 13:35:22.759: INFO: Pod "pod-projected-configmaps-8e5a1d9c-c995-4a9d-b601-e174101d22fa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041806853s
Dec 21 13:35:24.773: INFO: Pod "pod-projected-configmaps-8e5a1d9c-c995-4a9d-b601-e174101d22fa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055620987s
Dec 21 13:35:26.786: INFO: Pod "pod-projected-configmaps-8e5a1d9c-c995-4a9d-b601-e174101d22fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.0685475s
STEP: Saw pod success
Dec 21 13:35:26.786: INFO: Pod "pod-projected-configmaps-8e5a1d9c-c995-4a9d-b601-e174101d22fa" satisfied condition "success or failure"
Dec 21 13:35:26.789: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-8e5a1d9c-c995-4a9d-b601-e174101d22fa container projected-configmap-volume-test: 
STEP: delete the pod
Dec 21 13:35:26.925: INFO: Waiting for pod pod-projected-configmaps-8e5a1d9c-c995-4a9d-b601-e174101d22fa to disappear
Dec 21 13:35:26.932: INFO: Pod pod-projected-configmaps-8e5a1d9c-c995-4a9d-b601-e174101d22fa no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:35:26.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6631" for this suite.
Dec 21 13:35:33.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:35:33.192: INFO: namespace projected-6631 deletion completed in 6.253441833s

• [SLOW TEST:14.633 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
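
With items, individual ConfigMap keys are projected to chosen paths, and a per-item mode overrides the volume's defaultMode for that file. Sketch (illustrative names):

  kubectl create configmap demo-config --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-cm-mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: projected-configmap-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/projected/mapped-data-1"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/projected
    volumes:
    - name: cfg
      projected:
        sources:
        - configMap:
            name: demo-config
            items:
            - key: data-1
              path: mapped-data-1
              mode: 0400   # per-item file mode
  EOF
  kubectl logs projected-cm-mode-demo   # expect: value-1
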
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:35:33.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-e8a702b3-f909-4734-8064-7436291f1a98
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-e8a702b3-f909-4734-8064-7436291f1a98
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:35:43.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-858" for this suite.
Dec 21 13:36:07.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:36:08.038: INFO: namespace projected-858 deletion completed in 24.166082179s

• [SLOW TEST:34.845 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
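
ConfigMap-backed and projected volumes are refreshed in place by the kubelet on its sync loop, so an update to the ConfigMap object eventually shows up inside running pods (subPath mounts are the exception and do not refresh). Sketch:

  kubectl create configmap live-config --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: live-config-pod
  spec:
    containers:
    - name: watcher
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/projected
    volumes:
    - name: cfg
      projected:
        sources:
        - configMap:
            name: live-config
  EOF
  kubectl patch configmap live-config -p '{"data":{"data-1":"value-2"}}'
  # Within roughly a kubelet sync period:
  kubectl exec live-config-pod -- cat /etc/projected/data-1   # eventually: value-2
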
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:36:08.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 21 13:36:16.753: INFO: Successfully updated pod "labelsupdate925dea01-d46f-4fda-84e0-cb9a240364bb"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:36:18.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4525" for this suite.
Dec 21 13:36:53.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:36:53.741: INFO: namespace downward-api-4525 deletion completed in 34.858227878s

• [SLOW TEST:45.703 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
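
metadata.labels exposed through a downward API volume is live: relabeling the pod rewrites the file on the kubelet's next sync, which is what the "Successfully updated pod" line above triggers. Sketch:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: labelsupdate-demo
    labels:
      foo: bar
  spec:
    containers:
    - name: client-container
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: labels
          fieldRef:
            fieldPath: metadata.labels
  EOF
  kubectl label pod labelsupdate-demo foo=baz --overwrite
  # After the kubelet's next sync:
  kubectl exec labelsupdate-demo -- cat /etc/podinfo/labels   # shows foo="baz"
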
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:36:53.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-9f9507ab-23aa-4f43-9691-cd5a6ed029a0
STEP: Creating a pod to test consume secrets
Dec 21 13:36:53.886: INFO: Waiting up to 5m0s for pod "pod-secrets-6761a8f2-38cf-4ad9-a83b-58c547ab9088" in namespace "secrets-8072" to be "success or failure"
Dec 21 13:36:54.020: INFO: Pod "pod-secrets-6761a8f2-38cf-4ad9-a83b-58c547ab9088": Phase="Pending", Reason="", readiness=false. Elapsed: 134.091202ms
Dec 21 13:36:56.030: INFO: Pod "pod-secrets-6761a8f2-38cf-4ad9-a83b-58c547ab9088": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144482868s
Dec 21 13:36:58.036: INFO: Pod "pod-secrets-6761a8f2-38cf-4ad9-a83b-58c547ab9088": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149724466s
Dec 21 13:37:00.045: INFO: Pod "pod-secrets-6761a8f2-38cf-4ad9-a83b-58c547ab9088": Phase="Pending", Reason="", readiness=false. Elapsed: 6.159661477s
Dec 21 13:37:02.073: INFO: Pod "pod-secrets-6761a8f2-38cf-4ad9-a83b-58c547ab9088": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.18754055s
STEP: Saw pod success
Dec 21 13:37:02.073: INFO: Pod "pod-secrets-6761a8f2-38cf-4ad9-a83b-58c547ab9088" satisfied condition "success or failure"
Dec 21 13:37:02.079: INFO: Trying to get logs from node iruya-node pod pod-secrets-6761a8f2-38cf-4ad9-a83b-58c547ab9088 container secret-volume-test: 
STEP: delete the pod
Dec 21 13:37:02.212: INFO: Waiting for pod pod-secrets-6761a8f2-38cf-4ad9-a83b-58c547ab9088 to disappear
Dec 21 13:37:02.223: INFO: Pod pod-secrets-6761a8f2-38cf-4ad9-a83b-58c547ab9088 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:37:02.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8072" for this suite.
Dec 21 13:37:08.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:37:08.408: INFO: namespace secrets-8072 deletion completed in 6.176833244s

• [SLOW TEST:14.666 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
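
Here the pod runs as a non-root UID, the secret files are group-owned via fsGroup, and defaultMode restricts their permissions; together they prove a non-root process can still read the mounted secret. Sketch (illustrative names):

  kubectl create secret generic demo-secret --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-mode-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000   # non-root
      fsGroup: 2000     # secret files get this group
    containers:
    - name: secret-volume-test
      image: busybox
      command: ["sh", "-c", "ls -ln /etc/secret-volume && cat /etc/secret-volume/data-1"]
      volumeMounts:
      - name: secret-vol
        mountPath: /etc/secret-volume
    volumes:
    - name: secret-vol
      secret:
        secretName: demo-secret
        defaultMode: 0440   # readable by owner and fsGroup only
  EOF
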
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:37:08.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Dec 21 13:37:09.182: INFO: Pod name wrapped-volume-race-a70b8e0f-bc9c-4ace-b396-ff61b1998cb4: Found 0 pods out of 5
Dec 21 13:37:14.194: INFO: Pod name wrapped-volume-race-a70b8e0f-bc9c-4ace-b396-ff61b1998cb4: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-a70b8e0f-bc9c-4ace-b396-ff61b1998cb4 in namespace emptydir-wrapper-6150, will wait for the garbage collector to delete the pods
Dec 21 13:37:40.333: INFO: Deleting ReplicationController wrapped-volume-race-a70b8e0f-bc9c-4ace-b396-ff61b1998cb4 took: 13.509567ms
Dec 21 13:37:40.733: INFO: Terminating ReplicationController wrapped-volume-race-a70b8e0f-bc9c-4ace-b396-ff61b1998cb4 pods took: 400.414026ms
STEP: Creating RC which spawns configmap-volume pods
Dec 21 13:38:27.009: INFO: Pod name wrapped-volume-race-c4456c90-7faa-4295-97f0-a63eb4d45bc4: Found 0 pods out of 5
Dec 21 13:38:32.065: INFO: Pod name wrapped-volume-race-c4456c90-7faa-4295-97f0-a63eb4d45bc4: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-c4456c90-7faa-4295-97f0-a63eb4d45bc4 in namespace emptydir-wrapper-6150, will wait for the garbage collector to delete the pods
Dec 21 13:39:00.156: INFO: Deleting ReplicationController wrapped-volume-race-c4456c90-7faa-4295-97f0-a63eb4d45bc4 took: 9.346807ms
Dec 21 13:39:00.557: INFO: Terminating ReplicationController wrapped-volume-race-c4456c90-7faa-4295-97f0-a63eb4d45bc4 pods took: 400.354211ms
STEP: Creating RC which spawns configmap-volume pods
Dec 21 13:39:47.779: INFO: Pod name wrapped-volume-race-34d3cf45-a824-47d4-b7d5-58ff12df0ab3: Found 0 pods out of 5
Dec 21 13:39:52.791: INFO: Pod name wrapped-volume-race-34d3cf45-a824-47d4-b7d5-58ff12df0ab3: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-34d3cf45-a824-47d4-b7d5-58ff12df0ab3 in namespace emptydir-wrapper-6150, will wait for the garbage collector to delete the pods
Dec 21 13:40:24.894: INFO: Deleting ReplicationController wrapped-volume-race-34d3cf45-a824-47d4-b7d5-58ff12df0ab3 took: 14.497159ms
Dec 21 13:40:25.295: INFO: Terminating ReplicationController wrapped-volume-race-34d3cf45-a824-47d4-b7d5-58ff12df0ab3 pods took: 400.410917ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:41:17.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-6150" for this suite.
Dec 21 13:41:27.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:41:27.893: INFO: namespace emptydir-wrapper-6150 deletion completed in 10.159747811s

• [SLOW TEST:259.485 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
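
The race being probed: secret/configMap volumes are built inside an implicit emptyDir "wrapper", and the historical bug this guards against involved that wrapper misbehaving when many such volumes landed on one node at once. The spec therefore churns an RC of 5 pods, each mounting all 50 ConfigMaps, three times over. A scaled-down sketch of the pod shape (3 ConfigMaps instead of 50, names illustrative):

  for i in 0 1 2; do
    kubectl create configmap race-cm-$i --from-literal=data=value-$i
  done
  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: wrapped-volume-race-demo
  spec:
    replicas: 5
    selector:
      name: wrapped-volume-race-demo
    template:
      metadata:
        labels:
          name: wrapped-volume-race-demo
      spec:
        containers:
        - name: test-container
          image: busybox
          command: ["sleep", "3600"]
          volumeMounts:
          - { name: race-vol-0, mountPath: /etc/race-0 }
          - { name: race-vol-1, mountPath: /etc/race-1 }
          - { name: race-vol-2, mountPath: /etc/race-2 }
        volumes:
        - { name: race-vol-0, configMap: { name: race-cm-0 } }
        - { name: race-vol-1, configMap: { name: race-cm-1 } }
        - { name: race-vol-2, configMap: { name: race-cm-2 } }
  EOF
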
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:41:27.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:41:41.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-89" for this suite.
Dec 21 13:42:03.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:42:03.308: INFO: namespace replication-controller-89 deletion completed in 22.122568541s

• [SLOW TEST:35.415 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
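
Adoption works through label selection: a pre-existing pod whose labels match a new RC's selector is counted toward the RC's replicas and gains an ownerReference to it, instead of the RC spawning a duplicate. Sketch:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-adoption
    labels:
      name: pod-adoption
  spec:
    containers:
    - name: pod-adoption
      image: busybox
      command: ["sleep", "3600"]
  EOF
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: pod-adoption
  spec:
    replicas: 1
    selector:
      name: pod-adoption
    template:
      metadata:
        labels:
          name: pod-adoption
      spec:
        containers:
        - name: pod-adoption
          image: busybox
          command: ["sleep", "3600"]
  EOF
  kubectl get pod pod-adoption \
    -o jsonpath='{.metadata.ownerReferences[0].name}'   # expect: pod-adoption
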
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:42:03.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 21 13:42:03.447: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cfdcab95-6139-4efc-9e5d-4385d364a726" in namespace "projected-8097" to be "success or failure"
Dec 21 13:42:03.467: INFO: Pod "downwardapi-volume-cfdcab95-6139-4efc-9e5d-4385d364a726": Phase="Pending", Reason="", readiness=false. Elapsed: 20.290834ms
Dec 21 13:42:05.476: INFO: Pod "downwardapi-volume-cfdcab95-6139-4efc-9e5d-4385d364a726": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028677693s
Dec 21 13:42:07.487: INFO: Pod "downwardapi-volume-cfdcab95-6139-4efc-9e5d-4385d364a726": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039846851s
Dec 21 13:42:09.497: INFO: Pod "downwardapi-volume-cfdcab95-6139-4efc-9e5d-4385d364a726": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050292333s
Dec 21 13:42:11.504: INFO: Pod "downwardapi-volume-cfdcab95-6139-4efc-9e5d-4385d364a726": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057120272s
STEP: Saw pod success
Dec 21 13:42:11.504: INFO: Pod "downwardapi-volume-cfdcab95-6139-4efc-9e5d-4385d364a726" satisfied condition "success or failure"
Dec 21 13:42:11.508: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-cfdcab95-6139-4efc-9e5d-4385d364a726 container client-container: 
STEP: delete the pod
Dec 21 13:42:11.565: INFO: Waiting for pod downwardapi-volume-cfdcab95-6139-4efc-9e5d-4385d364a726 to disappear
Dec 21 13:42:11.607: INFO: Pod downwardapi-volume-cfdcab95-6139-4efc-9e5d-4385d364a726 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:42:11.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8097" for this suite.
Dec 21 13:42:17.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:42:17.755: INFO: namespace projected-8097 deletion completed in 6.14383226s

• [SLOW TEST:14.447 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
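
A projected volume can also carry downward API items; this spec exposes only the pod's own name. Sketch (illustrative names):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: podname-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/podname"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: podname
              fieldRef:
                fieldPath: metadata.name
  EOF
  kubectl logs podname-demo   # expect: podname-demo
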
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:42:17.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-r4xp
STEP: Creating a pod to test atomic-volume-subpath
Dec 21 13:42:17.892: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-r4xp" in namespace "subpath-4080" to be "success or failure"
Dec 21 13:42:17.900: INFO: Pod "pod-subpath-test-configmap-r4xp": Phase="Pending", Reason="", readiness=false. Elapsed: 7.516783ms
Dec 21 13:42:19.909: INFO: Pod "pod-subpath-test-configmap-r4xp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0165612s
Dec 21 13:42:21.916: INFO: Pod "pod-subpath-test-configmap-r4xp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023343867s
Dec 21 13:42:23.939: INFO: Pod "pod-subpath-test-configmap-r4xp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046609102s
Dec 21 13:42:25.946: INFO: Pod "pod-subpath-test-configmap-r4xp": Phase="Running", Reason="", readiness=true. Elapsed: 8.053739038s
Dec 21 13:42:27.954: INFO: Pod "pod-subpath-test-configmap-r4xp": Phase="Running", Reason="", readiness=true. Elapsed: 10.061993356s
Dec 21 13:42:29.960: INFO: Pod "pod-subpath-test-configmap-r4xp": Phase="Running", Reason="", readiness=true. Elapsed: 12.067630748s
Dec 21 13:42:31.967: INFO: Pod "pod-subpath-test-configmap-r4xp": Phase="Running", Reason="", readiness=true. Elapsed: 14.075032953s
Dec 21 13:42:33.980: INFO: Pod "pod-subpath-test-configmap-r4xp": Phase="Running", Reason="", readiness=true. Elapsed: 16.087245386s
Dec 21 13:42:35.988: INFO: Pod "pod-subpath-test-configmap-r4xp": Phase="Running", Reason="", readiness=true. Elapsed: 18.095812991s
Dec 21 13:42:37.998: INFO: Pod "pod-subpath-test-configmap-r4xp": Phase="Running", Reason="", readiness=true. Elapsed: 20.105856812s
Dec 21 13:42:40.010: INFO: Pod "pod-subpath-test-configmap-r4xp": Phase="Running", Reason="", readiness=true. Elapsed: 22.117495431s
Dec 21 13:42:42.020: INFO: Pod "pod-subpath-test-configmap-r4xp": Phase="Running", Reason="", readiness=true. Elapsed: 24.127696704s
Dec 21 13:42:44.034: INFO: Pod "pod-subpath-test-configmap-r4xp": Phase="Running", Reason="", readiness=true. Elapsed: 26.141165142s
Dec 21 13:42:46.060: INFO: Pod "pod-subpath-test-configmap-r4xp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.167876182s
STEP: Saw pod success
Dec 21 13:42:46.060: INFO: Pod "pod-subpath-test-configmap-r4xp" satisfied condition "success or failure"
Dec 21 13:42:46.066: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-r4xp container test-container-subpath-configmap-r4xp: 
STEP: delete the pod
Dec 21 13:42:46.142: INFO: Waiting for pod pod-subpath-test-configmap-r4xp to disappear
Dec 21 13:42:46.209: INFO: Pod pod-subpath-test-configmap-r4xp no longer exists
STEP: Deleting pod pod-subpath-test-configmap-r4xp
Dec 21 13:42:46.209: INFO: Deleting pod "pod-subpath-test-configmap-r4xp" in namespace "subpath-4080"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:42:46.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4080" for this suite.
Dec 21 13:42:52.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:42:52.412: INFO: namespace subpath-4080 deletion completed in 6.140926812s

• [SLOW TEST:34.657 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
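
The variation in this spec is that subPath targets a path that already exists as a file in the container image: the kubelet bind-mounts the volume file over it. Sketch, using /etc/passwd purely because the busybox image is guaranteed to contain it:

  kubectl create configmap overlay-cm --from-literal=replacement=mounted-over-existing-file
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: subpath-existing-file-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "cat /etc/passwd"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/passwd   # a file the image already contains
        subPath: replacement
    volumes:
    - name: cfg
      configMap:
        name: overlay-cm
  EOF
  kubectl logs subpath-existing-file-demo   # expect: mounted-over-existing-file
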
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:42:52.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-2f621c61-823a-4005-afcf-5781b6db8e36
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-2f621c61-823a-4005-afcf-5781b6db8e36
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:43:04.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9704" for this suite.
Dec 21 13:43:42.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:43:42.984: INFO: namespace configmap-9704 deletion completed in 38.173395426s

• [SLOW TEST:50.571 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
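
Same refresh behavior as the projected variant earlier, this time through a plain configMap volume; a watch loop in the pod makes the flip visible:

  kubectl create configmap configmap-test-upd --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: cm-update-demo
  spec:
    containers:
    - name: watcher
      image: busybox
      command: ["sh", "-c", "while true; do cat /etc/config/data-1; echo; sleep 5; done"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/config
    volumes:
    - name: cfg
      configMap:
        name: configmap-test-upd
  EOF
  kubectl patch configmap configmap-test-upd -p '{"data":{"data-1":"value-2"}}'
  kubectl logs -f cm-update-demo   # output flips to value-2 after the kubelet sync
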
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:43:42.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Dec 21 13:43:43.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6583'
Dec 21 13:43:45.783: INFO: stderr: ""
Dec 21 13:43:45.783: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 21 13:43:45.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6583'
Dec 21 13:43:46.401: INFO: stderr: ""
Dec 21 13:43:46.401: INFO: stdout: "update-demo-nautilus-5rbkh update-demo-nautilus-8257b "
Dec 21 13:43:46.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5rbkh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6583'
Dec 21 13:43:46.739: INFO: stderr: ""
Dec 21 13:43:46.739: INFO: stdout: ""
Dec 21 13:43:46.739: INFO: update-demo-nautilus-5rbkh is created but not running
Dec 21 13:43:51.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6583'
Dec 21 13:43:52.173: INFO: stderr: ""
Dec 21 13:43:52.173: INFO: stdout: "update-demo-nautilus-5rbkh update-demo-nautilus-8257b "
Dec 21 13:43:52.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5rbkh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6583'
Dec 21 13:43:52.682: INFO: stderr: ""
Dec 21 13:43:52.682: INFO: stdout: ""
Dec 21 13:43:52.682: INFO: update-demo-nautilus-5rbkh is created but not running
Dec 21 13:43:57.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6583'
Dec 21 13:43:57.888: INFO: stderr: ""
Dec 21 13:43:57.888: INFO: stdout: "update-demo-nautilus-5rbkh update-demo-nautilus-8257b "
Dec 21 13:43:57.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5rbkh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6583'
Dec 21 13:43:57.982: INFO: stderr: ""
Dec 21 13:43:57.983: INFO: stdout: "true"
Dec 21 13:43:57.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5rbkh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6583'
Dec 21 13:43:58.150: INFO: stderr: ""
Dec 21 13:43:58.150: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 21 13:43:58.150: INFO: validating pod update-demo-nautilus-5rbkh
Dec 21 13:43:58.185: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 21 13:43:58.185: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 21 13:43:58.185: INFO: update-demo-nautilus-5rbkh is verified up and running
Dec 21 13:43:58.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8257b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6583'
Dec 21 13:43:58.256: INFO: stderr: ""
Dec 21 13:43:58.256: INFO: stdout: "true"
Dec 21 13:43:58.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8257b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6583'
Dec 21 13:43:58.393: INFO: stderr: ""
Dec 21 13:43:58.393: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 21 13:43:58.393: INFO: validating pod update-demo-nautilus-8257b
Dec 21 13:43:58.404: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 21 13:43:58.404: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 21 13:43:58.404: INFO: update-demo-nautilus-8257b is verified up and running
STEP: using delete to clean up resources
Dec 21 13:43:58.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6583'
Dec 21 13:43:58.532: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 21 13:43:58.532: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 21 13:43:58.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6583'
Dec 21 13:43:58.618: INFO: stderr: "No resources found.\n"
Dec 21 13:43:58.618: INFO: stdout: ""
Dec 21 13:43:58.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6583 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 21 13:43:58.740: INFO: stderr: ""
Dec 21 13:43:58.740: INFO: stdout: "update-demo-nautilus-5rbkh\nupdate-demo-nautilus-8257b\n"
Dec 21 13:43:59.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6583'
Dec 21 13:43:59.367: INFO: stderr: "No resources found.\n"
Dec 21 13:43:59.367: INFO: stdout: ""
Dec 21 13:43:59.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6583 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 21 13:44:00.054: INFO: stderr: ""
Dec 21 13:44:00.054: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:44:00.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6583" for this suite.
Dec 21 13:44:06.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:44:06.525: INFO: namespace kubectl-6583 deletion completed in 6.464618803s

• [SLOW TEST:23.540 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
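
The manifest is piped in from a file the log does not show; an approximate reconstruction of the nautilus RC, inferred from the image and labels visible above, would be:

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: update-demo-nautilus
  spec:
    replicas: 2
    selector:
      name: update-demo
    template:
      metadata:
        labels:
          name: update-demo
      spec:
        containers:
        - name: update-demo
          image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
          ports:
          - containerPort: 80
  EOF
  # ...poll readiness with the go-templates shown in the log, then tear down:
  kubectl delete rc update-demo-nautilus --grace-period=0 --force
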
SSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:44:06.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 21 13:44:06.648: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:44:07.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1846" for this suite.
Dec 21 13:44:13.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:44:13.958: INFO: namespace custom-resource-definition-1846 deletion completed in 6.12981688s

• [SLOW TEST:7.433 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
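Creating and deleting a bare CRD, which is all this spec does, comes down to two kubectl calls. A sketch against a v1.15-era API server, which still serves apiextensions.k8s.io/v1beta1 (group, kind, and names are made up for illustration):

cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
EOF
kubectl delete crd foos.example.com
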
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:44:13.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-08d98f39-b385-4874-8e0e-7c5ddcdbbff8
STEP: Creating a pod to test consume configMaps
Dec 21 13:44:14.080: INFO: Waiting up to 5m0s for pod "pod-configmaps-104a391e-3a71-4c8e-a3d4-5f794990b160" in namespace "configmap-5908" to be "success or failure"
Dec 21 13:44:14.098: INFO: Pod "pod-configmaps-104a391e-3a71-4c8e-a3d4-5f794990b160": Phase="Pending", Reason="", readiness=false. Elapsed: 17.499588ms
Dec 21 13:44:16.108: INFO: Pod "pod-configmaps-104a391e-3a71-4c8e-a3d4-5f794990b160": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028185525s
Dec 21 13:44:18.117: INFO: Pod "pod-configmaps-104a391e-3a71-4c8e-a3d4-5f794990b160": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03655046s
Dec 21 13:44:20.135: INFO: Pod "pod-configmaps-104a391e-3a71-4c8e-a3d4-5f794990b160": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054522659s
Dec 21 13:44:22.144: INFO: Pod "pod-configmaps-104a391e-3a71-4c8e-a3d4-5f794990b160": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06355983s
STEP: Saw pod success
Dec 21 13:44:22.144: INFO: Pod "pod-configmaps-104a391e-3a71-4c8e-a3d4-5f794990b160" satisfied condition "success or failure"
Dec 21 13:44:22.149: INFO: Trying to get logs from node iruya-node pod pod-configmaps-104a391e-3a71-4c8e-a3d4-5f794990b160 container configmap-volume-test: 
STEP: delete the pod
Dec 21 13:44:22.337: INFO: Waiting for pod pod-configmaps-104a391e-3a71-4c8e-a3d4-5f794990b160 to disappear
Dec 21 13:44:22.364: INFO: Pod pod-configmaps-104a391e-3a71-4c8e-a3d4-5f794990b160 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:44:22.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5908" for this suite.
Dec 21 13:44:28.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:44:28.524: INFO: namespace configmap-5908 deletion completed in 6.151759669s

• [SLOW TEST:14.565 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
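The pod this test builds mounts a ConfigMap volume with defaultMode set, so every projected key gets that file mode. A hand-written equivalent might look like this (all names illustrative; the ConfigMap is assumed to exist):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-defaultmode-demo
spec:
  restartPolicy: Never
  volumes:
  - name: cm-vol
    configMap:
      name: my-config
      defaultMode: 0400   # YAML octal; applied to every file in the volume
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "ls -ln /etc/config && cat /etc/config/*"]
    volumeMounts:
    - name: cm-vol
      mountPath: /etc/config
EOF
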
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:44:28.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:44:34.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8522" for this suite.
Dec 21 13:44:40.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:44:40.430: INFO: namespace watch-8522 deletion completed in 6.250472254s

• [SLOW TEST:11.905 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
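The property being checked is that independent watches on the same resources deliver events in one consistent order. That can be eyeballed with two raw watch streams opened through kubectl proxy (port and namespace are arbitrary); both terminals should print the same event sequence:

kubectl proxy --port=8001 &
# run this in two terminals while creating/updating configmaps in "default"
curl -sN 'http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=true'
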
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:44:40.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-6a4358fb-5375-4270-941e-0884270605ac
STEP: Creating a pod to test consume configMaps
Dec 21 13:44:40.584: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f48e58dc-bd47-42bc-bd90-c489b1440e4b" in namespace "projected-4962" to be "success or failure"
Dec 21 13:44:40.597: INFO: Pod "pod-projected-configmaps-f48e58dc-bd47-42bc-bd90-c489b1440e4b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.638617ms
Dec 21 13:44:42.612: INFO: Pod "pod-projected-configmaps-f48e58dc-bd47-42bc-bd90-c489b1440e4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027495191s
Dec 21 13:44:44.625: INFO: Pod "pod-projected-configmaps-f48e58dc-bd47-42bc-bd90-c489b1440e4b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040811723s
Dec 21 13:44:46.650: INFO: Pod "pod-projected-configmaps-f48e58dc-bd47-42bc-bd90-c489b1440e4b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06541957s
Dec 21 13:44:48.665: INFO: Pod "pod-projected-configmaps-f48e58dc-bd47-42bc-bd90-c489b1440e4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.08107202s
STEP: Saw pod success
Dec 21 13:44:48.666: INFO: Pod "pod-projected-configmaps-f48e58dc-bd47-42bc-bd90-c489b1440e4b" satisfied condition "success or failure"
Dec 21 13:44:48.681: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-f48e58dc-bd47-42bc-bd90-c489b1440e4b container projected-configmap-volume-test: 
STEP: delete the pod
Dec 21 13:44:48.871: INFO: Waiting for pod pod-projected-configmaps-f48e58dc-bd47-42bc-bd90-c489b1440e4b to disappear
Dec 21 13:44:48.892: INFO: Pod pod-projected-configmaps-f48e58dc-bd47-42bc-bd90-c489b1440e4b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:44:48.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4962" for this suite.
Dec 21 13:44:54.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:44:55.035: INFO: namespace projected-4962 deletion completed in 6.133231859s

• [SLOW TEST:14.605 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
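A projected volume, as exercised here, wraps one or more sources (configMap, secret, downwardAPI, serviceAccountToken) behind a single mount point. A sketch with just a ConfigMap source (names illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  restartPolicy: Never
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: my-config   # assumed to exist
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "cat /projected/*"]
    volumeMounts:
    - name: all-in-one
      mountPath: /projected
EOF
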
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:44:55.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Dec 21 13:44:55.198: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-3795,SelfLink:/api/v1/namespaces/watch-3795/configmaps/e2e-watch-test-resource-version,UID:403cf8b0-51d7-4b59-a04f-063059d2f916,ResourceVersion:17517975,Generation:0,CreationTimestamp:2019-12-21 13:44:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 21 13:44:55.198: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-3795,SelfLink:/api/v1/namespaces/watch-3795/configmaps/e2e-watch-test-resource-version,UID:403cf8b0-51d7-4b59-a04f-063059d2f916,ResourceVersion:17517976,Generation:0,CreationTimestamp:2019-12-21 13:44:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:44:55.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3795" for this suite.
Dec 21 13:45:01.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:45:01.367: INFO: namespace watch-3795 deletion completed in 6.152740412s

• [SLOW TEST:6.331 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
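Starting a watch at a known resourceVersion, which the spec above does through the client library, maps onto a plain list-watch API call. A sketch via kubectl proxy (the resourceVersion value is illustrative); the stream replays every change made after that version, which is why the MODIFIED (mutation 2) and DELETED events above arrive but the first update does not:

kubectl proxy --port=8001 &
curl -sN 'http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=17517974'
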
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:45:01.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 21 13:45:01.457: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2c071432-30a3-4614-8e0a-eee8de7358ca" in namespace "projected-3101" to be "success or failure"
Dec 21 13:45:01.480: INFO: Pod "downwardapi-volume-2c071432-30a3-4614-8e0a-eee8de7358ca": Phase="Pending", Reason="", readiness=false. Elapsed: 23.08452ms
Dec 21 13:45:03.487: INFO: Pod "downwardapi-volume-2c071432-30a3-4614-8e0a-eee8de7358ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030144854s
Dec 21 13:45:05.496: INFO: Pod "downwardapi-volume-2c071432-30a3-4614-8e0a-eee8de7358ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038713439s
Dec 21 13:45:07.505: INFO: Pod "downwardapi-volume-2c071432-30a3-4614-8e0a-eee8de7358ca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048614536s
Dec 21 13:45:09.532: INFO: Pod "downwardapi-volume-2c071432-30a3-4614-8e0a-eee8de7358ca": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074985002s
Dec 21 13:45:11.541: INFO: Pod "downwardapi-volume-2c071432-30a3-4614-8e0a-eee8de7358ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.084299038s
STEP: Saw pod success
Dec 21 13:45:11.541: INFO: Pod "downwardapi-volume-2c071432-30a3-4614-8e0a-eee8de7358ca" satisfied condition "success or failure"
Dec 21 13:45:11.545: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-2c071432-30a3-4614-8e0a-eee8de7358ca container client-container: 
STEP: delete the pod
Dec 21 13:45:11.614: INFO: Waiting for pod downwardapi-volume-2c071432-30a3-4614-8e0a-eee8de7358ca to disappear
Dec 21 13:45:11.620: INFO: Pod downwardapi-volume-2c071432-30a3-4614-8e0a-eee8de7358ca no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:45:11.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3101" for this suite.
Dec 21 13:45:17.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:45:17.785: INFO: namespace projected-3101 deletion completed in 6.155469557s

• [SLOW TEST:16.418 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
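The file the test reads back is produced by a downwardAPI item with a resourceFieldRef. A sketch of the same shape (all names illustrative); with divisor 1m, a 250m CPU request is written into the file as the string "250":

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo
spec:
  restartPolicy: Never
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
EOF
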
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:45:17.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 21 13:45:17.913: INFO: Waiting up to 5m0s for pod "pod-44b4d92c-f49b-421f-92a2-56c31f79b2ba" in namespace "emptydir-3421" to be "success or failure"
Dec 21 13:45:17.918: INFO: Pod "pod-44b4d92c-f49b-421f-92a2-56c31f79b2ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.299311ms
Dec 21 13:45:19.925: INFO: Pod "pod-44b4d92c-f49b-421f-92a2-56c31f79b2ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012130566s
Dec 21 13:45:21.950: INFO: Pod "pod-44b4d92c-f49b-421f-92a2-56c31f79b2ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036778312s
Dec 21 13:45:23.993: INFO: Pod "pod-44b4d92c-f49b-421f-92a2-56c31f79b2ba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079963082s
Dec 21 13:45:26.036: INFO: Pod "pod-44b4d92c-f49b-421f-92a2-56c31f79b2ba": Phase="Pending", Reason="", readiness=false. Elapsed: 8.122302382s
Dec 21 13:45:28.058: INFO: Pod "pod-44b4d92c-f49b-421f-92a2-56c31f79b2ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.144479589s
STEP: Saw pod success
Dec 21 13:45:28.058: INFO: Pod "pod-44b4d92c-f49b-421f-92a2-56c31f79b2ba" satisfied condition "success or failure"
Dec 21 13:45:28.068: INFO: Trying to get logs from node iruya-node pod pod-44b4d92c-f49b-421f-92a2-56c31f79b2ba container test-container: 
STEP: delete the pod
Dec 21 13:45:28.181: INFO: Waiting for pod pod-44b4d92c-f49b-421f-92a2-56c31f79b2ba to disappear
Dec 21 13:45:28.217: INFO: Pod pod-44b4d92c-f49b-421f-92a2-56c31f79b2ba no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:45:28.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3421" for this suite.
Dec 21 13:45:34.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:45:34.501: INFO: namespace emptydir-3421 deletion completed in 6.276738159s

• [SLOW TEST:16.716 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
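The "(root,0644,default)" triple in the test name means: write as root, expect file mode 0644, on the default emptyDir medium (node disk rather than tmpfs). A hand-rolled equivalent (names illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  volumes:
  - name: scratch
    emptyDir: {}   # default medium; use medium: Memory for tmpfs
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "echo hello > /scratch/f && chmod 0644 /scratch/f && ls -ln /scratch/f"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
EOF
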
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:45:34.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 21 13:45:34.565: INFO: Creating deployment "test-recreate-deployment"
Dec 21 13:45:34.645: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Dec 21 13:45:35.415: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Dec 21 13:45:37.434: INFO: Waiting for deployment "test-recreate-deployment" to complete
Dec 21 13:45:37.439: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712532735, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712532735, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712532735, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712532734, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 13:45:39.456: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712532735, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712532735, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712532735, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712532734, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 13:45:41.450: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712532735, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712532735, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712532735, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712532734, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 13:45:43.450: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712532735, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712532735, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712532735, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712532734, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 13:45:45.448: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Dec 21 13:45:45.470: INFO: Updating deployment test-recreate-deployment
Dec 21 13:45:45.470: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 21 13:45:45.779: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-3542,SelfLink:/apis/apps/v1/namespaces/deployment-3542/deployments/test-recreate-deployment,UID:802b5ff0-1c51-4b34-b9b2-59173d6c3b64,ResourceVersion:17518132,Generation:2,CreationTimestamp:2019-12-21 13:45:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-12-21 13:45:45 +0000 UTC 2019-12-21 13:45:45 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-21 13:45:45 +0000 UTC 2019-12-21 13:45:34 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Dec 21 13:45:45.864: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-3542,SelfLink:/apis/apps/v1/namespaces/deployment-3542/replicasets/test-recreate-deployment-5c8c9cc69d,UID:e5ac4bb2-ef0e-467b-a112-4b32ce3f96dc,ResourceVersion:17518131,Generation:1,CreationTimestamp:2019-12-21 13:45:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 802b5ff0-1c51-4b34-b9b2-59173d6c3b64 0xc0033e3a77 0xc0033e3a78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 21 13:45:45.864: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Dec 21 13:45:45.864: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-3542,SelfLink:/apis/apps/v1/namespaces/deployment-3542/replicasets/test-recreate-deployment-6df85df6b9,UID:bf576145-0ae4-45bc-9367-2c3b4f15636d,ResourceVersion:17518121,Generation:2,CreationTimestamp:2019-12-21 13:45:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 802b5ff0-1c51-4b34-b9b2-59173d6c3b64 0xc0033e3b47 0xc0033e3b48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 21 13:45:45.891: INFO: Pod "test-recreate-deployment-5c8c9cc69d-kbgmt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-kbgmt,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-3542,SelfLink:/api/v1/namespaces/deployment-3542/pods/test-recreate-deployment-5c8c9cc69d-kbgmt,UID:c26e2a69-8de8-4e59-a5aa-31ccf8ffd83f,ResourceVersion:17518134,Generation:0,CreationTimestamp:2019-12-21 13:45:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d e5ac4bb2-ef0e-467b-a112-4b32ce3f96dc 0xc002910457 0xc002910458}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dm242 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dm242,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dm242 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0029104d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0029104f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:45:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:45:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:45:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:45:45 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-21 13:45:45 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:45:45.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3542" for this suite.
Dec 21 13:45:53.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:45:54.060: INFO: namespace deployment-3542 deletion completed in 8.157940828s

• [SLOW TEST:19.558 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
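The behavior verified above, old pods fully gone before new ones appear, is selected by the Recreate strategy; during the rollout the deployment briefly has zero available replicas, which is exactly the MinimumReplicasUnavailable condition shown in the dump. A minimal stanza (names and image illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recreate-demo
spec:
  replicas: 1
  strategy:
    type: Recreate   # scale the old ReplicaSet to 0 before scaling the new one up
  selector:
    matchLabels:
      app: recreate-demo
  template:
    metadata:
      labels:
        app: recreate-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
EOF
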
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:45:54.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 21 13:45:54.138: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Dec 21 13:45:59.148: INFO: Pod name cleanup-pod: Found 1 pod out of 1
STEP: ensuring each pod is running
Dec 21 13:46:01.165: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 21 13:46:01.364: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-9782,SelfLink:/apis/apps/v1/namespaces/deployment-9782/deployments/test-cleanup-deployment,UID:439523d8-a05e-473d-aa32-8143295bf635,ResourceVersion:17518195,Generation:1,CreationTimestamp:2019-12-21 13:46:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Dec 21 13:46:01.403: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-9782,SelfLink:/apis/apps/v1/namespaces/deployment-9782/replicasets/test-cleanup-deployment-55bbcbc84c,UID:1c3f6d7d-db93-4fe7-aa30-f2c343c13f87,ResourceVersion:17518197,Generation:1,CreationTimestamp:2019-12-21 13:46:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 439523d8-a05e-473d-aa32-8143295bf635 0xc002878df7 0xc002878df8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 21 13:46:01.403: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Dec 21 13:46:01.404: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-9782,SelfLink:/apis/apps/v1/namespaces/deployment-9782/replicasets/test-cleanup-controller,UID:0f9c77db-fe5e-49ba-8715-5cc9c6fd5ad7,ResourceVersion:17518196,Generation:1,CreationTimestamp:2019-12-21 13:45:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 439523d8-a05e-473d-aa32-8143295bf635 0xc002878d27 0xc002878d28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 21 13:46:01.417: INFO: Pod "test-cleanup-controller-dp65d" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-dp65d,GenerateName:test-cleanup-controller-,Namespace:deployment-9782,SelfLink:/api/v1/namespaces/deployment-9782/pods/test-cleanup-controller-dp65d,UID:96e72e21-e770-4152-8534-65f4b9af7c30,ResourceVersion:17518193,Generation:0,CreationTimestamp:2019-12-21 13:45:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 0f9c77db-fe5e-49ba-8715-5cc9c6fd5ad7 0xc000dc8117 0xc000dc8118}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qn4mt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qn4mt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qn4mt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000dc8190} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000dc81b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:45:54 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:46:01 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:46:01 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:45:54 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-21 13:45:54 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-21 13:46:00 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://dac18f05d2961fc8bda76bcd0ac173fb7a2757627fceeadada55e185110f3a3d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:46:01.417: INFO: Pod "test-cleanup-deployment-55bbcbc84c-jxgb2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-jxgb2,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-9782,SelfLink:/api/v1/namespaces/deployment-9782/pods/test-cleanup-deployment-55bbcbc84c-jxgb2,UID:8b4595c9-aad8-4eba-aeb0-ccbe0a30ff88,ResourceVersion:17518201,Generation:0,CreationTimestamp:2019-12-21 13:46:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 1c3f6d7d-db93-4fe7-aa30-f2c343c13f87 0xc000dc8297 0xc000dc8298}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qn4mt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qn4mt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-qn4mt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000dc8320} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000dc8340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:46:01 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:46:01.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9782" for this suite.
Dec 21 13:46:09.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:46:09.664: INFO: namespace deployment-9782 deletion completed in 8.160496566s

• [SLOW TEST:15.604 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
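The cleanup asserted here is driven by .spec.revisionHistoryLimit; the dump above shows RevisionHistoryLimit:*0, which tells the controller to delete superseded ReplicaSets as soon as a rollout finishes. A sketch (names illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleanup-demo
spec:
  replicas: 1
  revisionHistoryLimit: 0   # keep no old ReplicaSets around
  selector:
    matchLabels:
      app: cleanup-demo
  template:
    metadata:
      labels:
        app: cleanup-demo
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
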
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:46:09.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-04456b65-ec58-4c65-a516-7fdb359a7bb7
STEP: Creating configMap with name cm-test-opt-upd-1ff7c40d-ee73-4d25-8637-89e0f70b49f9
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-04456b65-ec58-4c65-a516-7fdb359a7bb7
STEP: Updating configmap cm-test-opt-upd-1ff7c40d-ee73-4d25-8637-89e0f70b49f9
STEP: Creating configMap with name cm-test-opt-create-f4e39343-9618-4332-bfdd-4700a8f75235
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:47:44.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7571" for this suite.
Dec 21 13:48:06.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:48:06.374: INFO: namespace configmap-7571 deletion completed in 22.333756682s

• [SLOW TEST:116.709 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
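The construct this spec exercises is a configMap volume whose source is marked Optional. A minimal sketch, assuming the same v1.15-era API types (names are illustrative):

package sketch

import corev1 "k8s.io/api/core/v1"

// optionalConfigMapVolume mirrors the construct exercised above: a
// configMap volume marked Optional, so the pod runs even while the
// referenced map is absent, and the kubelet reflects later creates,
// updates, and deletes into the mounted files.
func optionalConfigMapVolume(cmName string) corev1.Volume {
	optional := true
	return corev1.Volume{
		Name: "cm-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
				Optional:             &optional,
			},
		},
	}
}

Because the source is optional, deleting cm-test-opt-del-... does not kill the pod, and cm-test-opt-create-..., created later, is still projected into the volume; that is what "waiting to observe update in volume" polls for.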
SSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:48:06.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-470c9774-8ada-4bed-b712-8810878e499d in namespace container-probe-5643
Dec 21 13:48:16.668: INFO: Started pod liveness-470c9774-8ada-4bed-b712-8810878e499d in namespace container-probe-5643
STEP: checking the pod's current state and verifying that restartCount is present
Dec 21 13:48:16.670: INFO: Initial restart count of pod liveness-470c9774-8ada-4bed-b712-8810878e499d is 0
Dec 21 13:48:38.774: INFO: Restart count of pod container-probe-5643/liveness-470c9774-8ada-4bed-b712-8810878e499d is now 1 (22.103534766s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:48:38.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5643" for this suite.
Dec 21 13:48:44.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:48:44.944: INFO: namespace container-probe-5643 deletion completed in 6.127637852s

• [SLOW TEST:38.570 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
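For context, the restart at 22s comes from the kubelet acting on an HTTP liveness probe. A minimal sketch of such a probe, assuming the v1.15-era API used here (where the embedded field is still named Handler; it became ProbeHandler in later releases; port and timings are illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// healthzLivenessProbe is the kind of probe this spec depends on: once
// GET /healthz starts failing, the kubelet kills and restarts the
// container, which is what moves restartCount from 0 to 1 in the log.
func healthzLivenessProbe() *corev1.Probe {
	return &corev1.Probe{
		Handler: corev1.Handler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/healthz",
				Port: intstr.FromInt(8080),
			},
		},
		InitialDelaySeconds: 15,
		TimeoutSeconds:      1,
		FailureThreshold:    1,
	}
}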
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:48:44.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 21 13:48:45.071: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1edfd367-40fd-453a-91ae-cf18bb3937b8" in namespace "downward-api-1710" to be "success or failure"
Dec 21 13:48:45.081: INFO: Pod "downwardapi-volume-1edfd367-40fd-453a-91ae-cf18bb3937b8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.619383ms
Dec 21 13:48:47.137: INFO: Pod "downwardapi-volume-1edfd367-40fd-453a-91ae-cf18bb3937b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065714184s
Dec 21 13:48:49.144: INFO: Pod "downwardapi-volume-1edfd367-40fd-453a-91ae-cf18bb3937b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072254988s
Dec 21 13:48:51.153: INFO: Pod "downwardapi-volume-1edfd367-40fd-453a-91ae-cf18bb3937b8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081159512s
Dec 21 13:48:53.426: INFO: Pod "downwardapi-volume-1edfd367-40fd-453a-91ae-cf18bb3937b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.353940755s
STEP: Saw pod success
Dec 21 13:48:53.426: INFO: Pod "downwardapi-volume-1edfd367-40fd-453a-91ae-cf18bb3937b8" satisfied condition "success or failure"
Dec 21 13:48:53.445: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-1edfd367-40fd-453a-91ae-cf18bb3937b8 container client-container: 
STEP: delete the pod
Dec 21 13:48:53.666: INFO: Waiting for pod downwardapi-volume-1edfd367-40fd-453a-91ae-cf18bb3937b8 to disappear
Dec 21 13:48:53.731: INFO: Pod downwardapi-volume-1edfd367-40fd-453a-91ae-cf18bb3937b8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:48:53.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1710" for this suite.
Dec 21 13:48:59.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:48:59.981: INFO: namespace downward-api-1710 deletion completed in 6.242028563s

• [SLOW TEST:15.037 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
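The volume plugin under test projects container resource fields into files. A sketch of the kind of item this spec reads back, with an illustrative divisor; the container name matches the client-container seen in the log:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// cpuRequestItem projects the container's CPU request into a file in a
// downward API volume. With divisor "1m" the value would be reported in
// millicores; the divisor and path here are illustrative.
func cpuRequestItem() corev1.DownwardAPIVolumeFile {
	return corev1.DownwardAPIVolumeFile{
		Path: "cpu_request",
		ResourceFieldRef: &corev1.ResourceFieldSelector{
			ContainerName: "client-container",
			Resource:      "requests.cpu",
			Divisor:       resource.MustParse("1m"),
		},
	}
}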
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:48:59.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-rzr9
STEP: Creating a pod to test atomic-volume-subpath
Dec 21 13:49:00.127: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-rzr9" in namespace "subpath-616" to be "success or failure"
Dec 21 13:49:00.146: INFO: Pod "pod-subpath-test-secret-rzr9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.578429ms
Dec 21 13:49:02.157: INFO: Pod "pod-subpath-test-secret-rzr9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029811651s
Dec 21 13:49:04.169: INFO: Pod "pod-subpath-test-secret-rzr9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04228213s
Dec 21 13:49:06.179: INFO: Pod "pod-subpath-test-secret-rzr9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052320203s
Dec 21 13:49:08.187: INFO: Pod "pod-subpath-test-secret-rzr9": Phase="Running", Reason="", readiness=true. Elapsed: 8.060361782s
Dec 21 13:49:10.197: INFO: Pod "pod-subpath-test-secret-rzr9": Phase="Running", Reason="", readiness=true. Elapsed: 10.070393457s
Dec 21 13:49:12.203: INFO: Pod "pod-subpath-test-secret-rzr9": Phase="Running", Reason="", readiness=true. Elapsed: 12.076012086s
Dec 21 13:49:14.209: INFO: Pod "pod-subpath-test-secret-rzr9": Phase="Running", Reason="", readiness=true. Elapsed: 14.082381228s
Dec 21 13:49:16.217: INFO: Pod "pod-subpath-test-secret-rzr9": Phase="Running", Reason="", readiness=true. Elapsed: 16.089852436s
Dec 21 13:49:18.224: INFO: Pod "pod-subpath-test-secret-rzr9": Phase="Running", Reason="", readiness=true. Elapsed: 18.096852565s
Dec 21 13:49:20.233: INFO: Pod "pod-subpath-test-secret-rzr9": Phase="Running", Reason="", readiness=true. Elapsed: 20.10641367s
Dec 21 13:49:22.239: INFO: Pod "pod-subpath-test-secret-rzr9": Phase="Running", Reason="", readiness=true. Elapsed: 22.112118718s
Dec 21 13:49:24.249: INFO: Pod "pod-subpath-test-secret-rzr9": Phase="Running", Reason="", readiness=true. Elapsed: 24.121886331s
Dec 21 13:49:26.258: INFO: Pod "pod-subpath-test-secret-rzr9": Phase="Running", Reason="", readiness=true. Elapsed: 26.131361979s
Dec 21 13:49:28.269: INFO: Pod "pod-subpath-test-secret-rzr9": Phase="Running", Reason="", readiness=true. Elapsed: 28.141737405s
Dec 21 13:49:30.278: INFO: Pod "pod-subpath-test-secret-rzr9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.150957252s
STEP: Saw pod success
Dec 21 13:49:30.278: INFO: Pod "pod-subpath-test-secret-rzr9" satisfied condition "success or failure"
Dec 21 13:49:30.285: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-rzr9 container test-container-subpath-secret-rzr9: 
STEP: delete the pod
Dec 21 13:49:30.441: INFO: Waiting for pod pod-subpath-test-secret-rzr9 to disappear
Dec 21 13:49:30.447: INFO: Pod pod-subpath-test-secret-rzr9 no longer exists
STEP: Deleting pod pod-subpath-test-secret-rzr9
Dec 21 13:49:30.447: INFO: Deleting pod "pod-subpath-test-secret-rzr9" in namespace "subpath-616"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:49:30.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-616" for this suite.
Dec 21 13:49:36.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:49:36.634: INFO: namespace subpath-616 deletion completed in 6.177848901s

• [SLOW TEST:36.652 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
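Secret volumes are atomic-writer volumes: the kubelet writes new content beside the old and swaps a symlink. This spec mounts one key of such a volume through SubPath and keeps the pod Running (the ~20s of Running polls above) to verify the subpath file stays readable across those swaps. A minimal sketch, with illustrative names:

package sketch

import corev1 "k8s.io/api/core/v1"

// secretSubpathMount pairs a secret-backed volume with a VolumeMount
// that uses SubPath, so the container sees a single file out of the
// volume rather than the whole directory.
func secretSubpathMount() (corev1.Volume, corev1.VolumeMount) {
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "my-secret"},
		},
	}
	mount := corev1.VolumeMount{
		Name:      "test-volume",
		MountPath: "/test-volume",
		SubPath:   "secret-key",
	}
	return vol, mount
}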
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:49:36.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 21 13:49:36.701: INFO: Waiting up to 5m0s for pod "downwardapi-volume-53860f17-8450-4311-b303-b8b431c2f2d7" in namespace "downward-api-6495" to be "success or failure"
Dec 21 13:49:36.704: INFO: Pod "downwardapi-volume-53860f17-8450-4311-b303-b8b431c2f2d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.309479ms
Dec 21 13:49:38.718: INFO: Pod "downwardapi-volume-53860f17-8450-4311-b303-b8b431c2f2d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016472823s
Dec 21 13:49:40.729: INFO: Pod "downwardapi-volume-53860f17-8450-4311-b303-b8b431c2f2d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028047081s
Dec 21 13:49:42.735: INFO: Pod "downwardapi-volume-53860f17-8450-4311-b303-b8b431c2f2d7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034098723s
Dec 21 13:49:44.760: INFO: Pod "downwardapi-volume-53860f17-8450-4311-b303-b8b431c2f2d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058879033s
STEP: Saw pod success
Dec 21 13:49:44.760: INFO: Pod "downwardapi-volume-53860f17-8450-4311-b303-b8b431c2f2d7" satisfied condition "success or failure"
Dec 21 13:49:44.777: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-53860f17-8450-4311-b303-b8b431c2f2d7 container client-container: 
STEP: delete the pod
Dec 21 13:49:45.342: INFO: Waiting for pod downwardapi-volume-53860f17-8450-4311-b303-b8b431c2f2d7 to disappear
Dec 21 13:49:45.364: INFO: Pod downwardapi-volume-53860f17-8450-4311-b303-b8b431c2f2d7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:49:45.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6495" for this suite.
Dec 21 13:49:51.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:49:51.594: INFO: namespace downward-api-6495 deletion completed in 6.223808941s

• [SLOW TEST:14.960 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
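The per-item mode this spec stats inside the container is set on the downward API item itself, overriding the volume's DefaultMode for that one path. A sketch with an illustrative 0400 mode:

package sketch

import corev1 "k8s.io/api/core/v1"

// modeSetItem sets a per-item Mode on a downward API file; only this
// path gets 0400, regardless of the volume-wide DefaultMode.
func modeSetItem() corev1.DownwardAPIVolumeFile {
	mode := int32(0400)
	return corev1.DownwardAPIVolumeFile{
		Path: "podname",
		FieldRef: &corev1.ObjectFieldSelector{
			APIVersion: "v1",
			FieldPath:  "metadata.name",
		},
		Mode: &mode,
	}
}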
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:49:51.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 21 13:49:51.693: INFO: Waiting up to 5m0s for pod "pod-720ae9b4-78d2-4842-b116-b05e73cc00a2" in namespace "emptydir-9207" to be "success or failure"
Dec 21 13:49:51.698: INFO: Pod "pod-720ae9b4-78d2-4842-b116-b05e73cc00a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.928012ms
Dec 21 13:49:53.708: INFO: Pod "pod-720ae9b4-78d2-4842-b116-b05e73cc00a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015894181s
Dec 21 13:49:55.722: INFO: Pod "pod-720ae9b4-78d2-4842-b116-b05e73cc00a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029694592s
Dec 21 13:49:57.742: INFO: Pod "pod-720ae9b4-78d2-4842-b116-b05e73cc00a2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049527954s
Dec 21 13:49:59.800: INFO: Pod "pod-720ae9b4-78d2-4842-b116-b05e73cc00a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.10780734s
STEP: Saw pod success
Dec 21 13:49:59.800: INFO: Pod "pod-720ae9b4-78d2-4842-b116-b05e73cc00a2" satisfied condition "success or failure"
Dec 21 13:49:59.807: INFO: Trying to get logs from node iruya-node pod pod-720ae9b4-78d2-4842-b116-b05e73cc00a2 container test-container: 
STEP: delete the pod
Dec 21 13:49:59.893: INFO: Waiting for pod pod-720ae9b4-78d2-4842-b116-b05e73cc00a2 to disappear
Dec 21 13:49:59.950: INFO: Pod pod-720ae9b4-78d2-4842-b116-b05e73cc00a2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:49:59.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9207" for this suite.
Dec 21 13:50:06.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:50:06.211: INFO: namespace emptydir-9207 deletion completed in 6.251758242s

• [SLOW TEST:14.616 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
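The spec name encodes the matrix cell being tested: a non-root user writing 0666 files into an emptyDir on the default (node disk) medium. A sketch of a pod spec in that shape, with illustrative UID, image, and paths (the container name matches the log):

package sketch

import corev1 "k8s.io/api/core/v1"

// nonRootEmptyDirPodSpec runs a container as a non-root UID with an
// emptyDir on the default medium; the command clears the umask so the
// written file comes out 0666, matching the case in the spec name.
func nonRootEmptyDirPodSpec() corev1.PodSpec {
	uid := int64(1001)
	return corev1.PodSpec{
		SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
		Volumes: []corev1.Volume{{
			Name:         "test-volume",
			VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
		}},
		Containers: []corev1.Container{{
			Name:         "test-container",
			Image:        "busybox",
			Command:      []string{"sh", "-c", "umask 0000 && echo data > /test-volume/file && ls -l /test-volume"},
			VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
		}},
	}
}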
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:50:06.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Dec 21 13:50:06.516: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Dec 21 13:50:07.551: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Dec 21 13:50:09.813: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533007, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533007, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533007, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533007, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 13:50:11.829: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533007, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533007, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533007, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533007, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 13:50:13.826: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533007, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533007, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533007, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533007, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 13:50:15.874: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533007, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533007, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533007, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533007, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 13:50:18.732: INFO: Waited 901.940691ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:50:19.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-6993" for this suite.
Dec 21 13:50:25.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:50:25.725: INFO: namespace aggregator-6993 deletion completed in 6.139261114s

• [SLOW TEST:19.514 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
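"Registering the sample API server." above boils down to creating an APIService object, which the aggregation layer uses to route a group/version to an in-cluster Service. A sketch assuming the kube-aggregator v1 types; the group and version follow the 1.10 sample-apiserver named in the spec, while the service name, CA bundle handling, and priorities are illustrative:

package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
)

// sampleAPIService routes /apis/wardle.k8s.io/v1alpha1 through the
// aggregator to a Service fronting the sample-apiserver Deployment
// whose rollout is logged above.
func sampleAPIService(caBundle []byte) *apiregistrationv1.APIService {
	return &apiregistrationv1.APIService{
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.k8s.io"},
		Spec: apiregistrationv1.APIServiceSpec{
			Group:   "wardle.k8s.io",
			Version: "v1alpha1",
			Service: &apiregistrationv1.ServiceReference{
				Namespace: "aggregator-6993",
				Name:      "sample-api", // illustrative service name
			},
			CABundle:             caBundle,
			GroupPriorityMinimum: 2000,
			VersionPriority:      200,
		},
	}
}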
SSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:50:25.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-4dw9f in namespace proxy-7757
I1221 13:50:25.907122       9 runners.go:180] Created replication controller with name: proxy-service-4dw9f, namespace: proxy-7757, replica count: 1
I1221 13:50:26.957871       9 runners.go:180] proxy-service-4dw9f Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1221 13:50:27.958152       9 runners.go:180] proxy-service-4dw9f Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1221 13:50:28.958416       9 runners.go:180] proxy-service-4dw9f Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1221 13:50:29.958696       9 runners.go:180] proxy-service-4dw9f Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1221 13:50:30.958984       9 runners.go:180] proxy-service-4dw9f Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1221 13:50:31.959488       9 runners.go:180] proxy-service-4dw9f Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1221 13:50:32.959844       9 runners.go:180] proxy-service-4dw9f Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1221 13:50:33.960318       9 runners.go:180] proxy-service-4dw9f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1221 13:50:34.960701       9 runners.go:180] proxy-service-4dw9f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1221 13:50:35.960952       9 runners.go:180] proxy-service-4dw9f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1221 13:50:36.961191       9 runners.go:180] proxy-service-4dw9f Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 21 13:50:36.966: INFO: setup took 11.179048489s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Dec 21 13:50:36.998: INFO: (0) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:160/proxy/: foo (200; 32.032383ms)
Dec 21 13:50:37.003: INFO: (0) /api/v1/namespaces/proxy-7757/services/proxy-service-4dw9f:portname1/proxy/: foo (200; 36.654005ms)
Dec 21 13:50:37.003: INFO: (0) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:1080/proxy/: ... (200; 36.939847ms)
Dec 21 13:50:37.006: INFO: (0) /api/v1/namespaces/proxy-7757/services/http:proxy-service-4dw9f:portname1/proxy/: foo (200; 40.169226ms)
Dec 21 13:50:37.006: INFO: (0) /api/v1/namespaces/proxy-7757/services/http:proxy-service-4dw9f:portname2/proxy/: bar (200; 40.190338ms)
Dec 21 13:50:37.006: INFO: (0) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6/proxy/: test (200; 40.148051ms)
Dec 21 13:50:37.007: INFO: (0) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:162/proxy/: bar (200; 40.029261ms)
Dec 21 13:50:37.007: INFO: (0) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:162/proxy/: bar (200; 40.508811ms)
Dec 21 13:50:37.007: INFO: (0) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:1080/proxy/: test<... (200; 40.294197ms)
Dec 21 13:50:37.007: INFO: (0) /api/v1/namespaces/proxy-7757/services/proxy-service-4dw9f:portname2/proxy/: bar (200; 40.444155ms)
Dec 21 13:50:37.007: INFO: (0) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:160/proxy/: foo (200; 40.781642ms)
Dec 21 13:50:37.017: INFO: (0) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:462/proxy/: tls qux (200; 50.160051ms)
Dec 21 13:50:37.017: INFO: (0) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:460/proxy/: tls baz (200; 50.01096ms)
Dec 21 13:50:37.017: INFO: (0) /api/v1/namespaces/proxy-7757/services/https:proxy-service-4dw9f:tlsportname1/proxy/: tls baz (200; 50.888715ms)
Dec 21 13:50:37.018: INFO: (0) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:443/proxy/: [HTML response body stripped in extraction; merged tail of a later log entry follows] test<... (200; 16.552929ms)
Dec 21 13:50:37.038: INFO: (1) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:160/proxy/: foo (200; 18.732745ms)
Dec 21 13:50:37.039: INFO: (1) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:460/proxy/: tls baz (200; 19.407354ms)
Dec 21 13:50:37.039: INFO: (1) /api/v1/namespaces/proxy-7757/services/proxy-service-4dw9f:portname2/proxy/: bar (200; 19.322761ms)
Dec 21 13:50:37.039: INFO: (1) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:462/proxy/: tls qux (200; 19.820008ms)
Dec 21 13:50:37.040: INFO: (1) /api/v1/namespaces/proxy-7757/services/https:proxy-service-4dw9f:tlsportname1/proxy/: tls baz (200; 20.170672ms)
Dec 21 13:50:37.040: INFO: (1) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:1080/proxy/: ... (200; 20.057903ms)
Dec 21 13:50:37.040: INFO: (1) /api/v1/namespaces/proxy-7757/services/http:proxy-service-4dw9f:portname2/proxy/: bar (200; 20.135779ms)
Dec 21 13:50:37.040: INFO: (1) /api/v1/namespaces/proxy-7757/services/proxy-service-4dw9f:portname1/proxy/: foo (200; 20.045749ms)
Dec 21 13:50:37.040: INFO: (1) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:160/proxy/: foo (200; 20.077001ms)
Dec 21 13:50:37.040: INFO: (1) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6/proxy/: test (200; 20.366055ms)
Dec 21 13:50:37.041: INFO: (1) /api/v1/namespaces/proxy-7757/services/http:proxy-service-4dw9f:portname1/proxy/: foo (200; 21.385327ms)
Dec 21 13:50:37.042: INFO: (1) /api/v1/namespaces/proxy-7757/services/https:proxy-service-4dw9f:tlsportname2/proxy/: tls qux (200; 21.95549ms)
Dec 21 13:50:37.050: INFO: (2) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:443/proxy/: [HTML response body stripped in extraction; merged tail of a later log entry follows] test (200; 7.916631ms)
Dec 21 13:50:37.051: INFO: (2) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:1080/proxy/: ... (200; 8.932955ms)
Dec 21 13:50:37.052: INFO: (2) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:460/proxy/: tls baz (200; 9.838738ms)
Dec 21 13:50:37.052: INFO: (2) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:160/proxy/: foo (200; 9.870549ms)
Dec 21 13:50:37.052: INFO: (2) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:162/proxy/: bar (200; 10.016437ms)
Dec 21 13:50:37.052: INFO: (2) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:160/proxy/: foo (200; 9.971791ms)
Dec 21 13:50:37.052: INFO: (2) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:1080/proxy/: test<... (200; 10.076392ms)
Dec 21 13:50:37.052: INFO: (2) /api/v1/namespaces/proxy-7757/services/proxy-service-4dw9f:portname1/proxy/: foo (200; 10.291214ms)
Dec 21 13:50:37.053: INFO: (2) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:162/proxy/: bar (200; 11.184454ms)
Dec 21 13:50:37.053: INFO: (2) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:462/proxy/: tls qux (200; 11.436061ms)
Dec 21 13:50:37.057: INFO: (2) /api/v1/namespaces/proxy-7757/services/proxy-service-4dw9f:portname2/proxy/: bar (200; 15.774482ms)
Dec 21 13:50:37.058: INFO: (2) /api/v1/namespaces/proxy-7757/services/https:proxy-service-4dw9f:tlsportname2/proxy/: tls qux (200; 15.88245ms)
Dec 21 13:50:37.058: INFO: (2) /api/v1/namespaces/proxy-7757/services/http:proxy-service-4dw9f:portname1/proxy/: foo (200; 16.060194ms)
Dec 21 13:50:37.058: INFO: (2) /api/v1/namespaces/proxy-7757/services/https:proxy-service-4dw9f:tlsportname1/proxy/: tls baz (200; 16.401227ms)
Dec 21 13:50:37.058: INFO: (2) /api/v1/namespaces/proxy-7757/services/http:proxy-service-4dw9f:portname2/proxy/: bar (200; 16.561537ms)
Dec 21 13:50:37.064: INFO: (3) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:443/proxy/: [HTML response body stripped in extraction; merged tail of a later log entry follows] ... (200; 5.192686ms)
Dec 21 13:50:37.067: INFO: (3) /api/v1/namespaces/proxy-7757/services/http:proxy-service-4dw9f:portname2/proxy/: bar (200; 8.226596ms)
Dec 21 13:50:37.067: INFO: (3) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:462/proxy/: tls qux (200; 8.123228ms)
Dec 21 13:50:37.067: INFO: (3) /api/v1/namespaces/proxy-7757/services/https:proxy-service-4dw9f:tlsportname1/proxy/: tls baz (200; 8.948048ms)
Dec 21 13:50:37.067: INFO: (3) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:160/proxy/: foo (200; 8.932992ms)
Dec 21 13:50:37.067: INFO: (3) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:162/proxy/: bar (200; 8.880115ms)
Dec 21 13:50:37.067: INFO: (3) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:460/proxy/: tls baz (200; 8.949809ms)
Dec 21 13:50:37.068: INFO: (3) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:162/proxy/: bar (200; 9.14929ms)
Dec 21 13:50:37.068: INFO: (3) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:1080/proxy/: test<... (200; 9.300731ms)
Dec 21 13:50:37.068: INFO: (3) /api/v1/namespaces/proxy-7757/services/http:proxy-service-4dw9f:portname1/proxy/: foo (200; 9.494709ms)
Dec 21 13:50:37.068: INFO: (3) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:160/proxy/: foo (200; 10.032392ms)
Dec 21 13:50:37.069: INFO: (3) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6/proxy/: test (200; 10.530388ms)
Dec 21 13:50:37.070: INFO: (3) /api/v1/namespaces/proxy-7757/services/proxy-service-4dw9f:portname2/proxy/: bar (200; 11.292423ms)
Dec 21 13:50:37.070: INFO: (3) /api/v1/namespaces/proxy-7757/services/proxy-service-4dw9f:portname1/proxy/: foo (200; 11.322538ms)
Dec 21 13:50:37.071: INFO: (3) /api/v1/namespaces/proxy-7757/services/https:proxy-service-4dw9f:tlsportname2/proxy/: tls qux (200; 12.462652ms)
Dec 21 13:50:37.080: INFO: (4) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6/proxy/: test (200; 8.596367ms)
Dec 21 13:50:37.080: INFO: (4) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:160/proxy/: foo (200; 8.518282ms)
Dec 21 13:50:37.080: INFO: (4) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:462/proxy/: tls qux (200; 8.729744ms)
Dec 21 13:50:37.080: INFO: (4) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:1080/proxy/: ... (200; 8.863695ms)
Dec 21 13:50:37.080: INFO: (4) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:460/proxy/: tls baz (200; 9.159401ms)
Dec 21 13:50:37.081: INFO: (4) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:162/proxy/: bar (200; 10.157052ms)
Dec 21 13:50:37.082: INFO: (4) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:160/proxy/: foo (200; 11.040892ms)
Dec 21 13:50:37.082: INFO: (4) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:162/proxy/: bar (200; 11.245563ms)
Dec 21 13:50:37.082: INFO: (4) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:443/proxy/: [HTML response body stripped in extraction; merged tail of a later log entry follows] test<... (200; 11.250221ms)
Dec 21 13:50:37.083: INFO: (4) /api/v1/namespaces/proxy-7757/services/https:proxy-service-4dw9f:tlsportname2/proxy/: tls qux (200; 12.180134ms)
Dec 21 13:50:37.085: INFO: (4) /api/v1/namespaces/proxy-7757/services/https:proxy-service-4dw9f:tlsportname1/proxy/: tls baz (200; 13.776639ms)
Dec 21 13:50:37.085: INFO: (4) /api/v1/namespaces/proxy-7757/services/proxy-service-4dw9f:portname1/proxy/: foo (200; 14.065532ms)
Dec 21 13:50:37.086: INFO: (4) /api/v1/namespaces/proxy-7757/services/http:proxy-service-4dw9f:portname2/proxy/: bar (200; 14.451811ms)
Dec 21 13:50:37.087: INFO: (4) /api/v1/namespaces/proxy-7757/services/http:proxy-service-4dw9f:portname1/proxy/: foo (200; 15.719184ms)
Dec 21 13:50:37.088: INFO: (4) /api/v1/namespaces/proxy-7757/services/proxy-service-4dw9f:portname2/proxy/: bar (200; 16.660147ms)
Dec 21 13:50:37.091: INFO: (5) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:460/proxy/: tls baz (200; 3.500235ms)
Dec 21 13:50:37.091: INFO: (5) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:162/proxy/: bar (200; 3.62893ms)
Dec 21 13:50:37.092: INFO: (5) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:160/proxy/: foo (200; 4.064223ms)
Dec 21 13:50:37.092: INFO: (5) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:1080/proxy/: ... (200; 4.195593ms)
Dec 21 13:50:37.093: INFO: (5) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:1080/proxy/: test<... (200; 4.917063ms)
Dec 21 13:50:37.095: INFO: (5) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:462/proxy/: tls qux (200; 6.828907ms)
Dec 21 13:50:37.099: INFO: (5) /api/v1/namespaces/proxy-7757/services/http:proxy-service-4dw9f:portname1/proxy/: foo (200; 11.525352ms)
Dec 21 13:50:37.100: INFO: (5) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:160/proxy/: foo (200; 11.496082ms)
Dec 21 13:50:37.100: INFO: (5) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:162/proxy/: bar (200; 11.503342ms)
Dec 21 13:50:37.100: INFO: (5) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6/proxy/: test (200; 11.628393ms)
Dec 21 13:50:37.102: INFO: (5) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:443/proxy/: [HTML response body stripped in extraction; merged tail of a later log entry follows] test (200; 18.156954ms)
Dec 21 13:50:37.133: INFO: (6) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:162/proxy/: bar (200; 18.977224ms)
Dec 21 13:50:37.134: INFO: (6) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:162/proxy/: bar (200; 19.668797ms)
Dec 21 13:50:37.134: INFO: (6) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:462/proxy/: tls qux (200; 19.785217ms)
Dec 21 13:50:37.134: INFO: (6) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:1080/proxy/: ... (200; 19.734099ms)
Dec 21 13:50:37.135: INFO: (6) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:1080/proxy/: test<... (200; 20.255047ms)
Dec 21 13:50:37.135: INFO: (6) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:443/proxy/: [HTML response body stripped in extraction; merged tail of a later log entry follows] ... (200; 19.395515ms)
Dec 21 13:50:37.165: INFO: (7) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6/proxy/: test (200; 25.400349ms)
Dec 21 13:50:37.165: INFO: (7) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:1080/proxy/: test<... (200; 25.696553ms)
Dec 21 13:50:37.165: INFO: (7) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:462/proxy/: tls qux (200; 25.704817ms)
Dec 21 13:50:37.166: INFO: (7) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:162/proxy/: bar (200; 26.347567ms)
Dec 21 13:50:37.166: INFO: (7) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:160/proxy/: foo (200; 26.534683ms)
Dec 21 13:50:37.166: INFO: (7) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:162/proxy/: bar (200; 26.522944ms)
Dec 21 13:50:37.166: INFO: (7) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:160/proxy/: foo (200; 26.583845ms)
Dec 21 13:50:37.166: INFO: (7) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:443/proxy/: [HTML response body stripped in extraction; merged tail of a later log entry follows] ... (200; 8.152064ms)
Dec 21 13:50:37.189: INFO: (8) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:462/proxy/: tls qux (200; 10.001451ms)
Dec 21 13:50:37.189: INFO: (8) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6/proxy/: test (200; 10.061225ms)
Dec 21 13:50:37.189: INFO: (8) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:1080/proxy/: test<... (200; 10.439448ms)
Dec 21 13:50:37.190: INFO: (8) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:162/proxy/: bar (200; 10.965728ms)
Dec 21 13:50:37.190: INFO: (8) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:443/proxy/: [HTML response body stripped in extraction; merged tail of a later log entry follows] test (200; 10.459667ms)
Dec 21 13:50:37.214: INFO: (9) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:462/proxy/: tls qux (200; 10.925653ms)
Dec 21 13:50:37.214: INFO: (9) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:443/proxy/: [HTML response body stripped in extraction; merged tail of a later log entry follows] test<... (200; 10.690569ms)
Dec 21 13:50:37.214: INFO: (9) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:1080/proxy/: ... (200; 10.845406ms)
Dec 21 13:50:37.214: INFO: (9) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:160/proxy/: foo (200; 10.811151ms)
Dec 21 13:50:37.214: INFO: (9) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:162/proxy/: bar (200; 10.962158ms)
Dec 21 13:50:37.214: INFO: (9) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:460/proxy/: tls baz (200; 10.865193ms)
Dec 21 13:50:37.215: INFO: (9) /api/v1/namespaces/proxy-7757/services/https:proxy-service-4dw9f:tlsportname1/proxy/: tls baz (200; 11.911428ms)
Dec 21 13:50:37.215: INFO: (9) /api/v1/namespaces/proxy-7757/services/http:proxy-service-4dw9f:portname1/proxy/: foo (200; 11.713513ms)
Dec 21 13:50:37.215: INFO: (9) /api/v1/namespaces/proxy-7757/services/proxy-service-4dw9f:portname2/proxy/: bar (200; 11.753154ms)
Dec 21 13:50:37.215: INFO: (9) /api/v1/namespaces/proxy-7757/services/proxy-service-4dw9f:portname1/proxy/: foo (200; 11.742402ms)
Dec 21 13:50:37.215: INFO: (9) /api/v1/namespaces/proxy-7757/services/http:proxy-service-4dw9f:portname2/proxy/: bar (200; 11.861977ms)
Dec 21 13:50:37.216: INFO: (9) /api/v1/namespaces/proxy-7757/services/https:proxy-service-4dw9f:tlsportname2/proxy/: tls qux (200; 13.400808ms)
Dec 21 13:50:37.226: INFO: (10) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:160/proxy/: foo (200; 9.355886ms)
Dec 21 13:50:37.226: INFO: (10) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:1080/proxy/: test<... (200; 9.322476ms)
Dec 21 13:50:37.229: INFO: (10) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:162/proxy/: bar (200; 12.426211ms)
Dec 21 13:50:37.229: INFO: (10) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:1080/proxy/: ... (200; 12.631649ms)
Dec 21 13:50:37.229: INFO: (10) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:443/proxy/: [HTML response body stripped in extraction; merged tail of a later log entry follows] test (200; 12.845263ms)
Dec 21 13:50:37.234: INFO: (10) /api/v1/namespaces/proxy-7757/services/https:proxy-service-4dw9f:tlsportname1/proxy/: tls baz (200; 17.116057ms)
Dec 21 13:50:37.234: INFO: (10) /api/v1/namespaces/proxy-7757/services/http:proxy-service-4dw9f:portname2/proxy/: bar (200; 17.257606ms)
Dec 21 13:50:37.234: INFO: (10) /api/v1/namespaces/proxy-7757/services/proxy-service-4dw9f:portname1/proxy/: foo (200; 17.22905ms)
Dec 21 13:50:37.234: INFO: (10) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:162/proxy/: bar (200; 17.205211ms)
Dec 21 13:50:37.234: INFO: (10) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:462/proxy/: tls qux (200; 17.256343ms)
Dec 21 13:50:37.234: INFO: (10) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:160/proxy/: foo (200; 17.22871ms)
Dec 21 13:50:37.235: INFO: (10) /api/v1/namespaces/proxy-7757/services/proxy-service-4dw9f:portname2/proxy/: bar (200; 19.000822ms)
Dec 21 13:50:37.236: INFO: (10) /api/v1/namespaces/proxy-7757/services/https:proxy-service-4dw9f:tlsportname2/proxy/: tls qux (200; 19.554169ms)
Dec 21 13:50:37.237: INFO: (10) /api/v1/namespaces/proxy-7757/services/http:proxy-service-4dw9f:portname1/proxy/: foo (200; 20.245808ms)
Dec 21 13:50:37.237: INFO: (10) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:460/proxy/: tls baz (200; 20.238837ms)
Dec 21 13:50:37.248: INFO: (11) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:160/proxy/: foo (200; 11.279729ms)
Dec 21 13:50:37.249: INFO: (11) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:162/proxy/: bar (200; 12.352313ms)
Dec 21 13:50:37.250: INFO: (11) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:462/proxy/: tls qux (200; 12.404707ms)
Dec 21 13:50:37.250: INFO: (11) /api/v1/namespaces/proxy-7757/services/https:proxy-service-4dw9f:tlsportname2/proxy/: tls qux (200; 12.194256ms)
Dec 21 13:50:37.249: INFO: (11) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:460/proxy/: tls baz (200; 12.610899ms)
Dec 21 13:50:37.250: INFO: (11) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:1080/proxy/: test<... (200; 12.439315ms)
Dec 21 13:50:37.250: INFO: (11) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6/proxy/: test (200; 12.475754ms)
Dec 21 13:50:37.250: INFO: (11) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:160/proxy/: foo (200; 12.417784ms)
Dec 21 13:50:37.250: INFO: (11) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:1080/proxy/: ... (200; 12.577618ms)
Dec 21 13:50:37.250: INFO: (11) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:443/proxy/: [HTML response body stripped in extraction; merged tail of a later log entry follows] test (200; 8.350188ms)
Dec 21 13:50:37.261: INFO: (12) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:1080/proxy/: ... (200; 9.287586ms)
Dec 21 13:50:37.262: INFO: (12) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:462/proxy/: tls qux (200; 9.527425ms)
Dec 21 13:50:37.262: INFO: (12) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:1080/proxy/: test<... (200; 9.775553ms)
Dec 21 13:50:37.262: INFO: (12) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:160/proxy/: foo (200; 9.914993ms)
Dec 21 13:50:37.262: INFO: (12) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:460/proxy/: tls baz (200; 9.964249ms)
Dec 21 13:50:37.264: INFO: (12) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:443/proxy/: [HTML response body stripped in extraction; merged tail of a later log entry follows] ... (200; 13.833726ms)
Dec 21 13:50:37.288: INFO: (13) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:1080/proxy/: test<... (200; 14.369145ms)
Dec 21 13:50:37.288: INFO: (13) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6/proxy/: test (200; 14.392442ms)
Dec 21 13:50:37.288: INFO: (13) /api/v1/namespaces/proxy-7757/services/proxy-service-4dw9f:portname2/proxy/: bar (200; 14.315686ms)
Dec 21 13:50:37.288: INFO: (13) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:162/proxy/: bar (200; 14.617371ms)
Dec 21 13:50:37.288: INFO: (13) /api/v1/namespaces/proxy-7757/services/proxy-service-4dw9f:portname1/proxy/: foo (200; 14.594428ms)
Dec 21 13:50:37.288: INFO: (13) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:443/proxy/: [HTML response body stripped in extraction; merged tail of a later log entry follows] test<... (200; 13.066391ms)
Dec 21 13:50:37.303: INFO: (14) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:160/proxy/: foo (200; 13.155497ms)
Dec 21 13:50:37.303: INFO: (14) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:160/proxy/: foo (200; 13.313496ms)
Dec 21 13:50:37.303: INFO: (14) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:443/proxy/: [HTML response body stripped in extraction; merged tail of a later log entry follows] test (200; 13.658155ms)
Dec 21 13:50:37.303: INFO: (14) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:460/proxy/: tls baz (200; 14.007535ms)
Dec 21 13:50:37.304: INFO: (14) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:162/proxy/: bar (200; 14.081376ms)
Dec 21 13:50:37.304: INFO: (14) /api/v1/namespaces/proxy-7757/services/http:proxy-service-4dw9f:portname2/proxy/: bar (200; 14.684023ms)
Dec 21 13:50:37.304: INFO: (14) /api/v1/namespaces/proxy-7757/services/proxy-service-4dw9f:portname1/proxy/: foo (200; 14.823356ms)
Dec 21 13:50:37.304: INFO: (14) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:1080/proxy/: ... (200; 14.796698ms)
Dec 21 13:50:37.304: INFO: (14) /api/v1/namespaces/proxy-7757/services/http:proxy-service-4dw9f:portname1/proxy/: foo (200; 14.938217ms)
Dec 21 13:50:37.308: INFO: (14) /api/v1/namespaces/proxy-7757/services/proxy-service-4dw9f:portname2/proxy/: bar (200; 18.472283ms)
Dec 21 13:50:37.309: INFO: (14) /api/v1/namespaces/proxy-7757/services/https:proxy-service-4dw9f:tlsportname1/proxy/: tls baz (200; 19.018358ms)
Dec 21 13:50:37.309: INFO: (14) /api/v1/namespaces/proxy-7757/services/https:proxy-service-4dw9f:tlsportname2/proxy/: tls qux (200; 19.013949ms)
Dec 21 13:50:37.314: INFO: (15) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:462/proxy/: tls qux (200; 5.296464ms)
Dec 21 13:50:37.314: INFO: (15) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:162/proxy/: bar (200; 5.71451ms)
Dec 21 13:50:37.315: INFO: (15) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:443/proxy/: [HTML response body stripped in extraction; merged tail of a later log entry follows] test<... (200; 12.950732ms)
Dec 21 13:50:37.322: INFO: (15) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:1080/proxy/: ... (200; 13.476184ms)
Dec 21 13:50:37.323: INFO: (15) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6/proxy/: test (200; 14.225378ms)
Dec 21 13:50:37.323: INFO: (15) /api/v1/namespaces/proxy-7757/services/proxy-service-4dw9f:portname1/proxy/: foo (200; 14.409114ms)
Dec 21 13:50:37.323: INFO: (15) /api/v1/namespaces/proxy-7757/services/https:proxy-service-4dw9f:tlsportname1/proxy/: tls baz (200; 14.306042ms)
Dec 21 13:50:37.323: INFO: (15) /api/v1/namespaces/proxy-7757/services/proxy-service-4dw9f:portname2/proxy/: bar (200; 14.341041ms)
Dec 21 13:50:37.323: INFO: (15) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:160/proxy/: foo (200; 14.379138ms)
Dec 21 13:50:37.323: INFO: (15) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:460/proxy/: tls baz (200; 14.493383ms)
Dec 21 13:50:37.323: INFO: (15) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:162/proxy/: bar (200; 14.5971ms)
Dec 21 13:50:37.324: INFO: (15) /api/v1/namespaces/proxy-7757/services/https:proxy-service-4dw9f:tlsportname2/proxy/: tls qux (200; 15.225024ms)
Dec 21 13:50:37.324: INFO: (15) /api/v1/namespaces/proxy-7757/services/http:proxy-service-4dw9f:portname1/proxy/: foo (200; 15.376842ms)
Dec 21 13:50:37.324: INFO: (15) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:160/proxy/: foo (200; 15.388926ms)
Dec 21 13:50:37.324: INFO: (15) /api/v1/namespaces/proxy-7757/services/http:proxy-service-4dw9f:portname2/proxy/: bar (200; 15.59499ms)
Dec 21 13:50:37.334: INFO: (16) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:160/proxy/: foo (200; 9.397083ms)
Dec 21 13:50:37.337: INFO: (16) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:1080/proxy/: ... (200; 12.411056ms)
Dec 21 13:50:37.337: INFO: (16) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:462/proxy/: tls qux (200; 12.477279ms)
Dec 21 13:50:37.337: INFO: (16) /api/v1/namespaces/proxy-7757/services/proxy-service-4dw9f:portname1/proxy/: foo (200; 12.749325ms)
Dec 21 13:50:37.337: INFO: (16) /api/v1/namespaces/proxy-7757/services/http:proxy-service-4dw9f:portname1/proxy/: foo (200; 12.643139ms)
Dec 21 13:50:37.337: INFO: (16) /api/v1/namespaces/proxy-7757/services/https:proxy-service-4dw9f:tlsportname2/proxy/: tls qux (200; 12.591871ms)
Dec 21 13:50:37.338: INFO: (16) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6/proxy/: test (200; 13.50723ms)
Dec 21 13:50:37.338: INFO: (16) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:460/proxy/: tls baz (200; 13.710604ms)
Dec 21 13:50:37.338: INFO: (16) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:443/proxy/: [HTML response body stripped in extraction; merged tail of a later log entry follows] test<... (200; 13.719447ms)
Dec 21 13:50:37.339: INFO: (16) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:162/proxy/: bar (200; 14.14565ms)
Dec 21 13:50:37.339: INFO: (16) /api/v1/namespaces/proxy-7757/services/proxy-service-4dw9f:portname2/proxy/: bar (200; 14.37886ms)
Dec 21 13:50:37.339: INFO: (16) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:160/proxy/: foo (200; 14.185468ms)
Dec 21 13:50:37.359: INFO: (17) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:162/proxy/: bar (200; 20.227439ms)
Dec 21 13:50:37.360: INFO: (17) /api/v1/namespaces/proxy-7757/services/proxy-service-4dw9f:portname1/proxy/: foo (200; 20.686285ms)
Dec 21 13:50:37.360: INFO: (17) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6/proxy/: test (200; 20.945304ms)
Dec 21 13:50:37.362: INFO: (17) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:160/proxy/: foo (200; 22.924511ms)
Dec 21 13:50:37.363: INFO: (17) /api/v1/namespaces/proxy-7757/services/proxy-service-4dw9f:portname2/proxy/: bar (200; 23.450591ms)
Dec 21 13:50:37.363: INFO: (17) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:160/proxy/: foo (200; 23.454859ms)
Dec 21 13:50:37.363: INFO: (17) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:443/proxy/: [HTML response body stripped in extraction; merged tail of a later log entry follows] ... (200; 23.431193ms)
Dec 21 13:50:37.363: INFO: (17) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:1080/proxy/: test<... (200; 23.564136ms)
Dec 21 13:50:37.363: INFO: (17) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:162/proxy/: bar (200; 23.788438ms)
Dec 21 13:50:37.363: INFO: (17) /api/v1/namespaces/proxy-7757/services/http:proxy-service-4dw9f:portname2/proxy/: bar (200; 23.582241ms)
Dec 21 13:50:37.363: INFO: (17) /api/v1/namespaces/proxy-7757/services/https:proxy-service-4dw9f:tlsportname1/proxy/: tls baz (200; 23.869252ms)
Dec 21 13:50:37.363: INFO: (17) /api/v1/namespaces/proxy-7757/services/https:proxy-service-4dw9f:tlsportname2/proxy/: tls qux (200; 24.364058ms)
Dec 21 13:50:37.363: INFO: (17) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:462/proxy/: tls qux (200; 24.328537ms)
Dec 21 13:50:37.363: INFO: (17) /api/v1/namespaces/proxy-7757/services/http:proxy-service-4dw9f:portname1/proxy/: foo (200; 24.293229ms)
Dec 21 13:50:37.374: INFO: (18) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:160/proxy/: foo (200; 9.26013ms)
Dec 21 13:50:37.374: INFO: (18) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:162/proxy/: bar (200; 10.077325ms)
Dec 21 13:50:37.376: INFO: (18) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:160/proxy/: foo (200; 10.984753ms)
Dec 21 13:50:37.376: INFO: (18) /api/v1/namespaces/proxy-7757/services/http:proxy-service-4dw9f:portname2/proxy/: bar (200; 11.865721ms)
Dec 21 13:50:37.380: INFO: (18) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6/proxy/: test (200; 16.392523ms)
Dec 21 13:50:37.380: INFO: (18) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:162/proxy/: bar (200; 16.624936ms)
Dec 21 13:50:37.380: INFO: (18) /api/v1/namespaces/proxy-7757/services/proxy-service-4dw9f:portname1/proxy/: foo (200; 15.987548ms)
Dec 21 13:50:37.380: INFO: (18) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:443/proxy/: [HTML response body stripped in extraction; merged tail of a later log entry follows] ... (200; 16.42638ms)
Dec 21 13:50:37.381: INFO: (18) /api/v1/namespaces/proxy-7757/services/proxy-service-4dw9f:portname2/proxy/: bar (200; 16.648825ms)
Dec 21 13:50:37.381: INFO: (18) /api/v1/namespaces/proxy-7757/services/http:proxy-service-4dw9f:portname1/proxy/: foo (200; 16.736295ms)
Dec 21 13:50:37.381: INFO: (18) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:460/proxy/: tls baz (200; 16.882931ms)
Dec 21 13:50:37.381: INFO: (18) /api/v1/namespaces/proxy-7757/services/https:proxy-service-4dw9f:tlsportname1/proxy/: tls baz (200; 16.645257ms)
Dec 21 13:50:37.381: INFO: (18) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:1080/proxy/: test<... (200; 16.532734ms)
Dec 21 13:50:37.382: INFO: (18) /api/v1/namespaces/proxy-7757/services/https:proxy-service-4dw9f:tlsportname2/proxy/: tls qux (200; 18.384852ms)
Dec 21 13:50:37.382: INFO: (18) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:462/proxy/: tls qux (200; 18.261284ms)
Dec 21 13:50:37.390: INFO: (19) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:160/proxy/: foo (200; 7.704212ms)
Dec 21 13:50:37.392: INFO: (19) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:162/proxy/: bar (200; 9.924238ms)
Dec 21 13:50:37.392: INFO: (19) /api/v1/namespaces/proxy-7757/services/proxy-service-4dw9f:portname1/proxy/: foo (200; 10.067289ms)
Dec 21 13:50:37.392: INFO: (19) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6/proxy/: test (200; 10.202721ms)
Dec 21 13:50:37.393: INFO: (19) /api/v1/namespaces/proxy-7757/services/https:proxy-service-4dw9f:tlsportname1/proxy/: tls baz (200; 10.790891ms)
Dec 21 13:50:37.393: INFO: (19) /api/v1/namespaces/proxy-7757/services/proxy-service-4dw9f:portname2/proxy/: bar (200; 10.780236ms)
Dec 21 13:50:37.393: INFO: (19) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:1080/proxy/: ... (200; 11.117651ms)
Dec 21 13:50:37.394: INFO: (19) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:1080/proxy/: test<... (200; 11.440325ms)
Dec 21 13:50:37.394: INFO: (19) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:460/proxy/: tls baz (200; 11.787922ms)
Dec 21 13:50:37.394: INFO: (19) /api/v1/namespaces/proxy-7757/pods/http:proxy-service-4dw9f-2lph6:160/proxy/: foo (200; 11.604989ms)
Dec 21 13:50:37.394: INFO: (19) /api/v1/namespaces/proxy-7757/pods/proxy-service-4dw9f-2lph6:162/proxy/: bar (200; 11.832043ms)
Dec 21 13:50:37.394: INFO: (19) /api/v1/namespaces/proxy-7757/pods/https:proxy-service-4dw9f-2lph6:443/proxy/: ...
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:50:49.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-5817, will wait for the garbage collector to delete the pods
Dec 21 13:50:59.524: INFO: Deleting Job.batch foo took: 11.389125ms
Dec 21 13:50:59.625: INFO: Terminating Job.batch foo pods took: 100.443136ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:51:35.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5817" for this suite.
Dec 21 13:51:41.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:51:41.157: INFO: namespace job-5817 deletion completed in 6.122520234s

• [SLOW TEST:51.975 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
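The deletion flow above (delete the Job, then let the garbage collector reap its pods) can be reproduced by hand; a minimal sketch, assuming the same namespace and Job name:

  # Delete the Job; with the default propagation policy the garbage
  # collector deletes the owned pods
  kubectl delete job foo --namespace=job-5817
  # Watch the pods owned by the Job disappear
  kubectl get pods --namespace=job-5817 --watch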
------------------------------
SS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:51:41.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9507.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9507.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9507.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9507.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9507.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9507.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 21 13:51:53.428: INFO: Unable to read wheezy_udp@PodARecord from pod dns-9507/dns-test-bd27e68a-e20b-46f5-b640-ab7b7fcdbe84: the server could not find the requested resource (get pods dns-test-bd27e68a-e20b-46f5-b640-ab7b7fcdbe84)
Dec 21 13:51:53.433: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-9507/dns-test-bd27e68a-e20b-46f5-b640-ab7b7fcdbe84: the server could not find the requested resource (get pods dns-test-bd27e68a-e20b-46f5-b640-ab7b7fcdbe84)
Dec 21 13:51:53.437: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-9507.svc.cluster.local from pod dns-9507/dns-test-bd27e68a-e20b-46f5-b640-ab7b7fcdbe84: the server could not find the requested resource (get pods dns-test-bd27e68a-e20b-46f5-b640-ab7b7fcdbe84)
Dec 21 13:51:53.443: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-9507/dns-test-bd27e68a-e20b-46f5-b640-ab7b7fcdbe84: the server could not find the requested resource (get pods dns-test-bd27e68a-e20b-46f5-b640-ab7b7fcdbe84)
Dec 21 13:51:53.448: INFO: Unable to read jessie_udp@PodARecord from pod dns-9507/dns-test-bd27e68a-e20b-46f5-b640-ab7b7fcdbe84: the server could not find the requested resource (get pods dns-test-bd27e68a-e20b-46f5-b640-ab7b7fcdbe84)
Dec 21 13:51:53.452: INFO: Unable to read jessie_tcp@PodARecord from pod dns-9507/dns-test-bd27e68a-e20b-46f5-b640-ab7b7fcdbe84: the server could not find the requested resource (get pods dns-test-bd27e68a-e20b-46f5-b640-ab7b7fcdbe84)
Dec 21 13:51:53.452: INFO: Lookups using dns-9507/dns-test-bd27e68a-e20b-46f5-b640-ab7b7fcdbe84 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-9507.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 21 13:51:58.529: INFO: DNS probes using dns-9507/dns-test-bd27e68a-e20b-46f5-b640-ab7b7fcdbe84 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:51:58.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9507" for this suite.
Dec 21 13:52:04.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:52:04.779: INFO: namespace dns-9507 deletion completed in 6.189935397s

• [SLOW TEST:23.622 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
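The probes above rely on kubelet-managed /etc/hosts entries inside the pod; a minimal manual check, where <dns-test-pod> is a placeholder for a running probe pod like the dns-test pod above (the hostname queried is taken from the probe commands):

  # Resolve the pod hostname through /etc/hosts inside the pod
  kubectl exec -n dns-9507 <dns-test-pod> -- getent hosts dns-querier-1.dns-test-service.dns-9507.svc.cluster.local
  # Inspect the kubelet-written hosts file directly
  kubectl exec -n dns-9507 <dns-test-pod> -- cat /etc/hosts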
------------------------------
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:52:04.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 21 13:52:04.941: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Dec 21 13:52:04.964: INFO: Number of nodes with available pods: 0
Dec 21 13:52:04.964: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:52:07.144: INFO: Number of nodes with available pods: 0
Dec 21 13:52:07.144: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:52:07.978: INFO: Number of nodes with available pods: 0
Dec 21 13:52:07.978: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:52:08.979: INFO: Number of nodes with available pods: 0
Dec 21 13:52:08.979: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:52:09.987: INFO: Number of nodes with available pods: 0
Dec 21 13:52:09.987: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:52:11.765: INFO: Number of nodes with available pods: 0
Dec 21 13:52:11.765: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:52:12.782: INFO: Number of nodes with available pods: 0
Dec 21 13:52:12.782: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:52:12.987: INFO: Number of nodes with available pods: 0
Dec 21 13:52:12.987: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:52:14.311: INFO: Number of nodes with available pods: 0
Dec 21 13:52:14.311: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:52:15.021: INFO: Number of nodes with available pods: 0
Dec 21 13:52:15.021: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:52:15.980: INFO: Number of nodes with available pods: 1
Dec 21 13:52:15.980: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:52:16.981: INFO: Number of nodes with available pods: 2
Dec 21 13:52:16.982: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Dec 21 13:52:17.034: INFO: Wrong image for pod: daemon-set-2jrc8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:17.034: INFO: Wrong image for pod: daemon-set-qbrll. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:18.426: INFO: Wrong image for pod: daemon-set-2jrc8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:18.426: INFO: Wrong image for pod: daemon-set-qbrll. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:19.145: INFO: Wrong image for pod: daemon-set-2jrc8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:19.145: INFO: Wrong image for pod: daemon-set-qbrll. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:20.146: INFO: Wrong image for pod: daemon-set-2jrc8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:20.146: INFO: Wrong image for pod: daemon-set-qbrll. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:21.137: INFO: Wrong image for pod: daemon-set-2jrc8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:21.137: INFO: Wrong image for pod: daemon-set-qbrll. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:22.141: INFO: Wrong image for pod: daemon-set-2jrc8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:22.141: INFO: Pod daemon-set-2jrc8 is not available
Dec 21 13:52:22.141: INFO: Wrong image for pod: daemon-set-qbrll. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:23.395: INFO: Pod daemon-set-kfn29 is not available
Dec 21 13:52:23.395: INFO: Wrong image for pod: daemon-set-qbrll. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:24.349: INFO: Pod daemon-set-kfn29 is not available
Dec 21 13:52:24.349: INFO: Wrong image for pod: daemon-set-qbrll. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:25.144: INFO: Pod daemon-set-kfn29 is not available
Dec 21 13:52:25.144: INFO: Wrong image for pod: daemon-set-qbrll. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:26.137: INFO: Pod daemon-set-kfn29 is not available
Dec 21 13:52:26.137: INFO: Wrong image for pod: daemon-set-qbrll. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:27.449: INFO: Pod daemon-set-kfn29 is not available
Dec 21 13:52:27.449: INFO: Wrong image for pod: daemon-set-qbrll. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:28.828: INFO: Pod daemon-set-kfn29 is not available
Dec 21 13:52:28.828: INFO: Wrong image for pod: daemon-set-qbrll. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:29.244: INFO: Pod daemon-set-kfn29 is not available
Dec 21 13:52:29.244: INFO: Wrong image for pod: daemon-set-qbrll. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:30.137: INFO: Pod daemon-set-kfn29 is not available
Dec 21 13:52:30.137: INFO: Wrong image for pod: daemon-set-qbrll. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:31.140: INFO: Wrong image for pod: daemon-set-qbrll. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:32.142: INFO: Wrong image for pod: daemon-set-qbrll. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:33.138: INFO: Wrong image for pod: daemon-set-qbrll. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:34.136: INFO: Wrong image for pod: daemon-set-qbrll. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:35.140: INFO: Wrong image for pod: daemon-set-qbrll. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:35.140: INFO: Pod daemon-set-qbrll is not available
Dec 21 13:52:36.152: INFO: Wrong image for pod: daemon-set-qbrll. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:36.152: INFO: Pod daemon-set-qbrll is not available
Dec 21 13:52:37.147: INFO: Wrong image for pod: daemon-set-qbrll. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:37.147: INFO: Pod daemon-set-qbrll is not available
Dec 21 13:52:38.151: INFO: Wrong image for pod: daemon-set-qbrll. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:38.151: INFO: Pod daemon-set-qbrll is not available
Dec 21 13:52:39.138: INFO: Wrong image for pod: daemon-set-qbrll. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:39.138: INFO: Pod daemon-set-qbrll is not available
Dec 21 13:52:40.141: INFO: Wrong image for pod: daemon-set-qbrll. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:40.141: INFO: Pod daemon-set-qbrll is not available
Dec 21 13:52:41.140: INFO: Wrong image for pod: daemon-set-qbrll. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:41.140: INFO: Pod daemon-set-qbrll is not available
Dec 21 13:52:42.135: INFO: Wrong image for pod: daemon-set-qbrll. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:42.135: INFO: Pod daemon-set-qbrll is not available
Dec 21 13:52:43.138: INFO: Wrong image for pod: daemon-set-qbrll. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:43.138: INFO: Pod daemon-set-qbrll is not available
Dec 21 13:52:44.138: INFO: Wrong image for pod: daemon-set-qbrll. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:44.138: INFO: Pod daemon-set-qbrll is not available
Dec 21 13:52:45.170: INFO: Wrong image for pod: daemon-set-qbrll. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:45.170: INFO: Pod daemon-set-qbrll is not available
Dec 21 13:52:46.154: INFO: Wrong image for pod: daemon-set-qbrll. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 13:52:46.155: INFO: Pod daemon-set-qbrll is not available
Dec 21 13:52:47.138: INFO: Pod daemon-set-hdxct is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Dec 21 13:52:47.155: INFO: Number of nodes with available pods: 1
Dec 21 13:52:47.155: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:52:48.270: INFO: Number of nodes with available pods: 1
Dec 21 13:52:48.270: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:52:49.168: INFO: Number of nodes with available pods: 1
Dec 21 13:52:49.168: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:52:50.203: INFO: Number of nodes with available pods: 1
Dec 21 13:52:50.203: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:52:51.228: INFO: Number of nodes with available pods: 1
Dec 21 13:52:51.229: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:52:52.166: INFO: Number of nodes with available pods: 1
Dec 21 13:52:52.166: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:52:53.167: INFO: Number of nodes with available pods: 1
Dec 21 13:52:53.167: INFO: Node iruya-node is running more than one daemon pod
Dec 21 13:52:54.861: INFO: Number of nodes with available pods: 2
Dec 21 13:52:54.861: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8765, will wait for the garbage collector to delete the pods
Dec 21 13:52:54.999: INFO: Deleting DaemonSet.extensions daemon-set took: 12.389643ms
Dec 21 13:52:55.299: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.265766ms
Dec 21 13:53:06.616: INFO: Number of nodes with available pods: 0
Dec 21 13:53:06.616: INFO: Number of running nodes: 0, number of available pods: 0
Dec 21 13:53:06.626: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8765/daemonsets","resourceVersion":"17519250"},"items":null}

Dec 21 13:53:06.679: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8765/pods","resourceVersion":"17519250"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:53:06.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8765" for this suite.
Dec 21 13:53:12.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:53:12.911: INFO: namespace daemonsets-8765 deletion completed in 6.144764113s

• [SLOW TEST:68.132 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
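The RollingUpdate exercised above amounts to patching the pod template image and waiting for the controller to replace pods node by node; a hand-run sketch, where the container name "app" is an assumption not taken from this log:

  # Update the DaemonSet pod template image (container name assumed)
  kubectl -n daemonsets-8765 set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
  # Wait until every node runs an updated, available pod
  kubectl -n daemonsets-8765 rollout status daemonset/daemon-set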
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:53:12.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 21 13:53:21.596: INFO: Successfully updated pod "annotationupdatebec4c57f-ff3c-41da-b277-6056b0eb9621"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:53:23.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6632" for this suite.
Dec 21 13:53:46.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:53:46.626: INFO: namespace projected-6632 deletion completed in 22.896922839s

• [SLOW TEST:33.716 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
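The update observed above works because the kubelet refreshes projected downwardAPI volume files after pod metadata changes; a sketch of triggering and observing it, where the mount path /etc/podinfo and the annotation key builder are assumptions, not taken from this log:

  # Change an annotation on the running pod (key is an assumed example)
  kubectl -n projected-6632 annotate pod annotationupdatebec4c57f-ff3c-41da-b277-6056b0eb9621 builder=foo --overwrite
  # The projected file eventually reflects the new value (kubelet sync period applies)
  kubectl -n projected-6632 exec annotationupdatebec4c57f-ff3c-41da-b277-6056b0eb9621 -- cat /etc/podinfo/annotations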
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:53:46.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5643
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Dec 21 13:53:47.023: INFO: Found 0 stateful pods, waiting for 3
Dec 21 13:53:57.400: INFO: Found 2 stateful pods, waiting for 3
Dec 21 13:54:07.033: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 13:54:07.033: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 13:54:07.033: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 21 13:54:17.030: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 13:54:17.030: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 13:54:17.030: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 13:54:17.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5643 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 21 13:54:19.458: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 21 13:54:19.458: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 21 13:54:19.459: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 21 13:54:29.514: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Dec 21 13:54:39.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5643 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 13:54:39.989: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 21 13:54:39.989: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 21 13:54:39.989: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 21 13:54:40.079: INFO: Waiting for StatefulSet statefulset-5643/ss2 to complete update
Dec 21 13:54:40.079: INFO: Waiting for Pod statefulset-5643/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 21 13:54:40.079: INFO: Waiting for Pod statefulset-5643/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 21 13:54:40.079: INFO: Waiting for Pod statefulset-5643/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 21 13:54:50.095: INFO: Waiting for StatefulSet statefulset-5643/ss2 to complete update
Dec 21 13:54:50.095: INFO: Waiting for Pod statefulset-5643/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 21 13:54:50.095: INFO: Waiting for Pod statefulset-5643/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 21 13:55:00.104: INFO: Waiting for StatefulSet statefulset-5643/ss2 to complete update
Dec 21 13:55:00.104: INFO: Waiting for Pod statefulset-5643/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 21 13:55:00.104: INFO: Waiting for Pod statefulset-5643/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 21 13:55:10.105: INFO: Waiting for StatefulSet statefulset-5643/ss2 to complete update
Dec 21 13:55:10.105: INFO: Waiting for Pod statefulset-5643/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 21 13:55:20.141: INFO: Waiting for StatefulSet statefulset-5643/ss2 to complete update
Dec 21 13:55:20.141: INFO: Waiting for Pod statefulset-5643/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Rolling back to a previous revision
Dec 21 13:55:30.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5643 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 21 13:55:30.914: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 21 13:55:30.914: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 21 13:55:30.914: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 21 13:55:31.014: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Dec 21 13:55:41.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5643 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 13:55:41.446: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 21 13:55:41.446: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 21 13:55:41.446: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 21 13:55:51.482: INFO: Waiting for StatefulSet statefulset-5643/ss2 to complete update
Dec 21 13:55:51.482: INFO: Waiting for Pod statefulset-5643/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 21 13:55:51.482: INFO: Waiting for Pod statefulset-5643/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 21 13:55:51.482: INFO: Waiting for Pod statefulset-5643/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 21 13:56:01.502: INFO: Waiting for StatefulSet statefulset-5643/ss2 to complete update
Dec 21 13:56:01.502: INFO: Waiting for Pod statefulset-5643/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 21 13:56:01.502: INFO: Waiting for Pod statefulset-5643/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 21 13:56:11.497: INFO: Waiting for StatefulSet statefulset-5643/ss2 to complete update
Dec 21 13:56:11.497: INFO: Waiting for Pod statefulset-5643/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 21 13:56:11.497: INFO: Waiting for Pod statefulset-5643/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 21 13:56:21.496: INFO: Waiting for StatefulSet statefulset-5643/ss2 to complete update
Dec 21 13:56:21.496: INFO: Waiting for Pod statefulset-5643/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 21 13:56:31.502: INFO: Waiting for StatefulSet statefulset-5643/ss2 to complete update
Dec 21 13:56:31.502: INFO: Waiting for Pod statefulset-5643/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 21 13:56:41.496: INFO: Waiting for StatefulSet statefulset-5643/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 21 13:56:51.494: INFO: Deleting all statefulset in ns statefulset-5643
Dec 21 13:56:51.498: INFO: Scaling statefulset ss2 to 0
Dec 21 13:57:31.533: INFO: Waiting for statefulset status.replicas updated to 0
Dec 21 13:57:31.538: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:57:31.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5643" for this suite.
Dec 21 13:57:39.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:57:39.756: INFO: namespace statefulset-5643 deletion completed in 8.15801599s

• [SLOW TEST:233.128 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
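The update and rollback above can be driven the same way from the command line; a minimal sketch, assuming the container in ss2 is named nginx (an assumption consistent with the images in this log):

  # Roll the template forward to the new image
  kubectl -n statefulset-5643 set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
  kubectl -n statefulset-5643 rollout status statefulset/ss2
  # Roll back to the previous revision
  kubectl -n statefulset-5643 rollout undo statefulset/ss2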
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:57:39.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 21 13:57:39.880: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0319ea0a-fb9a-42cd-ae4c-f3b3fb73ca0f" in namespace "projected-3504" to be "success or failure"
Dec 21 13:57:39.889: INFO: Pod "downwardapi-volume-0319ea0a-fb9a-42cd-ae4c-f3b3fb73ca0f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.843031ms
Dec 21 13:57:41.898: INFO: Pod "downwardapi-volume-0319ea0a-fb9a-42cd-ae4c-f3b3fb73ca0f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017609034s
Dec 21 13:57:43.913: INFO: Pod "downwardapi-volume-0319ea0a-fb9a-42cd-ae4c-f3b3fb73ca0f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032544873s
Dec 21 13:57:45.932: INFO: Pod "downwardapi-volume-0319ea0a-fb9a-42cd-ae4c-f3b3fb73ca0f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05161113s
Dec 21 13:57:47.944: INFO: Pod "downwardapi-volume-0319ea0a-fb9a-42cd-ae4c-f3b3fb73ca0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.063585988s
STEP: Saw pod success
Dec 21 13:57:47.944: INFO: Pod "downwardapi-volume-0319ea0a-fb9a-42cd-ae4c-f3b3fb73ca0f" satisfied condition "success or failure"
Dec 21 13:57:47.947: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-0319ea0a-fb9a-42cd-ae4c-f3b3fb73ca0f container client-container: 
STEP: delete the pod
Dec 21 13:57:48.067: INFO: Waiting for pod downwardapi-volume-0319ea0a-fb9a-42cd-ae4c-f3b3fb73ca0f to disappear
Dec 21 13:57:48.086: INFO: Pod downwardapi-volume-0319ea0a-fb9a-42cd-ae4c-f3b3fb73ca0f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:57:48.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3504" for this suite.
Dec 21 13:57:54.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:57:54.280: INFO: namespace projected-3504 deletion completed in 6.187862926s

• [SLOW TEST:14.522 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
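The expected value here is the node's allocatable memory, which the downward API substitutes for limits.memory when the container sets no limit; it can be read off the node object directly:

  # Allocatable memory the downward API falls back to on this node
  kubectl get node iruya-node -o jsonpath='{.status.allocatable.memory}'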
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:57:54.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 21 13:57:54.367: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:58:11.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6247" for this suite.
Dec 21 13:58:35.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:58:35.478: INFO: namespace init-container-6247 deletion completed in 24.351280976s

• [SLOW TEST:41.198 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
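The pass criterion is that the init containers run to completion, in order, before the main container starts; on a live pod that ordering can be read from status (pod name below is a placeholder):

  # Init containers must report a terminated state with reason Completed
  kubectl -n init-container-6247 get pod <pod-name> -o jsonpath='{.status.initContainerStatuses[*].state.terminated.reason}'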
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:58:35.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1221 13:58:46.779185       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 21 13:58:46.779: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:58:46.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9682" for this suite.
Dec 21 13:59:10.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:59:10.502: INFO: namespace gc-9682 deletion completed in 23.72126105s

• [SLOW TEST:35.024 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
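The invariant being checked is that a pod with two owners survives deletion of one of them; ownership is visible in metadata.ownerReferences, e.g.:

  # List each pod together with the names of its owners
  kubectl -n gc-9682 get pods -o jsonpath='{range .items[*]}{.metadata.name}{" -> "}{.metadata.ownerReferences[*].name}{"\n"}{end}'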
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:59:10.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 21 13:59:10.599: INFO: Pod name rollover-pod: Found 0 pods out of 1
Dec 21 13:59:15.606: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 21 13:59:19.624: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Dec 21 13:59:21.630: INFO: Creating deployment "test-rollover-deployment"
Dec 21 13:59:21.675: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Dec 21 13:59:23.706: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Dec 21 13:59:23.720: INFO: Ensure that both replica sets have 1 created replica
Dec 21 13:59:23.729: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Dec 21 13:59:23.747: INFO: Updating deployment test-rollover-deployment
Dec 21 13:59:23.747: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Dec 21 13:59:25.771: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Dec 21 13:59:25.778: INFO: Make sure deployment "test-rollover-deployment" is complete
Dec 21 13:59:25.789: INFO: all replica sets need to contain the pod-template-hash label
Dec 21 13:59:25.789: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533561, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533561, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533564, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533561, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 13:59:27.808: INFO: all replica sets need to contain the pod-template-hash label
Dec 21 13:59:27.808: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533561, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533561, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533564, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533561, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 13:59:29.954: INFO: all replica sets need to contain the pod-template-hash label
Dec 21 13:59:29.954: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533561, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533561, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533564, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533561, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 13:59:31.810: INFO: all replica sets need to contain the pod-template-hash label
Dec 21 13:59:31.810: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533561, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533561, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533564, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533561, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 13:59:33.812: INFO: all replica sets need to contain the pod-template-hash label
Dec 21 13:59:33.812: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533561, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533561, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533572, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533561, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 13:59:35.802: INFO: all replica sets need to contain the pod-template-hash label
Dec 21 13:59:35.802: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533561, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533561, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533572, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533561, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 13:59:37.802: INFO: all replica sets need to contain the pod-template-hash label
Dec 21 13:59:37.802: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533561, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533561, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533572, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533561, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 13:59:40.324: INFO: all replica sets need to contain the pod-template-hash label
Dec 21 13:59:40.324: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533561, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533561, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533572, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533561, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 13:59:41.820: INFO: all replica sets need to contain the pod-template-hash label
Dec 21 13:59:41.820: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533561, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533561, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533572, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712533561, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 13:59:43.812: INFO: 
Dec 21 13:59:43.812: INFO: Ensure that both old replica sets have no replicas
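The condition just reached (old replica sets scaled to zero, one updated replica available) is what a manual rollover would wait on as well; a sketch using the resources named in this log:

  # Block until the rollover completes
  kubectl -n deployment-7500 rollout status deployment/test-rollover-deployment
  # Old replica sets should show 0 desired/current replicas
  kubectl -n deployment-7500 get rs -l name=rollover-pod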
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 21 13:59:43.825: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-7500,SelfLink:/apis/apps/v1/namespaces/deployment-7500/deployments/test-rollover-deployment,UID:b8c3b0d5-7c2d-4137-a2d0-8b9241b6a742,ResourceVersion:17520436,Generation:2,CreationTimestamp:2019-12-21 13:59:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-21 13:59:21 +0000 UTC 2019-12-21 13:59:21 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-21 13:59:43 +0000 UTC 2019-12-21 13:59:21 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 21 13:59:43.834: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-7500,SelfLink:/apis/apps/v1/namespaces/deployment-7500/replicasets/test-rollover-deployment-854595fc44,UID:473234d6-2bf2-4635-a44e-d9b9c917900e,ResourceVersion:17520424,Generation:2,CreationTimestamp:2019-12-21 13:59:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b8c3b0d5-7c2d-4137-a2d0-8b9241b6a742 0xc002e274a7 0xc002e274a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 21 13:59:43.834: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Dec 21 13:59:43.834: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-7500,SelfLink:/apis/apps/v1/namespaces/deployment-7500/replicasets/test-rollover-controller,UID:2899201f-3426-4995-91fe-145d54e93c98,ResourceVersion:17520435,Generation:2,CreationTimestamp:2019-12-21 13:59:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b8c3b0d5-7c2d-4137-a2d0-8b9241b6a742 0xc002e26f4f 0xc002e27290}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 21 13:59:43.834: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-7500,SelfLink:/apis/apps/v1/namespaces/deployment-7500/replicasets/test-rollover-deployment-9b8b997cf,UID:50a1f801-8335-4ed6-8ecb-097db869052e,ResourceVersion:17520387,Generation:2,CreationTimestamp:2019-12-21 13:59:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b8c3b0d5-7c2d-4137-a2d0-8b9241b6a742 0xc002e27860 0xc002e27861}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 21 13:59:43.837: INFO: Pod "test-rollover-deployment-854595fc44-s4rmf" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-s4rmf,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-7500,SelfLink:/api/v1/namespaces/deployment-7500/pods/test-rollover-deployment-854595fc44-s4rmf,UID:1058f74c-402a-4b4d-861b-94673db507d4,ResourceVersion:17520407,Generation:0,CreationTimestamp:2019-12-21 13:59:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 473234d6-2bf2-4635-a44e-d9b9c917900e 0xc0029112d7 0xc0029112d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6wq8j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6wq8j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-6wq8j true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002911350} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002911370}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:59:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:59:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:59:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:59:23 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-21 13:59:24 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-21 13:59:31 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://2a43928f012fca3addf2ef2559916fa2c038c49771b1ae72886820284c820aba}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 13:59:43.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7500" for this suite.
Dec 21 13:59:49.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:59:49.961: INFO: namespace deployment-7500 deletion completed in 6.121215035s

• [SLOW TEST:39.459 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
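The Deployment dump above is the whole rollover contract in one object: a RollingUpdate strategy with MaxUnavailable:0 and MaxSurge:1 means the old ReplicaSet is scaled down only after a surge pod exists, and MinReadySeconds:10 means that pod must stay Ready for 10 seconds before it counts as available. A minimal Go sketch of an equivalent spec, using the k8s.io/api types this suite is built on (the helper name is illustrative):

    package sketch

    import (
        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // rolloverDeployment mirrors the strategy dumped above: never go below the
    // desired count (MaxUnavailable:0), allow one surge pod (MaxSurge:1), and
    // require 10s of sustained readiness before a pod counts as available.
    func rolloverDeployment(replicas int32) *appsv1.Deployment {
        maxUnavailable := intstr.FromInt(0)
        maxSurge := intstr.FromInt(1)
        labels := map[string]string{"name": "rollover-pod"}
        return &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "test-rollover-deployment"},
            Spec: appsv1.DeploymentSpec{
                Replicas:        &replicas,
                MinReadySeconds: 10,
                Selector:        &metav1.LabelSelector{MatchLabels: labels},
                Strategy: appsv1.DeploymentStrategy{
                    Type: appsv1.RollingUpdateDeploymentStrategyType,
                    RollingUpdate: &appsv1.RollingUpdateDeployment{
                        MaxUnavailable: &maxUnavailable,
                        MaxSurge:       &maxSurge,
                    },
                },
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "redis",
                            Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
                        }},
                    },
                },
            },
        }
    }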
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 13:59:49.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-060c193b-c016-4e26-a32f-c130e1c58838
STEP: Creating a pod to test consume secrets
Dec 21 13:59:50.124: INFO: Waiting up to 5m0s for pod "pod-secrets-2d1d80b3-f518-44ed-aca6-da52b61199d3" in namespace "secrets-4120" to be "success or failure"
Dec 21 13:59:50.136: INFO: Pod "pod-secrets-2d1d80b3-f518-44ed-aca6-da52b61199d3": Phase="Pending", Reason="", readiness=false. Elapsed: 11.425946ms
Dec 21 13:59:52.148: INFO: Pod "pod-secrets-2d1d80b3-f518-44ed-aca6-da52b61199d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023497376s
Dec 21 13:59:54.159: INFO: Pod "pod-secrets-2d1d80b3-f518-44ed-aca6-da52b61199d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034994823s
Dec 21 13:59:56.171: INFO: Pod "pod-secrets-2d1d80b3-f518-44ed-aca6-da52b61199d3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046134213s
Dec 21 13:59:58.178: INFO: Pod "pod-secrets-2d1d80b3-f518-44ed-aca6-da52b61199d3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053223523s
Dec 21 14:00:00.185: INFO: Pod "pod-secrets-2d1d80b3-f518-44ed-aca6-da52b61199d3": Phase="Running", Reason="", readiness=true. Elapsed: 10.061040244s
Dec 21 14:00:02.191: INFO: Pod "pod-secrets-2d1d80b3-f518-44ed-aca6-da52b61199d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.067039171s
STEP: Saw pod success
Dec 21 14:00:02.192: INFO: Pod "pod-secrets-2d1d80b3-f518-44ed-aca6-da52b61199d3" satisfied condition "success or failure"
Dec 21 14:00:02.194: INFO: Trying to get logs from node iruya-node pod pod-secrets-2d1d80b3-f518-44ed-aca6-da52b61199d3 container secret-volume-test: 
STEP: delete the pod
Dec 21 14:00:02.241: INFO: Waiting for pod pod-secrets-2d1d80b3-f518-44ed-aca6-da52b61199d3 to disappear
Dec 21 14:00:02.244: INFO: Pod pod-secrets-2d1d80b3-f518-44ed-aca6-da52b61199d3 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:00:02.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4120" for this suite.
Dec 21 14:00:08.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:00:08.377: INFO: namespace secrets-4120 deletion completed in 6.128965902s

• [SLOW TEST:18.415 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
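This defaultMode variant differs from the earlier mappings-and-item-mode test only in where the file mode is set: DefaultMode applies to every file projected from the secret, while a per-item Mode overrides it. A minimal sketch of the consuming pod, assuming a busybox image and 0400 as an illustrative mode (the suite uses its own mounttest image):

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // secretModePod mounts secretName with every projected file created as
    // 0400; the test then reads the mode back from inside the container.
    func secretModePod(secretName string) *corev1.Pod {
        mode := int32(0400) // DefaultMode applies to all files in the volume
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-secrets-"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{
                            SecretName:  secretName,
                            DefaultMode: &mode,
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "secret-volume-test",
                    Image:   "busybox", // stand-in for the suite's mounttest image
                    Command: []string{"sh", "-c", "stat -c %a /etc/secret-volume/*"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "secret-volume",
                        MountPath: "/etc/secret-volume",
                    }},
                }},
            },
        }
    }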
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:00:08.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Dec 21 14:00:08.520: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9458" to be "success or failure"
Dec 21 14:00:08.546: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 25.816248ms
Dec 21 14:00:10.559: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039695251s
Dec 21 14:00:12.570: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050667277s
Dec 21 14:00:14.646: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125966657s
Dec 21 14:00:16.653: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.133300966s
Dec 21 14:00:18.669: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.149265226s
Dec 21 14:00:20.677: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.157097155s
STEP: Saw pod success
Dec 21 14:00:20.677: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Dec 21 14:00:20.681: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Dec 21 14:00:20.734: INFO: Waiting for pod pod-host-path-test to disappear
Dec 21 14:00:20.738: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:00:20.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-9458" for this suite.
Dec 21 14:00:26.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:00:27.210: INFO: namespace hostpath-9458 deletion completed in 6.466482016s

• [SLOW TEST:18.832 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
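The hostPath check follows the same create-and-observe pattern: mount a directory from the node, then read its mode from inside the container. A sketch under the same assumptions (busybox stands in for the suite's test image, /tmp for the path under test):

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // hostPathModePod mounts a node directory and reports the mode the
    // container sees, which is what "correct mode" means in this test.
    func hostPathModePod() *corev1.Pod {
        hostPathType := corev1.HostPathDirectoryOrCreate
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-test"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        HostPath: &corev1.HostPathVolumeSource{
                            Path: "/tmp",
                            Type: &hostPathType,
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "test-container-1",
                    Image:   "busybox", // stand-in image
                    Command: []string{"sh", "-c", "stat -c %a /test-volume"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "test-volume",
                        MountPath: "/test-volume",
                    }},
                }},
            },
        }
    }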
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:00:27.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Dec 21 14:00:27.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6252'
Dec 21 14:00:27.594: INFO: stderr: ""
Dec 21 14:00:27.594: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 21 14:00:28.602: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 14:00:28.602: INFO: Found 0 / 1
Dec 21 14:00:29.608: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 14:00:29.608: INFO: Found 0 / 1
Dec 21 14:00:30.609: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 14:00:30.609: INFO: Found 0 / 1
Dec 21 14:00:31.641: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 14:00:31.641: INFO: Found 0 / 1
Dec 21 14:00:32.608: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 14:00:32.608: INFO: Found 0 / 1
Dec 21 14:00:33.606: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 14:00:33.606: INFO: Found 0 / 1
Dec 21 14:00:34.613: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 14:00:34.613: INFO: Found 0 / 1
Dec 21 14:00:35.601: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 14:00:35.602: INFO: Found 0 / 1
Dec 21 14:00:36.609: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 14:00:36.609: INFO: Found 1 / 1
Dec 21 14:00:36.609: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Dec 21 14:00:36.617: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 14:00:36.617: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 21 14:00:36.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-x684l --namespace=kubectl-6252 -p {"metadata":{"annotations":{"x":"y"}}}'
Dec 21 14:00:36.815: INFO: stderr: ""
Dec 21 14:00:36.815: INFO: stdout: "pod/redis-master-x684l patched\n"
STEP: checking annotations
Dec 21 14:00:36.836: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 14:00:36.836: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:00:36.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6252" for this suite.
Dec 21 14:00:59.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:00:59.106: INFO: namespace kubectl-6252 deletion completed in 22.26612894s

• [SLOW TEST:31.896 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
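The patch step is plain kubectl, verbatim in the log above. The same strategic-merge patch through client-go, with the v1.15-era method signature this cluster matches (later releases add a context.Context and PatchOptions):

    package sketch

    import (
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    // annotatePod is the client-go equivalent of the logged command:
    //   kubectl patch pod NAME -p '{"metadata":{"annotations":{"x":"y"}}}'
    func annotatePod(cs kubernetes.Interface, ns, name string) error {
        patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
        // v1.15-era signature; later client-go adds ctx and PatchOptions.
        _, err := cs.CoreV1().Pods(ns).Patch(name, types.StrategicMergePatchType, patch)
        return err
    }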
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:00:59.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 21 14:00:59.201: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 21 14:00:59.245: INFO: Waiting for terminating namespaces to be deleted...
Dec 21 14:00:59.247: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Dec 21 14:00:59.258: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 21 14:00:59.258: INFO: 	Container weave ready: true, restart count 0
Dec 21 14:00:59.258: INFO: 	Container weave-npc ready: true, restart count 0
Dec 21 14:00:59.258: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Dec 21 14:00:59.258: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 21 14:00:59.258: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Dec 21 14:00:59.267: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Dec 21 14:00:59.267: INFO: 	Container kube-scheduler ready: true, restart count 7
Dec 21 14:00:59.267: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Dec 21 14:00:59.267: INFO: 	Container coredns ready: true, restart count 0
Dec 21 14:00:59.267: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Dec 21 14:00:59.267: INFO: 	Container etcd ready: true, restart count 0
Dec 21 14:00:59.267: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 21 14:00:59.267: INFO: 	Container weave ready: true, restart count 0
Dec 21 14:00:59.267: INFO: 	Container weave-npc ready: true, restart count 0
Dec 21 14:00:59.267: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Dec 21 14:00:59.267: INFO: 	Container coredns ready: true, restart count 0
Dec 21 14:00:59.267: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Dec 21 14:00:59.267: INFO: 	Container kube-controller-manager ready: true, restart count 10
Dec 21 14:00:59.267: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Dec 21 14:00:59.267: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 21 14:00:59.267: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Dec 21 14:00:59.267: INFO: 	Container kube-apiserver ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-9be6ddc0-1bad-48ac-9ef5-1524a32dc61d 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-9be6ddc0-1bad-48ac-9ef5-1524a32dc61d off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-9be6ddc0-1bad-48ac-9ef5-1524a32dc61d
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:01:15.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4814" for this suite.
Dec 21 14:01:45.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:01:45.936: INFO: namespace sched-pred-4814 deletion completed in 30.236197716s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:46.830 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
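The predicate test's trick is worth spelling out: it first schedules an unlabeled pod to discover a schedulable node, labels that node with a random key, then relaunches the pod with a NodeSelector that can only match there. A sketch of the label-and-pin half, assuming v1.15-era client-go signatures and an illustrative pause image:

    package sketch

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    // labelNodeAndPin labels the discovered node, then creates a pod whose
    // NodeSelector can only be satisfied on that node.
    func labelNodeAndPin(cs kubernetes.Interface, ns, node, key, value string) (*corev1.Pod, error) {
        patch := []byte(fmt.Sprintf(`{"metadata":{"labels":{%q:%q}}}`, key, value))
        if _, err := cs.CoreV1().Nodes().Patch(node, types.StrategicMergePatchType, patch); err != nil {
            return nil, err
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "with-labels-"},
            Spec: corev1.PodSpec{
                NodeSelector: map[string]string{key: value}, // the predicate under test
                Containers: []corev1.Container{{
                    Name:  "pause",
                    Image: "k8s.gcr.io/pause:3.1", // illustrative placeholder image
                }},
            },
        }
        return cs.CoreV1().Pods(ns).Create(pod) // v1.15-era signature
    }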
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:01:45.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Dec 21 14:01:54.074: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-7199ca4f-5a04-4955-aff4-ff80b529179d,GenerateName:,Namespace:events-1626,SelfLink:/api/v1/namespaces/events-1626/pods/send-events-7199ca4f-5a04-4955-aff4-ff80b529179d,UID:06d096ed-2021-4a45-9c0b-d72cd6e72730,ResourceVersion:17520780,Generation:0,CreationTimestamp:2019-12-21 14:01:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 999711830,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cznrn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cznrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-cznrn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002a11a50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002a11a70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:01:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:01:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:01:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:01:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-21 14:01:46 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-12-21 14:01:51 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://2820ac63ca58fe0accbb01822e321e0a849bf776c1dc88214548d99e318018f5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Dec 21 14:01:56.083: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Dec 21 14:01:58.089: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:01:58.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1626" for this suite.
Dec 21 14:02:38.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:02:38.407: INFO: namespace events-1626 deletion completed in 40.237491812s

• [SLOW TEST:52.471 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
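"Saw scheduler event" and "saw kubelet event" are field-selector queries against the Events API, filtered by the involved object and the reporting source. A minimal sketch of the scheduler-side query (v1.15-era List signature); the kubelet-side check swaps the source for "kubelet":

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/fields"
        "k8s.io/client-go/kubernetes"
    )

    // schedulerEventsFor lists events emitted by the default scheduler about
    // one pod; a non-empty result is what the test logs as "Saw scheduler
    // event for our pod."
    func schedulerEventsFor(cs kubernetes.Interface, ns, podName string) (*corev1.EventList, error) {
        sel := fields.Set{
            "involvedObject.kind":      "Pod",
            "involvedObject.name":      podName,
            "involvedObject.namespace": ns,
            "source":                   "default-scheduler",
        }.AsSelector().String()
        return cs.CoreV1().Events(ns).List(metav1.ListOptions{FieldSelector: sel}) // v1.15-era signature
    }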
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:02:38.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-9c5e65e2-5f27-4d7b-8716-2a248a3216dc in namespace container-probe-3045
Dec 21 14:02:46.590: INFO: Started pod liveness-9c5e65e2-5f27-4d7b-8716-2a248a3216dc in namespace container-probe-3045
STEP: checking the pod's current state and verifying that restartCount is present
Dec 21 14:02:46.595: INFO: Initial restart count of pod liveness-9c5e65e2-5f27-4d7b-8716-2a248a3216dc is 0
Dec 21 14:03:02.778: INFO: Restart count of pod container-probe-3045/liveness-9c5e65e2-5f27-4d7b-8716-2a248a3216dc is now 1 (16.183752704s elapsed)
Dec 21 14:03:22.877: INFO: Restart count of pod container-probe-3045/liveness-9c5e65e2-5f27-4d7b-8716-2a248a3216dc is now 2 (36.282737568s elapsed)
Dec 21 14:03:43.759: INFO: Restart count of pod container-probe-3045/liveness-9c5e65e2-5f27-4d7b-8716-2a248a3216dc is now 3 (57.164669549s elapsed)
Dec 21 14:04:03.854: INFO: Restart count of pod container-probe-3045/liveness-9c5e65e2-5f27-4d7b-8716-2a248a3216dc is now 4 (1m17.259561896s elapsed)
Dec 21 14:05:04.176: INFO: Restart count of pod container-probe-3045/liveness-9c5e65e2-5f27-4d7b-8716-2a248a3216dc is now 5 (2m17.581167561s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:05:04.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3045" for this suite.
Dec 21 14:05:10.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:05:10.416: INFO: namespace container-probe-3045 deletion completed in 6.215303905s

• [SLOW TEST:152.008 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
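The monotonically increasing restart count comes from the kubelet re-running a container whose liveness probe keeps failing; each failure past FailureThreshold bumps restartCount, with CrashLoopBackOff stretching the later gaps (visible above: roughly 20s between the first restarts, then a minute before the fifth). A sketch of a pod that produces this behavior, with busybox as a stand-in image; note the v1.15 API still embeds the probe action as Handler (renamed ProbeHandler in later releases):

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // crashLoopLivenessPod never satisfies its liveness probe, so the kubelet
    // kills and restarts the container, incrementing restartCount each time.
    func crashLoopLivenessPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "liveness-"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "liveness",
                    Image:   "busybox", // stand-in image
                    Command: []string{"sh", "-c", "sleep 3600"},
                    LivenessProbe: &corev1.Probe{
                        Handler: corev1.Handler{ // v1.15 field name
                            Exec: &corev1.ExecAction{
                                Command: []string{"cat", "/tmp/health"}, // file never exists
                            },
                        },
                        InitialDelaySeconds: 5,
                        PeriodSeconds:       5,
                        FailureThreshold:    1,
                    },
                }},
            },
        }
    }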
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:05:10.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 21 14:05:10.625: INFO: Waiting up to 5m0s for pod "pod-a5e15aee-ef67-4a65-9cf2-74ac9f5f2050" in namespace "emptydir-2721" to be "success or failure"
Dec 21 14:05:10.633: INFO: Pod "pod-a5e15aee-ef67-4a65-9cf2-74ac9f5f2050": Phase="Pending", Reason="", readiness=false. Elapsed: 8.322368ms
Dec 21 14:05:12.643: INFO: Pod "pod-a5e15aee-ef67-4a65-9cf2-74ac9f5f2050": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017672022s
Dec 21 14:05:14.660: INFO: Pod "pod-a5e15aee-ef67-4a65-9cf2-74ac9f5f2050": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034822709s
Dec 21 14:05:16.695: INFO: Pod "pod-a5e15aee-ef67-4a65-9cf2-74ac9f5f2050": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069679328s
Dec 21 14:05:18.707: INFO: Pod "pod-a5e15aee-ef67-4a65-9cf2-74ac9f5f2050": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.082104208s
STEP: Saw pod success
Dec 21 14:05:18.707: INFO: Pod "pod-a5e15aee-ef67-4a65-9cf2-74ac9f5f2050" satisfied condition "success or failure"
Dec 21 14:05:18.712: INFO: Trying to get logs from node iruya-node pod pod-a5e15aee-ef67-4a65-9cf2-74ac9f5f2050 container test-container: 
STEP: delete the pod
Dec 21 14:05:18.787: INFO: Waiting for pod pod-a5e15aee-ef67-4a65-9cf2-74ac9f5f2050 to disappear
Dec 21 14:05:18.792: INFO: Pod pod-a5e15aee-ef67-4a65-9cf2-74ac9f5f2050 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:05:18.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2721" for this suite.
Dec 21 14:05:24.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:05:25.030: INFO: namespace emptydir-2721 deletion completed in 6.232771907s

• [SLOW TEST:14.614 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
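The (non-root,0644,tmpfs) triple encodes the whole test: run as a non-root UID, expect 0644 files, back the emptyDir with memory rather than node disk. A sketch, with the UID and the busybox image as illustrative stand-ins for the suite's fixtures:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // emptyDirTmpfsPod writes a file as a non-root user into a memory-backed
    // emptyDir and reports its mode (0644 under a 0022 umask).
    func emptyDirTmpfsPod() *corev1.Pod {
        uid := int64(1001) // any non-root UID works for the check
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-"},
            Spec: corev1.PodSpec{
                RestartPolicy:   corev1.RestartPolicyNever,
                SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        EmptyDir: &corev1.EmptyDirVolumeSource{
                            Medium: corev1.StorageMediumMemory, // tmpfs, not node disk
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:  "test-container",
                    Image: "busybox", // stand-in for the suite's mounttest image
                    Command: []string{"sh", "-c",
                        "umask 0022 && touch /test-volume/f && stat -c %a /test-volume/f"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "test-volume",
                        MountPath: "/test-volume",
                    }},
                }},
            },
        }
    }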
SSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:05:25.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Dec 21 14:05:25.167: INFO: Waiting up to 5m0s for pod "client-containers-70042e2c-9c81-42cd-9821-ec29aacb1b38" in namespace "containers-8053" to be "success or failure"
Dec 21 14:05:25.205: INFO: Pod "client-containers-70042e2c-9c81-42cd-9821-ec29aacb1b38": Phase="Pending", Reason="", readiness=false. Elapsed: 38.58122ms
Dec 21 14:05:27.215: INFO: Pod "client-containers-70042e2c-9c81-42cd-9821-ec29aacb1b38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048332807s
Dec 21 14:05:29.226: INFO: Pod "client-containers-70042e2c-9c81-42cd-9821-ec29aacb1b38": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059004403s
Dec 21 14:05:31.233: INFO: Pod "client-containers-70042e2c-9c81-42cd-9821-ec29aacb1b38": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066522386s
Dec 21 14:05:33.241: INFO: Pod "client-containers-70042e2c-9c81-42cd-9821-ec29aacb1b38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074235217s
STEP: Saw pod success
Dec 21 14:05:33.241: INFO: Pod "client-containers-70042e2c-9c81-42cd-9821-ec29aacb1b38" satisfied condition "success or failure"
Dec 21 14:05:33.245: INFO: Trying to get logs from node iruya-node pod client-containers-70042e2c-9c81-42cd-9821-ec29aacb1b38 container test-container: 
STEP: delete the pod
Dec 21 14:05:33.301: INFO: Waiting for pod client-containers-70042e2c-9c81-42cd-9821-ec29aacb1b38 to disappear
Dec 21 14:05:33.347: INFO: Pod client-containers-70042e2c-9c81-42cd-9821-ec29aacb1b38 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:05:33.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8053" for this suite.
Dec 21 14:05:39.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:05:39.571: INFO: namespace containers-8053 deletion completed in 6.21851047s

• [SLOW TEST:14.540 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
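With both command and args left blank, the container runtime falls back to the image's own ENTRYPOINT and CMD; the test only asserts that those defaults ran. On the spec side this is nothing more than omitting the two fields (the image is parameterized here since the exact test image is an implementation detail):

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // imageDefaultsPod sets neither Command nor Args, so the runtime uses the
    // image's own ENTRYPOINT and CMD, which is what this test asserts.
    func imageDefaultsPod(image string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "client-containers-"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "test-container",
                    Image: image, // no Command, no Args: image defaults apply
                }},
            },
        }
    }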
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:05:39.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-6611
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 21 14:05:39.717: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 21 14:06:17.924: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-6611 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 21 14:06:17.924: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 14:06:18.397: INFO: Waiting for endpoints: map[]
Dec 21 14:06:18.405: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-6611 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 21 14:06:18.405: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 14:06:18.971: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:06:18.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6611" for this suite.
Dec 21 14:06:37.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:06:37.461: INFO: namespace pod-network-test-6611 deletion completed in 18.23885693s

• [SLOW TEST:57.890 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
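The two ExecWithOptions lines show the mechanism: a helper pod curls the test webserver's /dial endpoint, which in turn dials the target pod over HTTP and reports the hostname it reached; an empty "Waiting for endpoints: map[]" means every expected peer answered. The same probe as a plain HTTP call, with the URL shape taken directly from the log above:

    package sketch

    import (
        "fmt"
        "io/ioutil"
        "net/http"
    )

    // dialCheck reproduces the probe the test runs via kubectl exec + curl:
    // ask the helper webserver at proberIP to dial targetIP:port over HTTP
    // once and report the hostname that answered.
    func dialCheck(proberIP, targetIP string, port int) (string, error) {
        url := fmt.Sprintf(
            "http://%s:8080/dial?request=hostName&protocol=http&host=%s&port=%d&tries=1",
            proberIP, targetIP, port)
        resp, err := http.Get(url)
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        body, err := ioutil.ReadAll(resp.Body)
        return string(body), err
    }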
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:06:37.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-b7qw
STEP: Creating a pod to test atomic-volume-subpath
Dec 21 14:06:37.660: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-b7qw" in namespace "subpath-315" to be "success or failure"
Dec 21 14:06:37.694: INFO: Pod "pod-subpath-test-downwardapi-b7qw": Phase="Pending", Reason="", readiness=false. Elapsed: 34.543018ms
Dec 21 14:06:39.701: INFO: Pod "pod-subpath-test-downwardapi-b7qw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041202821s
Dec 21 14:06:41.724: INFO: Pod "pod-subpath-test-downwardapi-b7qw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063729747s
Dec 21 14:06:43.748: INFO: Pod "pod-subpath-test-downwardapi-b7qw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08771054s
Dec 21 14:06:45.757: INFO: Pod "pod-subpath-test-downwardapi-b7qw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.097469201s
Dec 21 14:06:47.769: INFO: Pod "pod-subpath-test-downwardapi-b7qw": Phase="Running", Reason="", readiness=true. Elapsed: 10.109016474s
Dec 21 14:06:49.779: INFO: Pod "pod-subpath-test-downwardapi-b7qw": Phase="Running", Reason="", readiness=true. Elapsed: 12.119315969s
Dec 21 14:06:51.788: INFO: Pod "pod-subpath-test-downwardapi-b7qw": Phase="Running", Reason="", readiness=true. Elapsed: 14.127881491s
Dec 21 14:06:53.794: INFO: Pod "pod-subpath-test-downwardapi-b7qw": Phase="Running", Reason="", readiness=true. Elapsed: 16.134251791s
Dec 21 14:06:55.805: INFO: Pod "pod-subpath-test-downwardapi-b7qw": Phase="Running", Reason="", readiness=true. Elapsed: 18.145460632s
Dec 21 14:06:57.813: INFO: Pod "pod-subpath-test-downwardapi-b7qw": Phase="Running", Reason="", readiness=true. Elapsed: 20.15355007s
Dec 21 14:07:00.568: INFO: Pod "pod-subpath-test-downwardapi-b7qw": Phase="Running", Reason="", readiness=true. Elapsed: 22.907981994s
Dec 21 14:07:02.597: INFO: Pod "pod-subpath-test-downwardapi-b7qw": Phase="Running", Reason="", readiness=true. Elapsed: 24.937116689s
Dec 21 14:07:04.603: INFO: Pod "pod-subpath-test-downwardapi-b7qw": Phase="Running", Reason="", readiness=true. Elapsed: 26.943376797s
Dec 21 14:07:06.633: INFO: Pod "pod-subpath-test-downwardapi-b7qw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.973287004s
STEP: Saw pod success
Dec 21 14:07:06.633: INFO: Pod "pod-subpath-test-downwardapi-b7qw" satisfied condition "success or failure"
Dec 21 14:07:06.644: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-b7qw container test-container-subpath-downwardapi-b7qw: 
STEP: delete the pod
Dec 21 14:07:06.869: INFO: Waiting for pod pod-subpath-test-downwardapi-b7qw to disappear
Dec 21 14:07:06.921: INFO: Pod pod-subpath-test-downwardapi-b7qw no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-b7qw
Dec 21 14:07:06.921: INFO: Deleting pod "pod-subpath-test-downwardapi-b7qw" in namespace "subpath-315"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:07:06.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-315" for this suite.
Dec 21 14:07:12.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:07:13.081: INFO: namespace subpath-315 deletion completed in 6.132300236s

• [SLOW TEST:35.619 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
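The subpath test projects pod metadata into a downwardAPI volume, then mounts a single projected file (not the directory) via SubPath and verifies the content survives the volume's atomic-writer updates. A sketch of the pod shape, with busybox and the projected field as illustrative stand-ins:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // downwardSubpathPod projects the pod name into a downwardAPI volume and
    // mounts just that one file via SubPath rather than the whole directory.
    func downwardSubpathPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-subpath-test-downwardapi-"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "downward",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "podname",
                                FieldRef: &corev1.ObjectFieldSelector{
                                    FieldPath: "metadata.name",
                                },
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "test-container-subpath",
                    Image:   "busybox", // stand-in image
                    Command: []string{"sh", "-c", "cat /mnt/podname"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "downward",
                        MountPath: "/mnt/podname",
                        SubPath:   "podname", // one projected file, not the dir
                    }},
                }},
            },
        }
    }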
SSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:07:13.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-2915
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2915 to expose endpoints map[]
Dec 21 14:07:13.259: INFO: successfully validated that service endpoint-test2 in namespace services-2915 exposes endpoints map[] (20.314177ms elapsed)
STEP: Creating pod pod1 in namespace services-2915
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2915 to expose endpoints map[pod1:[80]]
Dec 21 14:07:17.467: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.12386518s elapsed, will retry)
Dec 21 14:07:20.628: INFO: successfully validated that service endpoint-test2 in namespace services-2915 exposes endpoints map[pod1:[80]] (7.285011057s elapsed)
STEP: Creating pod pod2 in namespace services-2915
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2915 to expose endpoints map[pod1:[80] pod2:[80]]
Dec 21 14:07:25.794: INFO: Unexpected endpoints: found map[b50275ef-102d-42d4-b51b-164a085e2923:[80]], expected map[pod1:[80] pod2:[80]] (5.145519835s elapsed, will retry)
Dec 21 14:07:29.224: INFO: successfully validated that service endpoint-test2 in namespace services-2915 exposes endpoints map[pod1:[80] pod2:[80]] (8.574752283s elapsed)
STEP: Deleting pod pod1 in namespace services-2915
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2915 to expose endpoints map[pod2:[80]]
Dec 21 14:07:29.274: INFO: successfully validated that service endpoint-test2 in namespace services-2915 exposes endpoints map[pod2:[80]] (44.341567ms elapsed)
STEP: Deleting pod pod2 in namespace services-2915
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2915 to expose endpoints map[]
Dec 21 14:07:30.354: INFO: successfully validated that service endpoint-test2 in namespace services-2915 exposes endpoints map[] (1.02073878s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:07:30.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2915" for this suite.
Dec 21 14:07:52.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:07:52.677: INFO: namespace services-2915 deletion completed in 22.234918548s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:39.596 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
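The endpoint map the test keeps re-validating (map[], then map[pod1:[80]], then map[pod1:[80] pod2:[80]], and back down) is the service's Endpoints object tracking which ready pods match the selector. A sketch of the service half, with the label key chosen for illustration; pods carrying the matching label are added as they become ready and removed when deleted:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // endpointTestService selects pods by label; the Endpoints object named
    // after the service gains and loses addresses as matching pods come and
    // go, which is the map[...] the log above keeps validating.
    func endpointTestService() *corev1.Service {
        return &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "endpoint-test2"},
            Spec: corev1.ServiceSpec{
                Selector: map[string]string{"name": "endpoint-test2"}, // illustrative label
                Ports: []corev1.ServicePort{{
                    Port:       80,
                    TargetPort: intstr.FromInt(80),
                }},
            },
        }
    }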
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:07:52.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 21 14:07:52.791: INFO: Waiting up to 5m0s for pod "downwardapi-volume-33577c89-d3f0-4b43-84bb-0114de77ef1a" in namespace "projected-1581" to be "success or failure"
Dec 21 14:07:52.812: INFO: Pod "downwardapi-volume-33577c89-d3f0-4b43-84bb-0114de77ef1a": Phase="Pending", Reason="", readiness=false. Elapsed: 21.562038ms
Dec 21 14:07:54.826: INFO: Pod "downwardapi-volume-33577c89-d3f0-4b43-84bb-0114de77ef1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035111257s
Dec 21 14:07:56.866: INFO: Pod "downwardapi-volume-33577c89-d3f0-4b43-84bb-0114de77ef1a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075552581s
Dec 21 14:07:58.875: INFO: Pod "downwardapi-volume-33577c89-d3f0-4b43-84bb-0114de77ef1a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084104086s
Dec 21 14:08:00.893: INFO: Pod "downwardapi-volume-33577c89-d3f0-4b43-84bb-0114de77ef1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.101965151s
STEP: Saw pod success
Dec 21 14:08:00.893: INFO: Pod "downwardapi-volume-33577c89-d3f0-4b43-84bb-0114de77ef1a" satisfied condition "success or failure"
Dec 21 14:08:00.900: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-33577c89-d3f0-4b43-84bb-0114de77ef1a container client-container: 
STEP: delete the pod
Dec 21 14:08:01.007: INFO: Waiting for pod downwardapi-volume-33577c89-d3f0-4b43-84bb-0114de77ef1a to disappear
Dec 21 14:08:01.012: INFO: Pod downwardapi-volume-33577c89-d3f0-4b43-84bb-0114de77ef1a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:08:01.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1581" for this suite.
Dec 21 14:08:07.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:08:07.117: INFO: namespace projected-1581 deletion completed in 6.099370006s

• [SLOW TEST:14.440 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:08:07.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Dec 21 14:08:16.350: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:08:17.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-3698" for this suite.
Dec 21 14:08:39.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:08:39.625: INFO: namespace replicaset-3698 deletion completed in 22.174582745s

• [SLOW TEST:32.508 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
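Adoption and release are both driven by label/selector matching: a bare pod whose labels match the ReplicaSet's selector gets an OwnerReference from the controller, and mutating the label makes the controller drop that reference and create a replacement to restore the replica count. The release step is a one-line patch (v1.15-era signature; the replacement label value is illustrative):

    package sketch

    import (
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    // releasePod changes the matched label so the ReplicaSet's selector no
    // longer applies; the controller then removes its OwnerReference
    // ("release") and spins up a replacement pod.
    func releasePod(cs kubernetes.Interface, ns, podName string) error {
        patch := []byte(`{"metadata":{"labels":{"name":"not-pod-adoption-release"}}}`)
        _, err := cs.CoreV1().Pods(ns).Patch(podName, types.StrategicMergePatchType, patch)
        return err
    }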
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:08:39.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 21 14:08:39.796: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Dec 21 14:08:43.914: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:08:45.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4172" for this suite.
Dec 21 14:08:56.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:08:56.932: INFO: namespace replication-controller-4172 deletion completed in 11.477671303s

• [SLOW TEST:17.307 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
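The quota scenario above reduces to a ResourceQuota plus an over-sized ReplicationController. A sketch under illustrative names, mirroring the log's "condition-test" objects:

kubectl create quota condition-test --hard=pods=2
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3                        # asks for more pods than the quota allows
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: app
        image: nginx:1.17
EOF
# While the quota blocks the third pod, a ReplicaFailure condition is surfaced
kubectl get rc condition-test -o jsonpath='{.status.conditions}'
# Scaling down to fit the quota clears the failure condition
kubectl scale rc condition-test --replicas=2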
SS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:08:56.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:09:44.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9676" for this suite.
Dec 21 14:09:50.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:09:50.785: INFO: namespace namespaces-9676 deletion completed in 6.195281274s
STEP: Destroying namespace "nsdeletetest-6364" for this suite.
Dec 21 14:09:50.789: INFO: Namespace nsdeletetest-6364 was already deleted
STEP: Destroying namespace "nsdeletetest-2609" for this suite.
Dec 21 14:09:56.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:09:56.942: INFO: namespace nsdeletetest-2609 deletion completed in 6.152842801s

• [SLOW TEST:60.009 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
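What this namespace test verifies is ordinary cascading deletion: removing a namespace removes the pods in it, and a recreated namespace of the same name starts empty. By hand (namespace and pod names are illustrative):

kubectl create namespace nsdelete-demo
kubectl run sleeper --image=busybox:1.31 --restart=Never \
  --namespace=nsdelete-demo -- sleep 3600
kubectl delete namespace nsdelete-demo     # cascades: the pod is removed with it
kubectl create namespace nsdelete-demo
kubectl get pods --namespace=nsdelete-demo # the recreated namespace has no pods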
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:09:56.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 21 14:10:27.112: INFO: Container started at 2019-12-21 14:10:03 +0000 UTC, pod became ready at 2019-12-21 14:10:25 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:10:27.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5914" for this suite.
Dec 21 14:10:49.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:10:49.275: INFO: namespace container-probe-5914 deletion completed in 22.158315491s

• [SLOW TEST:52.333 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
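The readiness-probe behavior checked above (container starts immediately, pod only becomes Ready after the initial delay, and is never restarted) can be observed with a minimal pod; the name, image, and delay below are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo               # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.17
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 30
      periodSeconds: 5
EOF
# READY stays 0/1 until the initial delay elapses; RESTARTS stays 0 throughout,
# because readiness probes gate traffic but never restart containers
kubectl get pod readiness-demo -w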
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:10:49.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 21 14:10:49.421: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51f36654-b384-4d6c-bd06-5678a0d0baa9" in namespace "projected-7777" to be "success or failure"
Dec 21 14:10:49.603: INFO: Pod "downwardapi-volume-51f36654-b384-4d6c-bd06-5678a0d0baa9": Phase="Pending", Reason="", readiness=false. Elapsed: 181.019875ms
Dec 21 14:10:51.610: INFO: Pod "downwardapi-volume-51f36654-b384-4d6c-bd06-5678a0d0baa9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.188369642s
Dec 21 14:10:53.624: INFO: Pod "downwardapi-volume-51f36654-b384-4d6c-bd06-5678a0d0baa9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.201992308s
Dec 21 14:10:55.638: INFO: Pod "downwardapi-volume-51f36654-b384-4d6c-bd06-5678a0d0baa9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.216649029s
Dec 21 14:10:57.645: INFO: Pod "downwardapi-volume-51f36654-b384-4d6c-bd06-5678a0d0baa9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.223753641s
Dec 21 14:10:59.654: INFO: Pod "downwardapi-volume-51f36654-b384-4d6c-bd06-5678a0d0baa9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.232143926s
STEP: Saw pod success
Dec 21 14:10:59.654: INFO: Pod "downwardapi-volume-51f36654-b384-4d6c-bd06-5678a0d0baa9" satisfied condition "success or failure"
Dec 21 14:10:59.658: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-51f36654-b384-4d6c-bd06-5678a0d0baa9 container client-container: 
STEP: delete the pod
Dec 21 14:10:59.745: INFO: Waiting for pod downwardapi-volume-51f36654-b384-4d6c-bd06-5678a0d0baa9 to disappear
Dec 21 14:10:59.751: INFO: Pod downwardapi-volume-51f36654-b384-4d6c-bd06-5678a0d0baa9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:10:59.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7777" for this suite.
Dec 21 14:11:05.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:11:05.919: INFO: namespace projected-7777 deletion completed in 6.162459328s

• [SLOW TEST:16.644 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
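Same downward API volume plugin as before, but here the mode comes from the volume-level defaultMode rather than a per-item mode. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-demo # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.31
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400              # applied to every file without a per-item mode
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF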
SSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:11:05.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-192
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-192
STEP: Creating statefulset with conflicting port in namespace statefulset-192
STEP: Waiting until pod test-pod starts running in namespace statefulset-192
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-192

Dec 21 14:11:16.082: INFO: Observed stateful pod in namespace: statefulset-192, name: ss-0, uid: 1a6c0b43-ebf2-4be4-a2a6-814cb78a3cd9, status phase: Pending. Waiting for statefulset controller to delete.
Dec 21 14:11:16.495: INFO: Observed stateful pod in namespace: statefulset-192, name: ss-0, uid: 1a6c0b43-ebf2-4be4-a2a6-814cb78a3cd9, status phase: Failed. Waiting for statefulset controller to delete.
Dec 21 14:11:16.621: INFO: Observed stateful pod in namespace: statefulset-192, name: ss-0, uid: 1a6c0b43-ebf2-4be4-a2a6-814cb78a3cd9, status phase: Failed. Waiting for statefulset controller to delete.
Dec 21 14:11:16.640: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-192
STEP: Removing pod with conflicting port in namespace statefulset-192
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-192 and enters the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 21 14:11:28.980: INFO: Deleting all statefulset in ns statefulset-192
Dec 21 14:11:28.984: INFO: Scaling statefulset ss to 0
Dec 21 14:11:39.034: INFO: Waiting for statefulset status.replicas updated to 0
Dec 21 14:11:39.040: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:11:39.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-192" for this suite.
Dec 21 14:11:45.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:11:45.195: INFO: namespace statefulset-192 deletion completed in 6.132939522s

• [SLOW TEST:39.275 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
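The e2e test forces ss-0 to fail by pre-placing a pod with a conflicting hostPort; the controller behavior it then verifies, recreating the stateful pod under the same identity, can be watched with a simpler trigger (names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: test                         # headless governing service
spec:
  clusterIP: None
  selector:
    app: ss-demo
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels:
      app: ss-demo
  template:
    metadata:
      labels:
        app: ss-demo
    spec:
      containers:
      - name: app
        image: nginx:1.17
EOF
# Kill ss-0; the controller recreates a pod with the exact same name
kubectl delete pod ss-0
kubectl get pod ss-0 -w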
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:11:45.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 21 14:11:45.344: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2e4165c4-8325-48bb-919a-76e1436172f7" in namespace "projected-9274" to be "success or failure"
Dec 21 14:11:45.359: INFO: Pod "downwardapi-volume-2e4165c4-8325-48bb-919a-76e1436172f7": Phase="Pending", Reason="", readiness=false. Elapsed: 15.056754ms
Dec 21 14:11:47.366: INFO: Pod "downwardapi-volume-2e4165c4-8325-48bb-919a-76e1436172f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022151402s
Dec 21 14:11:49.386: INFO: Pod "downwardapi-volume-2e4165c4-8325-48bb-919a-76e1436172f7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042397501s
Dec 21 14:11:51.395: INFO: Pod "downwardapi-volume-2e4165c4-8325-48bb-919a-76e1436172f7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050672278s
Dec 21 14:11:53.408: INFO: Pod "downwardapi-volume-2e4165c4-8325-48bb-919a-76e1436172f7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064080147s
Dec 21 14:11:55.417: INFO: Pod "downwardapi-volume-2e4165c4-8325-48bb-919a-76e1436172f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072722528s
STEP: Saw pod success
Dec 21 14:11:55.417: INFO: Pod "downwardapi-volume-2e4165c4-8325-48bb-919a-76e1436172f7" satisfied condition "success or failure"
Dec 21 14:11:55.424: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-2e4165c4-8325-48bb-919a-76e1436172f7 container client-container: 
STEP: delete the pod
Dec 21 14:11:55.632: INFO: Waiting for pod downwardapi-volume-2e4165c4-8325-48bb-919a-76e1436172f7 to disappear
Dec 21 14:11:55.663: INFO: Pod downwardapi-volume-2e4165c4-8325-48bb-919a-76e1436172f7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:11:55.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9274" for this suite.
Dec 21 14:12:01.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:12:01.855: INFO: namespace projected-9274 deletion completed in 6.185750662s

• [SLOW TEST:16.660 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
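Here the downward API item uses a resourceFieldRef instead of a fieldRef, exposing the container's memory request as a file. A sketch (names, image, and the 32Mi request are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mem-demo         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.31
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
              divisor: 1Mi           # 32Mi / 1Mi -> the file contains "32"
EOF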
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:12:01.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-f739ad34-f142-4fbd-92a4-601cafe81e58
STEP: Creating a pod to test consume configMaps
Dec 21 14:12:02.036: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6da3581d-ec7f-4c8f-af87-dd46ab640ebc" in namespace "projected-4406" to be "success or failure"
Dec 21 14:12:02.050: INFO: Pod "pod-projected-configmaps-6da3581d-ec7f-4c8f-af87-dd46ab640ebc": Phase="Pending", Reason="", readiness=false. Elapsed: 13.955006ms
Dec 21 14:12:04.061: INFO: Pod "pod-projected-configmaps-6da3581d-ec7f-4c8f-af87-dd46ab640ebc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024794685s
Dec 21 14:12:06.067: INFO: Pod "pod-projected-configmaps-6da3581d-ec7f-4c8f-af87-dd46ab640ebc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031090955s
Dec 21 14:12:08.089: INFO: Pod "pod-projected-configmaps-6da3581d-ec7f-4c8f-af87-dd46ab640ebc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053519967s
Dec 21 14:12:10.100: INFO: Pod "pod-projected-configmaps-6da3581d-ec7f-4c8f-af87-dd46ab640ebc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.064498278s
STEP: Saw pod success
Dec 21 14:12:10.100: INFO: Pod "pod-projected-configmaps-6da3581d-ec7f-4c8f-af87-dd46ab640ebc" satisfied condition "success or failure"
Dec 21 14:12:10.104: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-6da3581d-ec7f-4c8f-af87-dd46ab640ebc container projected-configmap-volume-test: 
STEP: delete the pod
Dec 21 14:12:10.246: INFO: Waiting for pod pod-projected-configmaps-6da3581d-ec7f-4c8f-af87-dd46ab640ebc to disappear
Dec 21 14:12:10.262: INFO: Pod pod-projected-configmaps-6da3581d-ec7f-4c8f-af87-dd46ab640ebc no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:12:10.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4406" for this suite.
Dec 21 14:12:16.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:12:16.478: INFO: namespace projected-4406 deletion completed in 6.207110816s

• [SLOW TEST:14.622 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
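The "non-root" part of this test is just a pod-level runAsUser; the ConfigMap file is readable because configMap volumes default to world-readable mode 0644. A sketch with illustrative names:

kubectl create configmap demo-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-nonroot-demo              # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # run the container as a non-root UID
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.31
    command: ["sh", "-c", "id -u && cat /etc/cm/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: demo-cm
EOF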
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:12:16.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-86c256f3-ff4b-4b23-be24-544d007944aa
STEP: Creating a pod to test consume configMaps
Dec 21 14:12:16.600: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-34699fe0-60ef-456d-9d22-f637c5f1d1ce" in namespace "projected-4270" to be "success or failure"
Dec 21 14:12:16.637: INFO: Pod "pod-projected-configmaps-34699fe0-60ef-456d-9d22-f637c5f1d1ce": Phase="Pending", Reason="", readiness=false. Elapsed: 36.823776ms
Dec 21 14:12:18.644: INFO: Pod "pod-projected-configmaps-34699fe0-60ef-456d-9d22-f637c5f1d1ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043895561s
Dec 21 14:12:20.653: INFO: Pod "pod-projected-configmaps-34699fe0-60ef-456d-9d22-f637c5f1d1ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053375272s
Dec 21 14:12:22.661: INFO: Pod "pod-projected-configmaps-34699fe0-60ef-456d-9d22-f637c5f1d1ce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061099387s
Dec 21 14:12:24.669: INFO: Pod "pod-projected-configmaps-34699fe0-60ef-456d-9d22-f637c5f1d1ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06939822s
STEP: Saw pod success
Dec 21 14:12:24.670: INFO: Pod "pod-projected-configmaps-34699fe0-60ef-456d-9d22-f637c5f1d1ce" satisfied condition "success or failure"
Dec 21 14:12:24.672: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-34699fe0-60ef-456d-9d22-f637c5f1d1ce container projected-configmap-volume-test: 
STEP: delete the pod
Dec 21 14:12:24.860: INFO: Waiting for pod pod-projected-configmaps-34699fe0-60ef-456d-9d22-f637c5f1d1ce to disappear
Dec 21 14:12:24.884: INFO: Pod pod-projected-configmaps-34699fe0-60ef-456d-9d22-f637c5f1d1ce no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:12:24.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4270" for this suite.
Dec 21 14:12:30.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:12:31.057: INFO: namespace projected-4270 deletion completed in 6.14393963s

• [SLOW TEST:14.579 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
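This variant differs from the previous one only in the "mappings": an items list that renames a ConfigMap key to a custom file path inside the volume. A sketch (names are illustrative):

kubectl create configmap demo-map-cm --from-literal=data-2=value-2
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-mapping-demo              # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: app
    image: busybox:1.31
    command: ["sh", "-c", "cat /etc/cm/renamed-key"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: demo-map-cm
          items:                     # the mapping: key -> custom file path
          - key: data-2
            path: renamed-key
EOF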
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:12:31.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 21 14:12:31.165: INFO: Waiting up to 5m0s for pod "pod-df19c6ec-9ec6-4914-a7c1-cf8733d18c6c" in namespace "emptydir-5697" to be "success or failure"
Dec 21 14:12:31.184: INFO: Pod "pod-df19c6ec-9ec6-4914-a7c1-cf8733d18c6c": Phase="Pending", Reason="", readiness=false. Elapsed: 19.155645ms
Dec 21 14:12:33.194: INFO: Pod "pod-df19c6ec-9ec6-4914-a7c1-cf8733d18c6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029639748s
Dec 21 14:12:35.199: INFO: Pod "pod-df19c6ec-9ec6-4914-a7c1-cf8733d18c6c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033826039s
Dec 21 14:12:37.209: INFO: Pod "pod-df19c6ec-9ec6-4914-a7c1-cf8733d18c6c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044235944s
Dec 21 14:12:39.219: INFO: Pod "pod-df19c6ec-9ec6-4914-a7c1-cf8733d18c6c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054233721s
Dec 21 14:12:41.227: INFO: Pod "pod-df19c6ec-9ec6-4914-a7c1-cf8733d18c6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.06189394s
STEP: Saw pod success
Dec 21 14:12:41.227: INFO: Pod "pod-df19c6ec-9ec6-4914-a7c1-cf8733d18c6c" satisfied condition "success or failure"
Dec 21 14:12:41.231: INFO: Trying to get logs from node iruya-node pod pod-df19c6ec-9ec6-4914-a7c1-cf8733d18c6c container test-container: 
STEP: delete the pod
Dec 21 14:12:41.284: INFO: Waiting for pod pod-df19c6ec-9ec6-4914-a7c1-cf8733d18c6c to disappear
Dec 21 14:12:41.320: INFO: Pod pod-df19c6ec-9ec6-4914-a7c1-cf8733d18c6c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:12:41.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5697" for this suite.
Dec 21 14:12:47.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:12:47.466: INFO: namespace emptydir-5697 deletion completed in 6.137647976s

• [SLOW TEST:16.408 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
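emptyDir has no mode field; the kubelet creates the directory world-writable (0777, the figure in the test name), which is what lets a non-root UID write to it, and medium: Memory backs it with tmpfs. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo          # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # non-root writer
  containers:
  - name: test-container
    image: busybox:1.31
    command: ["sh", "-c", "touch /test-volume/f && ls -ld /test-volume && mount | grep test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                 # backs the volume with tmpfs
EOF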
S
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:12:47.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 21 14:12:55.704: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:12:55.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9404" for this suite.
Dec 21 14:13:01.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:13:02.047: INFO: namespace container-runtime-9404 deletion completed in 6.129461634s

• [SLOW TEST:14.581 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
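The "Expected: &{DONE}" assertion above comes from terminationMessagePolicy: FallbackToLogsOnError: when a container fails without writing /dev/termination-log, the tail of its log becomes the termination message. A minimal reproduction (names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo             # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: app
    image: busybox:1.31
    command: ["sh", "-c", "echo DONE; exit 1"]   # logs DONE, writes no termination-log file
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# On failure with an empty termination-log file, the log tail becomes the message
kubectl get pod termination-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'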
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:13:02.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 21 14:13:02.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Dec 21 14:13:02.243: INFO: stderr: ""
Dec 21 14:13:02.243: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-14T21:37:43Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:13:02.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4298" for this suite.
Dec 21 14:13:08.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:13:08.396: INFO: namespace kubectl-4298 deletion completed in 6.1478428s

• [SLOW TEST:6.349 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
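The test shells out to kubectl and asserts that both the client and the server version.Info structs appear in stdout. The equivalent checks by hand:

kubectl version            # prints the Client Version and Server Version structs
kubectl version -o json    # same data as JSON, under clientVersion and serverVersion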
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:13:08.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3708.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3708.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3708.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3708.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 21 14:13:20.610: INFO: File wheezy_udp@dns-test-service-3.dns-3708.svc.cluster.local from pod  dns-3708/dns-test-56e7dedc-0085-4c13-9db3-0fae70df4ddc contains '' instead of 'foo.example.com.'
Dec 21 14:13:20.628: INFO: File jessie_udp@dns-test-service-3.dns-3708.svc.cluster.local from pod  dns-3708/dns-test-56e7dedc-0085-4c13-9db3-0fae70df4ddc contains '' instead of 'foo.example.com.'
Dec 21 14:13:20.628: INFO: Lookups using dns-3708/dns-test-56e7dedc-0085-4c13-9db3-0fae70df4ddc failed for: [wheezy_udp@dns-test-service-3.dns-3708.svc.cluster.local jessie_udp@dns-test-service-3.dns-3708.svc.cluster.local]

Dec 21 14:13:25.654: INFO: DNS probes using dns-test-56e7dedc-0085-4c13-9db3-0fae70df4ddc succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3708.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3708.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3708.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3708.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 21 14:13:44.659: INFO: File wheezy_udp@dns-test-service-3.dns-3708.svc.cluster.local from pod  dns-3708/dns-test-7806a4f4-d1bd-45a3-900e-3f766bf69a7a contains '' instead of 'bar.example.com.'
Dec 21 14:13:44.675: INFO: File jessie_udp@dns-test-service-3.dns-3708.svc.cluster.local from pod  dns-3708/dns-test-7806a4f4-d1bd-45a3-900e-3f766bf69a7a contains '' instead of 'bar.example.com.'
Dec 21 14:13:44.675: INFO: Lookups using dns-3708/dns-test-7806a4f4-d1bd-45a3-900e-3f766bf69a7a failed for: [wheezy_udp@dns-test-service-3.dns-3708.svc.cluster.local jessie_udp@dns-test-service-3.dns-3708.svc.cluster.local]

Dec 21 14:13:49.689: INFO: File wheezy_udp@dns-test-service-3.dns-3708.svc.cluster.local from pod  dns-3708/dns-test-7806a4f4-d1bd-45a3-900e-3f766bf69a7a contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 21 14:13:49.702: INFO: File jessie_udp@dns-test-service-3.dns-3708.svc.cluster.local from pod  dns-3708/dns-test-7806a4f4-d1bd-45a3-900e-3f766bf69a7a contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 21 14:13:49.702: INFO: Lookups using dns-3708/dns-test-7806a4f4-d1bd-45a3-900e-3f766bf69a7a failed for: [wheezy_udp@dns-test-service-3.dns-3708.svc.cluster.local jessie_udp@dns-test-service-3.dns-3708.svc.cluster.local]

Dec 21 14:13:54.700: INFO: DNS probes using dns-test-7806a4f4-d1bd-45a3-900e-3f766bf69a7a succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3708.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3708.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3708.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3708.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 21 14:14:11.144: INFO: File wheezy_udp@dns-test-service-3.dns-3708.svc.cluster.local from pod  dns-3708/dns-test-8034dfc6-a743-46f4-a8f7-8211101941ac contains '' instead of '10.104.33.14'
Dec 21 14:14:11.151: INFO: File jessie_udp@dns-test-service-3.dns-3708.svc.cluster.local from pod  dns-3708/dns-test-8034dfc6-a743-46f4-a8f7-8211101941ac contains '' instead of '10.104.33.14'
Dec 21 14:14:11.151: INFO: Lookups using dns-3708/dns-test-8034dfc6-a743-46f4-a8f7-8211101941ac failed for: [wheezy_udp@dns-test-service-3.dns-3708.svc.cluster.local jessie_udp@dns-test-service-3.dns-3708.svc.cluster.local]

Dec 21 14:14:16.185: INFO: File jessie_udp@dns-test-service-3.dns-3708.svc.cluster.local from pod  dns-3708/dns-test-8034dfc6-a743-46f4-a8f7-8211101941ac contains '' instead of '10.104.33.14'
Dec 21 14:14:16.185: INFO: Lookups using dns-3708/dns-test-8034dfc6-a743-46f4-a8f7-8211101941ac failed for: [jessie_udp@dns-test-service-3.dns-3708.svc.cluster.local]

Dec 21 14:14:21.169: INFO: DNS probes using dns-test-8034dfc6-a743-46f4-a8f7-8211101941ac succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:14:21.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3708" for this suite.
Dec 21 14:14:29.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:14:29.911: INFO: namespace dns-3708 deletion completed in 8.484338s

• [SLOW TEST:81.515 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
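The DNS sequence above is driven by mutating one Service. A sketch of the three phases the probes verify (service name mirrors the log; the example.com targets are the test's placeholders):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3           # illustrative, matching the log
spec:
  type: ExternalName
  externalName: foo.example.com
EOF
# From any pod in the namespace, the service name resolves as a CNAME:
#   dig +short dns-test-service-3.<namespace>.svc.cluster.local CNAME -> foo.example.com.
# Re-pointing the ExternalName changes the CNAME target, as the test does:
kubectl patch service dns-test-service-3 -p '{"spec":{"externalName":"bar.example.com"}}'
# Converting the service to type=ClusterIP makes the same DNS name return an A record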
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:14:29.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Dec 21 14:14:29.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7745'
Dec 21 14:14:32.022: INFO: stderr: ""
Dec 21 14:14:32.022: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 21 14:14:32.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7745'
Dec 21 14:14:32.199: INFO: stderr: ""
Dec 21 14:14:32.199: INFO: stdout: "update-demo-nautilus-jl4rh update-demo-nautilus-r4fgw "
Dec 21 14:14:32.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jl4rh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7745'
Dec 21 14:14:32.300: INFO: stderr: ""
Dec 21 14:14:32.300: INFO: stdout: ""
Dec 21 14:14:32.300: INFO: update-demo-nautilus-jl4rh is created but not running
Dec 21 14:14:37.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7745'
Dec 21 14:14:39.616: INFO: stderr: ""
Dec 21 14:14:39.616: INFO: stdout: "update-demo-nautilus-jl4rh update-demo-nautilus-r4fgw "
Dec 21 14:14:39.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jl4rh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7745'
Dec 21 14:14:40.165: INFO: stderr: ""
Dec 21 14:14:40.166: INFO: stdout: ""
Dec 21 14:14:40.166: INFO: update-demo-nautilus-jl4rh is created but not running
Dec 21 14:14:45.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7745'
Dec 21 14:14:45.322: INFO: stderr: ""
Dec 21 14:14:45.322: INFO: stdout: "update-demo-nautilus-jl4rh update-demo-nautilus-r4fgw "
Dec 21 14:14:45.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jl4rh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7745'
Dec 21 14:14:45.449: INFO: stderr: ""
Dec 21 14:14:45.449: INFO: stdout: "true"
Dec 21 14:14:45.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jl4rh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7745'
Dec 21 14:14:45.555: INFO: stderr: ""
Dec 21 14:14:45.555: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 21 14:14:45.555: INFO: validating pod update-demo-nautilus-jl4rh
Dec 21 14:14:45.562: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 21 14:14:45.562: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 21 14:14:45.562: INFO: update-demo-nautilus-jl4rh is verified up and running
Dec 21 14:14:45.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r4fgw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7745'
Dec 21 14:14:45.647: INFO: stderr: ""
Dec 21 14:14:45.647: INFO: stdout: "true"
Dec 21 14:14:45.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r4fgw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7745'
Dec 21 14:14:45.739: INFO: stderr: ""
Dec 21 14:14:45.739: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 21 14:14:45.739: INFO: validating pod update-demo-nautilus-r4fgw
Dec 21 14:14:45.751: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 21 14:14:45.751: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 21 14:14:45.751: INFO: update-demo-nautilus-r4fgw is verified up and running
STEP: scaling down the replication controller
Dec 21 14:14:45.753: INFO: scanned /root for discovery docs: 
Dec 21 14:14:45.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7745'
Dec 21 14:14:46.894: INFO: stderr: ""
Dec 21 14:14:46.894: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 21 14:14:46.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7745'
Dec 21 14:14:47.019: INFO: stderr: ""
Dec 21 14:14:47.019: INFO: stdout: "update-demo-nautilus-jl4rh update-demo-nautilus-r4fgw "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 21 14:14:52.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7745'
Dec 21 14:14:52.172: INFO: stderr: ""
Dec 21 14:14:52.172: INFO: stdout: "update-demo-nautilus-jl4rh "
Dec 21 14:14:52.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jl4rh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7745'
Dec 21 14:14:52.269: INFO: stderr: ""
Dec 21 14:14:52.270: INFO: stdout: "true"
Dec 21 14:14:52.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jl4rh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7745'
Dec 21 14:14:52.363: INFO: stderr: ""
Dec 21 14:14:52.364: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 21 14:14:52.364: INFO: validating pod update-demo-nautilus-jl4rh
Dec 21 14:14:52.369: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 21 14:14:52.369: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 21 14:14:52.369: INFO: update-demo-nautilus-jl4rh is verified up and running
STEP: scaling up the replication controller
Dec 21 14:14:52.371: INFO: scanned /root for discovery docs: 
Dec 21 14:14:52.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7745'
Dec 21 14:14:53.472: INFO: stderr: ""
Dec 21 14:14:53.472: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 21 14:14:53.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7745'
Dec 21 14:14:53.607: INFO: stderr: ""
Dec 21 14:14:53.608: INFO: stdout: "update-demo-nautilus-98tb8 update-demo-nautilus-jl4rh "
Dec 21 14:14:53.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-98tb8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7745'
Dec 21 14:14:53.684: INFO: stderr: ""
Dec 21 14:14:53.684: INFO: stdout: ""
Dec 21 14:14:53.684: INFO: update-demo-nautilus-98tb8 is created but not running
Dec 21 14:14:58.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7745'
Dec 21 14:14:58.798: INFO: stderr: ""
Dec 21 14:14:58.798: INFO: stdout: "update-demo-nautilus-98tb8 update-demo-nautilus-jl4rh "
Dec 21 14:14:58.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-98tb8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7745'
Dec 21 14:14:58.908: INFO: stderr: ""
Dec 21 14:14:58.908: INFO: stdout: ""
Dec 21 14:14:58.908: INFO: update-demo-nautilus-98tb8 is created but not running
Dec 21 14:15:03.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7745'
Dec 21 14:15:04.017: INFO: stderr: ""
Dec 21 14:15:04.017: INFO: stdout: "update-demo-nautilus-98tb8 update-demo-nautilus-jl4rh "
Dec 21 14:15:04.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-98tb8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7745'
Dec 21 14:15:04.127: INFO: stderr: ""
Dec 21 14:15:04.127: INFO: stdout: "true"
Dec 21 14:15:04.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-98tb8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7745'
Dec 21 14:15:04.221: INFO: stderr: ""
Dec 21 14:15:04.222: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 21 14:15:04.222: INFO: validating pod update-demo-nautilus-98tb8
Dec 21 14:15:04.246: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 21 14:15:04.246: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 21 14:15:04.246: INFO: update-demo-nautilus-98tb8 is verified up and running
Dec 21 14:15:04.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jl4rh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7745'
Dec 21 14:15:04.341: INFO: stderr: ""
Dec 21 14:15:04.341: INFO: stdout: "true"
Dec 21 14:15:04.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jl4rh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7745'
Dec 21 14:15:04.415: INFO: stderr: ""
Dec 21 14:15:04.415: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 21 14:15:04.415: INFO: validating pod update-demo-nautilus-jl4rh
Dec 21 14:15:04.420: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 21 14:15:04.421: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 21 14:15:04.421: INFO: update-demo-nautilus-jl4rh is verified up and running
STEP: using delete to clean up resources
Dec 21 14:15:04.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7745'
Dec 21 14:15:04.545: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 21 14:15:04.545: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 21 14:15:04.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7745'
Dec 21 14:15:04.631: INFO: stderr: "No resources found.\n"
Dec 21 14:15:04.631: INFO: stdout: ""
Dec 21 14:15:04.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7745 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 21 14:15:04.695: INFO: stderr: ""
Dec 21 14:15:04.695: INFO: stdout: "update-demo-nautilus-98tb8\nupdate-demo-nautilus-jl4rh\n"
Dec 21 14:15:05.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7745'
Dec 21 14:15:05.319: INFO: stderr: "No resources found.\n"
Dec 21 14:15:05.319: INFO: stdout: ""
Dec 21 14:15:05.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7745 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 21 14:15:05.416: INFO: stderr: ""
Dec 21 14:15:05.416: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:15:05.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7745" for this suite.
Dec 21 14:15:28.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:15:28.344: INFO: namespace kubectl-7745 deletion completed in 22.921157244s

• [SLOW TEST:58.432 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
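The scale-down/scale-up sequence above reduces to two kubectl invocations plus the Go template the test uses to list matching pods (RC name and label taken from the log; assumes the RC exists):

kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m
kubectl get pods -l name=update-demo -o template \
  --template='{{range .items}}{{.metadata.name}} {{end}}'
kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m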
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:15:28.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-d66d5115-c7f8-45ac-adbc-0fba3e6b2c30
STEP: Creating a pod to test consume secrets
Dec 21 14:15:28.441: INFO: Waiting up to 5m0s for pod "pod-secrets-68380350-c5a8-48e2-837f-7c79a9e229bf" in namespace "secrets-3692" to be "success or failure"
Dec 21 14:15:28.451: INFO: Pod "pod-secrets-68380350-c5a8-48e2-837f-7c79a9e229bf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.448923ms
Dec 21 14:15:30.465: INFO: Pod "pod-secrets-68380350-c5a8-48e2-837f-7c79a9e229bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024732694s
Dec 21 14:15:32.476: INFO: Pod "pod-secrets-68380350-c5a8-48e2-837f-7c79a9e229bf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034905012s
Dec 21 14:15:34.573: INFO: Pod "pod-secrets-68380350-c5a8-48e2-837f-7c79a9e229bf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.131895371s
Dec 21 14:15:36.584: INFO: Pod "pod-secrets-68380350-c5a8-48e2-837f-7c79a9e229bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.142822292s
STEP: Saw pod success
Dec 21 14:15:36.584: INFO: Pod "pod-secrets-68380350-c5a8-48e2-837f-7c79a9e229bf" satisfied condition "success or failure"
Dec 21 14:15:36.593: INFO: Trying to get logs from node iruya-node pod pod-secrets-68380350-c5a8-48e2-837f-7c79a9e229bf container secret-volume-test: 
STEP: delete the pod
Dec 21 14:15:37.015: INFO: Waiting for pod pod-secrets-68380350-c5a8-48e2-837f-7c79a9e229bf to disappear
Dec 21 14:15:37.020: INFO: Pod pod-secrets-68380350-c5a8-48e2-837f-7c79a9e229bf no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:15:37.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3692" for this suite.
Dec 21 14:15:43.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:15:43.202: INFO: namespace secrets-3692 deletion completed in 6.17795959s

• [SLOW TEST:14.857 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
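
The secret-with-mappings case above boils down to a secret volume whose items remap a data key to a new file path inside the mount. A minimal standalone sketch, with hypothetical names standing in for the generated ones (secret-test-map-..., pod-secrets-...):

  # Create a secret, then a pod that reads the key back via the remapped path.
  kubectl create secret generic secret-test-map --from-literal=data-1=value-1
  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-secrets-demo
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox
      command: ["cat", "/etc/secret-volume/new-path-data-1"]   # mapped path, not the key name
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
    volumes:
    - name: secret-volume
      secret:
        secretName: secret-test-map
        items:
        - key: data-1
          path: new-path-data-1
  EOF
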
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:15:43.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-h9kh
STEP: Creating a pod to test atomic-volume-subpath
Dec 21 14:15:43.370: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-h9kh" in namespace "subpath-6157" to be "success or failure"
Dec 21 14:15:43.392: INFO: Pod "pod-subpath-test-configmap-h9kh": Phase="Pending", Reason="", readiness=false. Elapsed: 21.105918ms
Dec 21 14:15:45.398: INFO: Pod "pod-subpath-test-configmap-h9kh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02739311s
Dec 21 14:15:47.411: INFO: Pod "pod-subpath-test-configmap-h9kh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04028409s
Dec 21 14:15:49.464: INFO: Pod "pod-subpath-test-configmap-h9kh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094039259s
Dec 21 14:15:51.492: INFO: Pod "pod-subpath-test-configmap-h9kh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.121351375s
Dec 21 14:15:53.498: INFO: Pod "pod-subpath-test-configmap-h9kh": Phase="Running", Reason="", readiness=true. Elapsed: 10.127695308s
Dec 21 14:15:55.506: INFO: Pod "pod-subpath-test-configmap-h9kh": Phase="Running", Reason="", readiness=true. Elapsed: 12.13520279s
Dec 21 14:15:57.521: INFO: Pod "pod-subpath-test-configmap-h9kh": Phase="Running", Reason="", readiness=true. Elapsed: 14.150125739s
Dec 21 14:15:59.904: INFO: Pod "pod-subpath-test-configmap-h9kh": Phase="Running", Reason="", readiness=true. Elapsed: 16.533390135s
Dec 21 14:16:01.911: INFO: Pod "pod-subpath-test-configmap-h9kh": Phase="Running", Reason="", readiness=true. Elapsed: 18.540237277s
Dec 21 14:16:03.919: INFO: Pod "pod-subpath-test-configmap-h9kh": Phase="Running", Reason="", readiness=true. Elapsed: 20.548474359s
Dec 21 14:16:05.925: INFO: Pod "pod-subpath-test-configmap-h9kh": Phase="Running", Reason="", readiness=true. Elapsed: 22.554698021s
Dec 21 14:16:07.931: INFO: Pod "pod-subpath-test-configmap-h9kh": Phase="Running", Reason="", readiness=true. Elapsed: 24.560637199s
Dec 21 14:16:09.940: INFO: Pod "pod-subpath-test-configmap-h9kh": Phase="Running", Reason="", readiness=true. Elapsed: 26.569980318s
Dec 21 14:16:11.948: INFO: Pod "pod-subpath-test-configmap-h9kh": Phase="Running", Reason="", readiness=true. Elapsed: 28.577863148s
Dec 21 14:16:13.956: INFO: Pod "pod-subpath-test-configmap-h9kh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.585379753s
STEP: Saw pod success
Dec 21 14:16:13.956: INFO: Pod "pod-subpath-test-configmap-h9kh" satisfied condition "success or failure"
Dec 21 14:16:13.961: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-h9kh container test-container-subpath-configmap-h9kh: 
STEP: delete the pod
Dec 21 14:16:14.234: INFO: Waiting for pod pod-subpath-test-configmap-h9kh to disappear
Dec 21 14:16:14.244: INFO: Pod pod-subpath-test-configmap-h9kh no longer exists
STEP: Deleting pod pod-subpath-test-configmap-h9kh
Dec 21 14:16:14.245: INFO: Deleting pod "pod-subpath-test-configmap-h9kh" in namespace "subpath-6157"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:16:14.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6157" for this suite.
Dec 21 14:16:20.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:16:20.492: INFO: namespace subpath-6157 deletion completed in 6.236626772s

• [SLOW TEST:37.289 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
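
The subpath case mounts a single key of a configMap volume as one file via subPath, which is the shape the atomic-writer machinery has to support. A sketch under hypothetical names:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: subpath-demo
  data:
    greeting: "hello from a subpath"
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-subpath-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["cat", "/data/greeting"]
      volumeMounts:
      - name: cm
        mountPath: /data/greeting   # a single file, selected by subPath below
        subPath: greeting
    volumes:
    - name: cm
      configMap:
        name: subpath-demo
  EOF
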
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:16:20.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-348
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 21 14:16:20.549: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 21 14:17:00.764: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-348 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 21 14:17:00.764: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 14:17:01.231: INFO: Waiting for endpoints: map[]
Dec 21 14:17:01.239: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-348 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 21 14:17:01.239: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 14:17:01.552: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:17:01.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-348" for this suite.
Dec 21 14:17:25.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:17:25.696: INFO: namespace pod-network-test-348 deletion completed in 24.137438612s

• [SLOW TEST:65.204 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
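
The UDP check above is driven from a host-network test pod: it curls an HTTP "dial" endpoint on one test pod, which relays a UDP hostName request to the target pod and reports what answered. The exact probe, copied from the run (the IPs and ports are specific to this cluster):

  curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'
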
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:17:25.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-6fcc8eb6-947f-4ad6-9fb9-08a8c3f9cf15 in namespace container-probe-377
Dec 21 14:17:35.881: INFO: Started pod test-webserver-6fcc8eb6-947f-4ad6-9fb9-08a8c3f9cf15 in namespace container-probe-377
STEP: checking the pod's current state and verifying that restartCount is present
Dec 21 14:17:35.888: INFO: Initial restart count of pod test-webserver-6fcc8eb6-947f-4ad6-9fb9-08a8c3f9cf15 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:21:36.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-377" for this suite.
Dec 21 14:21:42.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:21:42.456: INFO: namespace container-probe-377 deletion completed in 6.114814209s

• [SLOW TEST:256.758 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
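
The passing-liveness case comes down to an httpGet probe against a path the container actually serves, so restartCount stays at 0 for the whole ~4-minute observation window above. The suite's pod uses its own test-webserver image probing /healthz; a standalone approximation with stock nginx and the root path instead (names hypothetical):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: test-webserver-demo
  spec:
    containers:
    - name: webserver
      image: docker.io/library/nginx:1.14-alpine
      livenessProbe:
        httpGet:
          path: /          # nginx serves this, so the probe keeps succeeding
          port: 80
        initialDelaySeconds: 15
        timeoutSeconds: 5
  EOF
  kubectl get pod test-webserver-demo -w   # RESTARTS should stay at 0
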
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:21:42.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-c108496d-d7fc-4cab-8e25-94df8984b047
STEP: Creating a pod to test consume secrets
Dec 21 14:21:42.654: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ad1e4e7a-e594-494e-a17a-118721a9b4de" in namespace "projected-2580" to be "success or failure"
Dec 21 14:21:42.667: INFO: Pod "pod-projected-secrets-ad1e4e7a-e594-494e-a17a-118721a9b4de": Phase="Pending", Reason="", readiness=false. Elapsed: 12.693935ms
Dec 21 14:21:44.679: INFO: Pod "pod-projected-secrets-ad1e4e7a-e594-494e-a17a-118721a9b4de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024966351s
Dec 21 14:21:46.688: INFO: Pod "pod-projected-secrets-ad1e4e7a-e594-494e-a17a-118721a9b4de": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033807763s
Dec 21 14:21:48.701: INFO: Pod "pod-projected-secrets-ad1e4e7a-e594-494e-a17a-118721a9b4de": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047006677s
Dec 21 14:21:50.719: INFO: Pod "pod-projected-secrets-ad1e4e7a-e594-494e-a17a-118721a9b4de": Phase="Running", Reason="", readiness=true. Elapsed: 8.064940109s
Dec 21 14:21:52.727: INFO: Pod "pod-projected-secrets-ad1e4e7a-e594-494e-a17a-118721a9b4de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073197572s
STEP: Saw pod success
Dec 21 14:21:52.727: INFO: Pod "pod-projected-secrets-ad1e4e7a-e594-494e-a17a-118721a9b4de" satisfied condition "success or failure"
Dec 21 14:21:52.732: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-ad1e4e7a-e594-494e-a17a-118721a9b4de container projected-secret-volume-test: 
STEP: delete the pod
Dec 21 14:21:52.828: INFO: Waiting for pod pod-projected-secrets-ad1e4e7a-e594-494e-a17a-118721a9b4de to disappear
Dec 21 14:21:52.901: INFO: Pod pod-projected-secrets-ad1e4e7a-e594-494e-a17a-118721a9b4de no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:21:52.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2580" for this suite.
Dec 21 14:21:58.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:21:59.054: INFO: namespace projected-2580 deletion completed in 6.143622759s

• [SLOW TEST:16.598 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
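
The projected variant exercises the same key-to-path mapping as the earlier secret test, but through a projected volume source rather than a plain secret volume. A sketch, hypothetical names again:

  kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-secrets-demo
  spec:
    restartPolicy: Never
    containers:
    - name: projected-secret-volume-test
      image: busybox
      command: ["cat", "/etc/projected-secret-volume/projected-secret-key"]
      volumeMounts:
      - name: projected-secret-volume
        mountPath: /etc/projected-secret-volume
    volumes:
    - name: projected-secret-volume
      projected:
        sources:
        - secret:
            name: projected-secret-demo
            items:
            - key: data-1
              path: projected-secret-key
  EOF
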
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:21:59.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-8d4b18a6-ab7f-440f-8fc7-6b2891f658eb in namespace container-probe-7621
Dec 21 14:22:07.135: INFO: Started pod busybox-8d4b18a6-ab7f-440f-8fc7-6b2891f658eb in namespace container-probe-7621
STEP: checking the pod's current state and verifying that restartCount is present
Dec 21 14:22:07.139: INFO: Initial restart count of pod busybox-8d4b18a6-ab7f-440f-8fc7-6b2891f658eb is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:26:07.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7621" for this suite.
Dec 21 14:26:13.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:26:13.840: INFO: namespace container-probe-7621 deletion completed in 6.173693236s

• [SLOW TEST:254.786 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
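
The exec-probe counterpart succeeds as long as the probed file exists, so the container is never restarted during its ~4-minute observation window. A sketch of the same shape as the suite's busybox pod (names hypothetical):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: busybox-liveness-demo
  spec:
    containers:
    - name: busybox
      image: busybox
      args: ["/bin/sh", "-c", "echo ok > /tmp/health; sleep 600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/health"]   # exits 0 while the file exists
        initialDelaySeconds: 15
        periodSeconds: 5
  EOF
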
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:26:13.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 21 14:26:13.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-9424'
Dec 21 14:26:15.968: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 21 14:26:15.969: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Dec 21 14:26:16.012: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Dec 21 14:26:16.033: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Dec 21 14:26:16.089: INFO: scanned /root for discovery docs: 
Dec 21 14:26:16.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-9424'
Dec 21 14:26:38.696: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 21 14:26:38.696: INFO: stdout: "Created e2e-test-nginx-rc-88beddf747bef6fd257b1289a3915f97\nScaling up e2e-test-nginx-rc-88beddf747bef6fd257b1289a3915f97 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-88beddf747bef6fd257b1289a3915f97 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-88beddf747bef6fd257b1289a3915f97 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Dec 21 14:26:38.696: INFO: stdout: "Created e2e-test-nginx-rc-88beddf747bef6fd257b1289a3915f97\nScaling up e2e-test-nginx-rc-88beddf747bef6fd257b1289a3915f97 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-88beddf747bef6fd257b1289a3915f97 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-88beddf747bef6fd257b1289a3915f97 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Dec 21 14:26:38.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-9424'
Dec 21 14:26:38.831: INFO: stderr: ""
Dec 21 14:26:38.831: INFO: stdout: "e2e-test-nginx-rc-88beddf747bef6fd257b1289a3915f97-4c2gg "
Dec 21 14:26:38.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-88beddf747bef6fd257b1289a3915f97-4c2gg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9424'
Dec 21 14:26:38.983: INFO: stderr: ""
Dec 21 14:26:38.983: INFO: stdout: "true"
Dec 21 14:26:38.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-88beddf747bef6fd257b1289a3915f97-4c2gg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9424'
Dec 21 14:26:39.088: INFO: stderr: ""
Dec 21 14:26:39.088: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Dec 21 14:26:39.088: INFO: e2e-test-nginx-rc-88beddf747bef6fd257b1289a3915f97-4c2gg is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Dec 21 14:26:39.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-9424'
Dec 21 14:26:39.198: INFO: stderr: ""
Dec 21 14:26:39.198: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:26:39.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9424" for this suite.
Dec 21 14:27:01.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:27:01.449: INFO: namespace kubectl-9424 deletion completed in 22.221825131s

• [SLOW TEST:47.609 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
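
Stripped of the framework wrapping, the flow this test drives is just two CLI calls, both of which kubectl itself flags as deprecated in the stderr above (rolling-update was later removed in favor of Deployments and kubectl rollout):

  # Create a replication controller, then roll it to the same image.
  kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine \
    --generator=run/v1 --namespace=kubectl-9424
  kubectl rolling-update e2e-test-nginx-rc --update-period=1s \
    --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent \
    --namespace=kubectl-9424
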
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:27:01.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-751d4f03-8552-4cfb-bc53-82ea40baf681
STEP: Creating a pod to test consume configMaps
Dec 21 14:27:01.575: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-29eb381c-4954-4d96-8b0d-46594a126e5d" in namespace "projected-8560" to be "success or failure"
Dec 21 14:27:01.587: INFO: Pod "pod-projected-configmaps-29eb381c-4954-4d96-8b0d-46594a126e5d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.979646ms
Dec 21 14:27:03.598: INFO: Pod "pod-projected-configmaps-29eb381c-4954-4d96-8b0d-46594a126e5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022731426s
Dec 21 14:27:05.620: INFO: Pod "pod-projected-configmaps-29eb381c-4954-4d96-8b0d-46594a126e5d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044614578s
Dec 21 14:27:07.629: INFO: Pod "pod-projected-configmaps-29eb381c-4954-4d96-8b0d-46594a126e5d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053314123s
Dec 21 14:27:09.636: INFO: Pod "pod-projected-configmaps-29eb381c-4954-4d96-8b0d-46594a126e5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060989655s
STEP: Saw pod success
Dec 21 14:27:09.636: INFO: Pod "pod-projected-configmaps-29eb381c-4954-4d96-8b0d-46594a126e5d" satisfied condition "success or failure"
Dec 21 14:27:09.641: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-29eb381c-4954-4d96-8b0d-46594a126e5d container projected-configmap-volume-test: 
STEP: delete the pod
Dec 21 14:27:09.798: INFO: Waiting for pod pod-projected-configmaps-29eb381c-4954-4d96-8b0d-46594a126e5d to disappear
Dec 21 14:27:09.806: INFO: Pod pod-projected-configmaps-29eb381c-4954-4d96-8b0d-46594a126e5d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:27:09.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8560" for this suite.
Dec 21 14:27:15.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:27:15.991: INFO: namespace projected-8560 deletion completed in 6.17575863s

• [SLOW TEST:14.542 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
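
"Multiple volumes in the same pod" here means the same configMap projected at two mount points and read back from both. A sketch with hypothetical names:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: projected-cm-demo
  data:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-cm-demo
  spec:
    restartPolicy: Never
    containers:
    - name: projected-configmap-volume-test
      image: busybox
      command: ["/bin/sh", "-c", "cat /etc/projected-1/data-1 /etc/projected-2/data-1"]
      volumeMounts:
      - name: projected-1
        mountPath: /etc/projected-1
      - name: projected-2
        mountPath: /etc/projected-2
    volumes:
    - name: projected-1
      projected:
        sources:
        - configMap:
            name: projected-cm-demo
    - name: projected-2
      projected:
        sources:
        - configMap:
            name: projected-cm-demo
  EOF
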
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:27:15.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 21 14:27:16.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:27:24.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2743" for this suite.
Dec 21 14:28:28.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:28:28.957: INFO: namespace pods-2743 deletion completed in 1m4.43203298s

• [SLOW TEST:72.967 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
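
The websocket case talks to the pods/exec subresource of the API server directly rather than shelling out to kubectl; the CLI equivalent of the same call is a plain kubectl exec. A sketch (pod name hypothetical):

  # What the suite does over a websocket, expressed via the CLI:
  kubectl exec pod-exec-websocket-demo -- /bin/sh -c 'echo remote execution works'
  # The underlying subresource the websocket client dials has roughly this shape:
  #   /api/v1/namespaces/<ns>/pods/<pod>/exec?command=/bin/sh&command=-c&command=...&stdout=true&stderr=true
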
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:28:28.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-6270ed3e-24a3-4713-b8f7-4bfb57656185
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:28:29.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7372" for this suite.
Dec 21 14:28:35.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:28:35.348: INFO: namespace configmap-7372 deletion completed in 6.19126785s

• [SLOW TEST:6.390 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
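
The failure case needs no pod at all: API-server validation rejects a ConfigMap whose data map contains an empty key, which is exactly what the step above provokes. A sketch (name hypothetical):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: configmap-empty-key-demo
  data:
    "": "value"
  EOF
  # Expected: the request is rejected with a validation (Invalid) error on the empty key.
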
SS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:28:35.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 21 14:28:35.464: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Dec 21 14:28:35.485: INFO: Pod name sample-pod: Found 0 pods out of 1
Dec 21 14:28:40.500: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 21 14:28:42.516: INFO: Creating deployment "test-rolling-update-deployment"
Dec 21 14:28:42.531: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Dec 21 14:28:42.663: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Dec 21 14:28:44.678: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Dec 21 14:28:44.683: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712535322, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712535322, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712535322, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712535322, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 14:28:46.697: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712535322, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712535322, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712535322, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712535322, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 14:28:48.697: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712535322, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712535322, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712535322, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712535322, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 14:28:50.742: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712535322, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712535322, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712535330, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712535322, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 14:28:52.697: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 21 14:28:52.727: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-6537,SelfLink:/apis/apps/v1/namespaces/deployment-6537/deployments/test-rolling-update-deployment,UID:f7287c26-57e5-4f3b-8191-3745550f79e8,ResourceVersion:17524216,Generation:1,CreationTimestamp:2019-12-21 14:28:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-21 14:28:42 +0000 UTC 2019-12-21 14:28:42 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-21 14:28:50 +0000 UTC 2019-12-21 14:28:42 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 21 14:28:52.731: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-6537,SelfLink:/apis/apps/v1/namespaces/deployment-6537/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:5a353fe4-19ec-40e0-a5b5-f6d36e1a231d,ResourceVersion:17524205,Generation:1,CreationTimestamp:2019-12-21 14:28:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment f7287c26-57e5-4f3b-8191-3745550f79e8 0xc0009c87f7 0xc0009c87f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 21 14:28:52.731: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Dec 21 14:28:52.731: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-6537,SelfLink:/apis/apps/v1/namespaces/deployment-6537/replicasets/test-rolling-update-controller,UID:29f21a9c-f668-4273-970d-bbb5df789a06,ResourceVersion:17524215,Generation:2,CreationTimestamp:2019-12-21 14:28:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment f7287c26-57e5-4f3b-8191-3745550f79e8 0xc0009c8717 0xc0009c8718}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 21 14:28:52.735: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-t457f" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-t457f,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-6537,SelfLink:/api/v1/namespaces/deployment-6537/pods/test-rolling-update-deployment-79f6b9d75c-t457f,UID:34da46cd-0840-488f-8a76-76060f79355e,ResourceVersion:17524204,Generation:0,CreationTimestamp:2019-12-21 14:28:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 5a353fe4-19ec-40e0-a5b5-f6d36e1a231d 0xc0009c9547 0xc0009c9548}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-x9wns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x9wns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-x9wns true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0009c95c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0009c95e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:28:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:28:50 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:28:50 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:28:42 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-21 14:28:42 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-21 14:28:50 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://50c3c99ff1f1f9049a1e0ef7f8b9a4373c6b1b8e6a61f74f143d3716c279f41e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:28:52.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6537" for this suite.
Dec 21 14:28:58.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:28:58.879: INFO: namespace deployment-6537 deletion completed in 6.138839118s

• [SLOW TEST:23.531 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
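
The strategy block visible in the dump above (RollingUpdate with 25% maxUnavailable and 25% maxSurge, the apps/v1 defaults) is what drives the delete-old/create-new behavior. A standalone sketch of an equivalent Deployment (names hypothetical; the suite additionally pre-creates a bare ReplicaSet with matching labels so the Deployment adopts it, as logged above):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: rolling-update-demo
  spec:
    replicas: 1
    selector:
      matchLabels:
        name: sample-pod
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 25%
        maxSurge: 25%
    template:
      metadata:
        labels:
          name: sample-pod
      spec:
        containers:
        - name: redis
          image: gcr.io/kubernetes-e2e-test-images/redis:1.0
  EOF
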
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:28:58.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 21 14:28:59.045: INFO: Waiting up to 5m0s for pod "pod-e9fc8cc5-18c2-4d0c-836b-682886cff4af" in namespace "emptydir-5925" to be "success or failure"
Dec 21 14:28:59.056: INFO: Pod "pod-e9fc8cc5-18c2-4d0c-836b-682886cff4af": Phase="Pending", Reason="", readiness=false. Elapsed: 10.785681ms
Dec 21 14:29:01.063: INFO: Pod "pod-e9fc8cc5-18c2-4d0c-836b-682886cff4af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018540542s
Dec 21 14:29:03.131: INFO: Pod "pod-e9fc8cc5-18c2-4d0c-836b-682886cff4af": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085701241s
Dec 21 14:29:05.137: INFO: Pod "pod-e9fc8cc5-18c2-4d0c-836b-682886cff4af": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092630542s
Dec 21 14:29:07.145: INFO: Pod "pod-e9fc8cc5-18c2-4d0c-836b-682886cff4af": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100658263s
Dec 21 14:29:09.160: INFO: Pod "pod-e9fc8cc5-18c2-4d0c-836b-682886cff4af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.115028403s
STEP: Saw pod success
Dec 21 14:29:09.160: INFO: Pod "pod-e9fc8cc5-18c2-4d0c-836b-682886cff4af" satisfied condition "success or failure"
Dec 21 14:29:09.163: INFO: Trying to get logs from node iruya-node pod pod-e9fc8cc5-18c2-4d0c-836b-682886cff4af container test-container: 
STEP: delete the pod
Dec 21 14:29:09.327: INFO: Waiting for pod pod-e9fc8cc5-18c2-4d0c-836b-682886cff4af to disappear
Dec 21 14:29:09.336: INFO: Pod pod-e9fc8cc5-18c2-4d0c-836b-682886cff4af no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:29:09.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5925" for this suite.
Dec 21 14:29:15.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:29:15.524: INFO: namespace emptydir-5925 deletion completed in 6.181100101s

• [SLOW TEST:16.644 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
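
The (non-root,0666,tmpfs) tuple decodes to: run the container as a non-root UID, create a file with mode 0666, on a memory-backed emptyDir. A sketch of the same shape (UID and names hypothetical):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001                  # the "non-root" part
    containers:
    - name: test-container
      image: busybox
      command: ["/bin/sh", "-c", "echo hi > /test/file && chmod 0666 /test/file && ls -l /test/file"]
      volumeMounts:
      - name: scratch
        mountPath: /test
    volumes:
    - name: scratch
      emptyDir:
        medium: Memory                 # the "tmpfs" part
  EOF
  kubectl logs emptydir-demo           # once Succeeded; expect -rw-rw-rw-
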
SS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:29:15.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Dec 21 14:29:15.686: INFO: Waiting up to 5m0s for pod "pod-4b5e0615-489c-4a95-a200-e9cc288d3c9c" in namespace "emptydir-8309" to be "success or failure"
Dec 21 14:29:15.695: INFO: Pod "pod-4b5e0615-489c-4a95-a200-e9cc288d3c9c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.323741ms
Dec 21 14:29:17.705: INFO: Pod "pod-4b5e0615-489c-4a95-a200-e9cc288d3c9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019158695s
Dec 21 14:29:19.745: INFO: Pod "pod-4b5e0615-489c-4a95-a200-e9cc288d3c9c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059559006s
Dec 21 14:29:21.778: INFO: Pod "pod-4b5e0615-489c-4a95-a200-e9cc288d3c9c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092198402s
Dec 21 14:29:23.786: INFO: Pod "pod-4b5e0615-489c-4a95-a200-e9cc288d3c9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.100202401s
STEP: Saw pod success
Dec 21 14:29:23.786: INFO: Pod "pod-4b5e0615-489c-4a95-a200-e9cc288d3c9c" satisfied condition "success or failure"
Dec 21 14:29:23.792: INFO: Trying to get logs from node iruya-node pod pod-4b5e0615-489c-4a95-a200-e9cc288d3c9c container test-container: 
STEP: delete the pod
Dec 21 14:29:23.878: INFO: Waiting for pod pod-4b5e0615-489c-4a95-a200-e9cc288d3c9c to disappear
Dec 21 14:29:23.883: INFO: Pod pod-4b5e0615-489c-4a95-a200-e9cc288d3c9c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:29:23.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8309" for this suite.
Dec 21 14:29:29.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:29:30.059: INFO: namespace emptydir-8309 deletion completed in 6.169824461s

• [SLOW TEST:14.535 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
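
The companion mode check above inspects the mount point itself rather than a file inside it; a memory-backed emptyDir shows up as a tmpfs mount with, by default, mode 0777. A sketch that prints both (names hypothetical):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: tmpfs-mode-check
  spec:
    restartPolicy: Never
    containers:
    - name: check
      image: busybox
      command: ["/bin/sh", "-c", "stat -c '%a' /test && mount | grep ' /test '"]
      volumeMounts:
      - name: scratch
        mountPath: /test
    volumes:
    - name: scratch
      emptyDir:
        medium: Memory
  EOF
  kubectl logs tmpfs-mode-check        # expect 777 and a tmpfs mount entry
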
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:29:30.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 21 14:29:30.188: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4edebcf3-c7ee-4b86-8442-deb1d1fe63fa" in namespace "projected-8523" to be "success or failure"
Dec 21 14:29:30.195: INFO: Pod "downwardapi-volume-4edebcf3-c7ee-4b86-8442-deb1d1fe63fa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.945685ms
Dec 21 14:29:32.222: INFO: Pod "downwardapi-volume-4edebcf3-c7ee-4b86-8442-deb1d1fe63fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034079935s
Dec 21 14:29:34.281: INFO: Pod "downwardapi-volume-4edebcf3-c7ee-4b86-8442-deb1d1fe63fa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09287087s
Dec 21 14:29:36.287: INFO: Pod "downwardapi-volume-4edebcf3-c7ee-4b86-8442-deb1d1fe63fa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099604615s
Dec 21 14:29:38.294: INFO: Pod "downwardapi-volume-4edebcf3-c7ee-4b86-8442-deb1d1fe63fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.106099062s
STEP: Saw pod success
Dec 21 14:29:38.294: INFO: Pod "downwardapi-volume-4edebcf3-c7ee-4b86-8442-deb1d1fe63fa" satisfied condition "success or failure"
Dec 21 14:29:38.304: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-4edebcf3-c7ee-4b86-8442-deb1d1fe63fa container client-container: 
STEP: delete the pod
Dec 21 14:29:38.383: INFO: Waiting for pod downwardapi-volume-4edebcf3-c7ee-4b86-8442-deb1d1fe63fa to disappear
Dec 21 14:29:38.435: INFO: Pod downwardapi-volume-4edebcf3-c7ee-4b86-8442-deb1d1fe63fa no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:29:38.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8523" for this suite.
Dec 21 14:29:44.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:29:44.574: INFO: namespace projected-8523 deletion completed in 6.132136793s

• [SLOW TEST:14.515 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
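
The projected downwardAPI spec above exposes limits.cpu through a volume while deliberately setting no CPU limit, so the published value falls back to the node's allocatable CPU (cpu: 4 on iruya-node, per the describe output later in this log). A minimal sketch under those assumptions (names are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # No resources.limits.cpu here: the projected value should therefore
    # default to the node's allocatable CPU.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
EOF
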
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:29:44.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 21 14:29:44.751: INFO: Waiting up to 5m0s for pod "pod-0f74bd77-630b-49fc-a724-21dc18db9fc7" in namespace "emptydir-132" to be "success or failure"
Dec 21 14:29:44.773: INFO: Pod "pod-0f74bd77-630b-49fc-a724-21dc18db9fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 22.215886ms
Dec 21 14:29:46.789: INFO: Pod "pod-0f74bd77-630b-49fc-a724-21dc18db9fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038722407s
Dec 21 14:29:49.329: INFO: Pod "pod-0f74bd77-630b-49fc-a724-21dc18db9fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.57838442s
Dec 21 14:29:51.339: INFO: Pod "pod-0f74bd77-630b-49fc-a724-21dc18db9fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.588337346s
Dec 21 14:29:53.345: INFO: Pod "pod-0f74bd77-630b-49fc-a724-21dc18db9fc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.593908559s
STEP: Saw pod success
Dec 21 14:29:53.345: INFO: Pod "pod-0f74bd77-630b-49fc-a724-21dc18db9fc7" satisfied condition "success or failure"
Dec 21 14:29:53.348: INFO: Trying to get logs from node iruya-node pod pod-0f74bd77-630b-49fc-a724-21dc18db9fc7 container test-container: 
STEP: delete the pod
Dec 21 14:29:53.550: INFO: Waiting for pod pod-0f74bd77-630b-49fc-a724-21dc18db9fc7 to disappear
Dec 21 14:29:53.558: INFO: Pod pod-0f74bd77-630b-49fc-a724-21dc18db9fc7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:29:53.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-132" for this suite.
Dec 21 14:29:59.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:29:59.740: INFO: namespace emptydir-132 deletion completed in 6.177773139s

• [SLOW TEST:15.165 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:29:59.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 21 14:30:20.778: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 21 14:30:20.809: INFO: Pod pod-with-prestop-http-hook still exists
Dec 21 14:30:22.809: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 21 14:30:22.817: INFO: Pod pod-with-prestop-http-hook still exists
Dec 21 14:30:24.810: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 21 14:30:24.830: INFO: Pod pod-with-prestop-http-hook still exists
Dec 21 14:30:26.810: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 21 14:30:26.817: INFO: Pod pod-with-prestop-http-hook still exists
Dec 21 14:30:28.810: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 21 14:30:28.818: INFO: Pod pod-with-prestop-http-hook still exists
Dec 21 14:30:30.810: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 21 14:30:30.815: INFO: Pod pod-with-prestop-http-hook still exists
Dec 21 14:30:32.810: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 21 14:30:32.863: INFO: Pod pod-with-prestop-http-hook still exists
Dec 21 14:30:34.810: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 21 14:30:34.818: INFO: Pod pod-with-prestop-http-hook still exists
Dec 21 14:30:36.810: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 21 14:30:36.814: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:30:36.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1687" for this suite.
Dec 21 14:31:00.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:31:01.118: INFO: namespace container-lifecycle-hook-1687 deletion completed in 24.277359382s

• [SLOW TEST:61.377 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
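
The hook test above first runs a separate handler pod (created in the [BeforeEach]) and points a preStop httpGet at it; deleting the hooked pod must hit the handler before the container stops. A rough sketch, with host, port, and path as placeholders for the handler pod's address:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook-demo
spec:
  containers:
  - name: main
    image: nginx
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop   # the handler records this request
          port: 8080                # placeholder
          host: 10.44.0.99          # placeholder for the handler pod IP
EOF
# Deletion triggers the preStop GET during graceful termination:
kubectl delete pod pod-with-prestop-http-hook-demo
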
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:31:01.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 21 14:31:01.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1246'
Dec 21 14:31:01.587: INFO: stderr: ""
Dec 21 14:31:01.587: INFO: stdout: "replicationcontroller/redis-master created\n"
Dec 21 14:31:01.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1246'
Dec 21 14:31:02.309: INFO: stderr: ""
Dec 21 14:31:02.309: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 21 14:31:03.319: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 14:31:03.319: INFO: Found 0 / 1
Dec 21 14:31:04.317: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 14:31:04.317: INFO: Found 0 / 1
Dec 21 14:31:05.319: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 14:31:05.319: INFO: Found 0 / 1
Dec 21 14:31:06.329: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 14:31:06.329: INFO: Found 0 / 1
Dec 21 14:31:07.319: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 14:31:07.319: INFO: Found 0 / 1
Dec 21 14:31:08.316: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 14:31:08.316: INFO: Found 0 / 1
Dec 21 14:31:09.318: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 14:31:09.318: INFO: Found 1 / 1
Dec 21 14:31:09.318: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 21 14:31:09.324: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 14:31:09.324: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 21 14:31:09.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-752q4 --namespace=kubectl-1246'
Dec 21 14:31:09.495: INFO: stderr: ""
Dec 21 14:31:09.495: INFO: stdout: "Name:           redis-master-752q4\nNamespace:      kubectl-1246\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Sat, 21 Dec 2019 14:31:01 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    \nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://e1f52ff8f147bc08fb947ebbd38554556e6dc8df771ddafabbaf50f8884f37a9\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sat, 21 Dec 2019 14:31:07 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xfzql (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-xfzql:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-xfzql\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  8s    default-scheduler    Successfully assigned kubectl-1246/redis-master-752q4 to iruya-node\n  Normal  Pulled     4s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    2s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    1s    kubelet, iruya-node  Started container redis-master\n"
Dec 21 14:31:09.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-1246'
Dec 21 14:31:09.697: INFO: stderr: ""
Dec 21 14:31:09.697: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-1246\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  8s    replication-controller  Created pod: redis-master-752q4\n"
Dec 21 14:31:09.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-1246'
Dec 21 14:31:09.839: INFO: stderr: ""
Dec 21 14:31:09.839: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-1246\nLabels:            app=redis\n                   role=master\nAnnotations:       <none>\nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.97.118.49\nPort:              <unset>  6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            <none>\n"
Dec 21 14:31:09.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Dec 21 14:31:09.980: INFO: stderr: ""
Dec 21 14:31:09.980: INFO: stdout: "Name:               iruya-node\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             <none>\nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Sat, 21 Dec 2019 14:31:00 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Sat, 21 Dec 2019 14:31:00 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Sat, 21 Dec 2019 14:31:00 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Sat, 21 Dec 2019 14:31:00 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         139d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         70d\n  kubectl-1246               redis-master-752q4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              <none>\n"
Dec 21 14:31:09.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-1246'
Dec 21 14:31:10.104: INFO: stderr: ""
Dec 21 14:31:10.105: INFO: stdout: "Name:         kubectl-1246\nLabels:       e2e-framework=kubectl\n              e2e-run=75c1bde1-df79-4ac6-8f79-c27ea85ea247\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:31:10.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1246" for this suite.
Dec 21 14:31:32.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:31:32.288: INFO: namespace kubectl-1246 deletion completed in 22.179456774s

• [SLOW TEST:31.169 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
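
The manifests piped to create -f - above are not echoed in the log; reconstructed from the describe output (labels app=redis/role=master, image gcr.io/kubernetes-e2e-test-images/redis:1.0, port 6379), the replication controller would look roughly like this (the namespace here is illustrative):

kubectl create -f - --namespace=demo <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
spec:
  replicas: 1
  selector:
    app: redis
    role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        ports:
        - containerPort: 6379
EOF
kubectl describe rc redis-master --namespace=demo
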
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:31:32.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:32:25.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4037" for this suite.
Dec 21 14:32:31.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:32:32.037: INFO: namespace container-runtime-4037 deletion completed in 6.260138932s

• [SLOW TEST:59.749 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
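
The 'terminate-cmd-rpa/rpof/rpn' containers above appear to correspond to restartPolicy Always, OnFailure, and Never; for each, the suite asserts RestartCount, Phase, the Ready condition, and State. A sketch of the Never case and the status fields being checked (names illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-demo
spec:
  restartPolicy: Never   # the 'rpn' case; the others vary only in this field
  containers:
  - name: terminate-cmd-demo
    image: busybox
    command: ["sh", "-c", "exit 1"]   # exits immediately with failure
EOF
# Inspect the status fields the assertions are about:
kubectl get pod terminate-cmd-demo -o jsonpath='{.status.phase}{"\n"}'
kubectl get pod terminate-cmd-demo \
  -o jsonpath='{.status.containerStatuses[0].restartCount} {.status.containerStatuses[0].state}{"\n"}'
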
S
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:32:32.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:33:32.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4040" for this suite.
Dec 21 14:33:58.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:33:58.357: INFO: namespace container-probe-4040 deletion completed in 26.169624876s

• [SLOW TEST:86.319 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
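
The probe spec above holds a pod for about a minute and asserts that it never becomes Ready and never restarts. A failing readiness probe reproducing that behavior, assuming busybox (names illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: never-ready-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    readinessProbe:
      exec:
        command: ["false"]   # always fails, so Ready stays False
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# Readiness failures never restart a container, so READY stays 0/1
# while RESTARTS stays 0:
kubectl get pod never-ready-demo
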
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:33:58.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 21 14:34:07.112: INFO: Successfully updated pod "pod-update-activedeadlineseconds-578537f4-5949-40ef-af18-e60246e619af"
Dec 21 14:34:07.112: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-578537f4-5949-40ef-af18-e60246e619af" in namespace "pods-1676" to be "terminated due to deadline exceeded"
Dec 21 14:34:07.118: INFO: Pod "pod-update-activedeadlineseconds-578537f4-5949-40ef-af18-e60246e619af": Phase="Running", Reason="", readiness=true. Elapsed: 6.626969ms
Dec 21 14:34:09.127: INFO: Pod "pod-update-activedeadlineseconds-578537f4-5949-40ef-af18-e60246e619af": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.01562454s
Dec 21 14:34:09.128: INFO: Pod "pod-update-activedeadlineseconds-578537f4-5949-40ef-af18-e60246e619af" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:34:09.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1676" for this suite.
Dec 21 14:34:15.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:34:15.309: INFO: namespace pods-1676 deletion completed in 6.171925418s

• [SLOW TEST:16.950 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
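
The update above shortens activeDeadlineSeconds on a running pod, after which the kubelet kills it and the pod fails with Reason="DeadlineExceeded", exactly as logged. The equivalent by hand (the pod name is a placeholder):

kubectl patch pod <pod-name> --type=merge \
  -p '{"spec":{"activeDeadlineSeconds":5}}'
# A few seconds later:
kubectl get pod <pod-name> -o jsonpath='{.status.phase}/{.status.reason}{"\n"}'
# expected: Failed/DeadlineExceeded
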
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:34:15.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Dec 21 14:34:15.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Dec 21 14:34:15.495: INFO: stderr: ""
Dec 21 14:34:15.495: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:34:15.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7992" for this suite.
Dec 21 14:34:21.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:34:21.694: INFO: namespace kubectl-7992 deletion completed in 6.192627735s

• [SLOW TEST:6.385 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
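
The whole assertion reduces to checking that the core group is advertised. With the suite's own binary and kubeconfig:

/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions | grep -x 'v1'
# grep -x matches the whole line, so group/version strings such as
# "apps/v1" cannot satisfy the check by accident.
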
SSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:34:21.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:34:29.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7504" for this suite.
Dec 21 14:35:21.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:35:22.040: INFO: namespace kubelet-test-7504 deletion completed in 52.114646522s

• [SLOW TEST:60.346 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
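
The kubelet spec above schedules a busybox command and reads its output back through the logs endpoint. A minimal reproduction (names illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo hello from the busybox pod"]
EOF
kubectl logs busybox-logs-demo   # should print the echoed line
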
SSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:35:22.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Dec 21 14:35:22.168: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:35:37.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-540" for this suite.
Dec 21 14:35:43.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:35:43.267: INFO: namespace pods-540 deletion completed in 6.168433278s

• [SLOW TEST:21.227 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
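
The submit-and-remove spec sets up a watch, verifies the creation event, deletes the pod gracefully, and waits for the deletion event. The same flow from the CLI (pod name illustrative):

kubectl get pods --watch &          # stream the add/modify/delete events
kubectl run pod-demo --image=nginx --restart=Never
kubectl delete pod pod-demo --grace-period=30
kubectl wait --for=delete pod/pod-demo --timeout=60s
kill %1                             # stop the background watch
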
SSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:35:43.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-ab1614ee-648e-453b-90ac-4c77ee0344df
STEP: Creating a pod to test consume secrets
Dec 21 14:35:43.535: INFO: Waiting up to 5m0s for pod "pod-secrets-c1942ed0-b1ac-4238-83ed-58bb637ebffc" in namespace "secrets-1805" to be "success or failure"
Dec 21 14:35:43.592: INFO: Pod "pod-secrets-c1942ed0-b1ac-4238-83ed-58bb637ebffc": Phase="Pending", Reason="", readiness=false. Elapsed: 57.377142ms
Dec 21 14:35:45.602: INFO: Pod "pod-secrets-c1942ed0-b1ac-4238-83ed-58bb637ebffc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067260306s
Dec 21 14:35:47.613: INFO: Pod "pod-secrets-c1942ed0-b1ac-4238-83ed-58bb637ebffc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077741864s
Dec 21 14:35:49.685: INFO: Pod "pod-secrets-c1942ed0-b1ac-4238-83ed-58bb637ebffc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.150467007s
Dec 21 14:35:51.696: INFO: Pod "pod-secrets-c1942ed0-b1ac-4238-83ed-58bb637ebffc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.160621355s
STEP: Saw pod success
Dec 21 14:35:51.696: INFO: Pod "pod-secrets-c1942ed0-b1ac-4238-83ed-58bb637ebffc" satisfied condition "success or failure"
Dec 21 14:35:51.700: INFO: Trying to get logs from node iruya-node pod pod-secrets-c1942ed0-b1ac-4238-83ed-58bb637ebffc container secret-env-test: 
STEP: delete the pod
Dec 21 14:35:51.858: INFO: Waiting for pod pod-secrets-c1942ed0-b1ac-4238-83ed-58bb637ebffc to disappear
Dec 21 14:35:51.876: INFO: Pod pod-secrets-c1942ed0-b1ac-4238-83ed-58bb637ebffc no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:35:51.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1805" for this suite.
Dec 21 14:35:57.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:35:58.080: INFO: namespace secrets-1805 deletion completed in 6.17185485s

• [SLOW TEST:14.813 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
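
The env-var consumption above amounts to a secretKeyRef in the container's env. A self-contained sketch (secret name, key, and value are illustrative):

kubectl create secret generic secret-demo --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-demo
          key: data-1
EOF
kubectl logs pod-secrets-demo   # should print SECRET_DATA=value-1
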
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:35:58.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Dec 21 14:35:58.173: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix657104336/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:35:58.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8746" for this suite.
Dec 21 14:36:04.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:36:04.371: INFO: namespace kubectl-8746 deletion completed in 6.134986439s

• [SLOW TEST:6.290 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
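
The proxy test serves the API over a unix socket rather than TCP and fetches /api/ through it. The same can be done by hand (the socket path is illustrative):

kubectl proxy --unix-socket=/tmp/kubectl-proxy-demo.sock &
curl --unix-socket /tmp/kubectl-proxy-demo.sock http://localhost/api/
kill %1   # stop the proxy
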
S
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:36:04.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-4433/configmap-test-1336a703-34c7-4635-a091-7ff3efb6cb94
STEP: Creating a pod to test consume configMaps
Dec 21 14:36:04.590: INFO: Waiting up to 5m0s for pod "pod-configmaps-48c8f9d9-0e92-4239-a379-8bbe835e8301" in namespace "configmap-4433" to be "success or failure"
Dec 21 14:36:04.600: INFO: Pod "pod-configmaps-48c8f9d9-0e92-4239-a379-8bbe835e8301": Phase="Pending", Reason="", readiness=false. Elapsed: 9.758354ms
Dec 21 14:36:06.618: INFO: Pod "pod-configmaps-48c8f9d9-0e92-4239-a379-8bbe835e8301": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027235968s
Dec 21 14:36:08.630: INFO: Pod "pod-configmaps-48c8f9d9-0e92-4239-a379-8bbe835e8301": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039594034s
Dec 21 14:36:10.640: INFO: Pod "pod-configmaps-48c8f9d9-0e92-4239-a379-8bbe835e8301": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049115445s
Dec 21 14:36:12.647: INFO: Pod "pod-configmaps-48c8f9d9-0e92-4239-a379-8bbe835e8301": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.056791764s
STEP: Saw pod success
Dec 21 14:36:12.647: INFO: Pod "pod-configmaps-48c8f9d9-0e92-4239-a379-8bbe835e8301" satisfied condition "success or failure"
Dec 21 14:36:12.651: INFO: Trying to get logs from node iruya-node pod pod-configmaps-48c8f9d9-0e92-4239-a379-8bbe835e8301 container env-test: 
STEP: delete the pod
Dec 21 14:36:12.729: INFO: Waiting for pod pod-configmaps-48c8f9d9-0e92-4239-a379-8bbe835e8301 to disappear
Dec 21 14:36:12.752: INFO: Pod pod-configmaps-48c8f9d9-0e92-4239-a379-8bbe835e8301 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:36:12.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4433" for this suite.
Dec 21 14:36:18.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:36:18.988: INFO: namespace configmap-4433 deletion completed in 6.227115669s

• [SLOW TEST:14.617 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
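
This mirrors the Secret env-var case sketched earlier, with a configMapKeyRef in place of the secretKeyRef; only the reference type changes:

kubectl create configmap configmap-demo --from-literal=data-1=value-1
# ...and in the pod spec, instead of secretKeyRef:
#   env:
#   - name: CONFIG_DATA
#     valueFrom:
#       configMapKeyRef:
#         name: configmap-demo
#         key: data-1
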
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:36:18.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 21 14:36:19.222: INFO: Waiting up to 5m0s for pod "downwardapi-volume-52d5c488-0bde-46a9-bd68-e5ddb6226a6b" in namespace "downward-api-172" to be "success or failure"
Dec 21 14:36:19.228: INFO: Pod "downwardapi-volume-52d5c488-0bde-46a9-bd68-e5ddb6226a6b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.448306ms
Dec 21 14:36:21.241: INFO: Pod "downwardapi-volume-52d5c488-0bde-46a9-bd68-e5ddb6226a6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019400297s
Dec 21 14:36:23.255: INFO: Pod "downwardapi-volume-52d5c488-0bde-46a9-bd68-e5ddb6226a6b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033443953s
Dec 21 14:36:25.272: INFO: Pod "downwardapi-volume-52d5c488-0bde-46a9-bd68-e5ddb6226a6b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04958457s
Dec 21 14:36:27.281: INFO: Pod "downwardapi-volume-52d5c488-0bde-46a9-bd68-e5ddb6226a6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059293784s
STEP: Saw pod success
Dec 21 14:36:27.281: INFO: Pod "downwardapi-volume-52d5c488-0bde-46a9-bd68-e5ddb6226a6b" satisfied condition "success or failure"
Dec 21 14:36:27.286: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-52d5c488-0bde-46a9-bd68-e5ddb6226a6b container client-container: 
STEP: delete the pod
Dec 21 14:36:27.399: INFO: Waiting for pod downwardapi-volume-52d5c488-0bde-46a9-bd68-e5ddb6226a6b to disappear
Dec 21 14:36:27.408: INFO: Pod downwardapi-volume-52d5c488-0bde-46a9-bd68-e5ddb6226a6b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:36:27.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-172" for this suite.
Dec 21 14:36:33.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:36:33.597: INFO: namespace downward-api-172 deletion completed in 6.181912033s

• [SLOW TEST:14.607 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
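
DefaultMode on a downwardAPI volume governs the permission bits of every projected file, and the spec above asserts that mode from inside the pod. A sketch asserting 0400 (the mode and all names are assumptions; the log does not show which mode the suite used):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -c %a /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400   # applied to each projected file
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
kubectl logs downwardapi-mode-demo   # prints "400"
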
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:36:33.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 21 14:36:33.795: INFO: Waiting up to 5m0s for pod "pod-8a958309-9072-4c8f-881c-79098b4133a0" in namespace "emptydir-6419" to be "success or failure"
Dec 21 14:36:33.809: INFO: Pod "pod-8a958309-9072-4c8f-881c-79098b4133a0": Phase="Pending", Reason="", readiness=false. Elapsed: 13.87875ms
Dec 21 14:36:35.821: INFO: Pod "pod-8a958309-9072-4c8f-881c-79098b4133a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025887371s
Dec 21 14:36:37.829: INFO: Pod "pod-8a958309-9072-4c8f-881c-79098b4133a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034295906s
Dec 21 14:36:39.861: INFO: Pod "pod-8a958309-9072-4c8f-881c-79098b4133a0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066692292s
Dec 21 14:36:41.875: INFO: Pod "pod-8a958309-9072-4c8f-881c-79098b4133a0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.080489191s
Dec 21 14:36:43.887: INFO: Pod "pod-8a958309-9072-4c8f-881c-79098b4133a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.092621912s
STEP: Saw pod success
Dec 21 14:36:43.888: INFO: Pod "pod-8a958309-9072-4c8f-881c-79098b4133a0" satisfied condition "success or failure"
Dec 21 14:36:43.895: INFO: Trying to get logs from node iruya-node pod pod-8a958309-9072-4c8f-881c-79098b4133a0 container test-container: 
STEP: delete the pod
Dec 21 14:36:44.184: INFO: Waiting for pod pod-8a958309-9072-4c8f-881c-79098b4133a0 to disappear
Dec 21 14:36:44.280: INFO: Pod pod-8a958309-9072-4c8f-881c-79098b4133a0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:36:44.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6419" for this suite.
Dec 21 14:36:50.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:36:50.479: INFO: namespace emptydir-6419 deletion completed in 6.180475729s

• [SLOW TEST:16.881 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:36:50.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Dec 21 14:36:50.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7868'
Dec 21 14:36:52.803: INFO: stderr: ""
Dec 21 14:36:52.803: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 21 14:36:52.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7868'
Dec 21 14:36:53.111: INFO: stderr: ""
Dec 21 14:36:53.111: INFO: stdout: "update-demo-nautilus-8rthx update-demo-nautilus-vtplb "
Dec 21 14:36:53.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8rthx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7868'
Dec 21 14:36:53.182: INFO: stderr: ""
Dec 21 14:36:53.183: INFO: stdout: ""
Dec 21 14:36:53.183: INFO: update-demo-nautilus-8rthx is created but not running
Dec 21 14:36:58.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7868'
Dec 21 14:36:58.297: INFO: stderr: ""
Dec 21 14:36:58.297: INFO: stdout: "update-demo-nautilus-8rthx update-demo-nautilus-vtplb "
Dec 21 14:36:58.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8rthx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7868'
Dec 21 14:37:00.021: INFO: stderr: ""
Dec 21 14:37:00.021: INFO: stdout: ""
Dec 21 14:37:00.021: INFO: update-demo-nautilus-8rthx is created but not running
Dec 21 14:37:05.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7868'
Dec 21 14:37:05.168: INFO: stderr: ""
Dec 21 14:37:05.168: INFO: stdout: "update-demo-nautilus-8rthx update-demo-nautilus-vtplb "
Dec 21 14:37:05.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8rthx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7868'
Dec 21 14:37:05.269: INFO: stderr: ""
Dec 21 14:37:05.269: INFO: stdout: "true"
Dec 21 14:37:05.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8rthx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7868'
Dec 21 14:37:05.373: INFO: stderr: ""
Dec 21 14:37:05.373: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 21 14:37:05.373: INFO: validating pod update-demo-nautilus-8rthx
Dec 21 14:37:05.389: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 21 14:37:05.389: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 21 14:37:05.389: INFO: update-demo-nautilus-8rthx is verified up and running
Dec 21 14:37:05.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vtplb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7868'
Dec 21 14:37:05.458: INFO: stderr: ""
Dec 21 14:37:05.458: INFO: stdout: "true"
Dec 21 14:37:05.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vtplb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7868'
Dec 21 14:37:05.563: INFO: stderr: ""
Dec 21 14:37:05.563: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 21 14:37:05.563: INFO: validating pod update-demo-nautilus-vtplb
Dec 21 14:37:05.572: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 21 14:37:05.572: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 21 14:37:05.572: INFO: update-demo-nautilus-vtplb is verified up and running
STEP: rolling-update to new replication controller
Dec 21 14:37:05.574: INFO: scanned /root for discovery docs: 
Dec 21 14:37:05.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-7868'
Dec 21 14:37:34.810: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 21 14:37:34.810: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 21 14:37:34.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7868'
Dec 21 14:37:34.939: INFO: stderr: ""
Dec 21 14:37:34.939: INFO: stdout: "update-demo-kitten-7mbhx update-demo-kitten-mmwhv "
Dec 21 14:37:34.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7mbhx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7868'
Dec 21 14:37:35.056: INFO: stderr: ""
Dec 21 14:37:35.056: INFO: stdout: "true"
Dec 21 14:37:35.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7mbhx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7868'
Dec 21 14:37:35.127: INFO: stderr: ""
Dec 21 14:37:35.128: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 21 14:37:35.128: INFO: validating pod update-demo-kitten-7mbhx
Dec 21 14:37:35.178: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 21 14:37:35.178: INFO: Unmarshalled JSON image field => {kitten.jpg}, expecting kitten.jpg.
Dec 21 14:37:35.178: INFO: update-demo-kitten-7mbhx is verified up and running
Dec 21 14:37:35.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-mmwhv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7868'
Dec 21 14:37:35.246: INFO: stderr: ""
Dec 21 14:37:35.246: INFO: stdout: "true"
Dec 21 14:37:35.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-mmwhv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7868'
Dec 21 14:37:35.393: INFO: stderr: ""
Dec 21 14:37:35.393: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 21 14:37:35.393: INFO: validating pod update-demo-kitten-mmwhv
Dec 21 14:37:35.417: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 21 14:37:35.417: INFO: Unmarshalled JSON image field => {kitten.jpg}, expecting kitten.jpg.
Dec 21 14:37:35.417: INFO: update-demo-kitten-mmwhv is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:37:35.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7868" for this suite.
Dec 21 14:38:16.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:38:16.312: INFO: namespace kubectl-7868 deletion completed in 40.890410007s

• [SLOW TEST:85.832 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
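
The transcript above can be replayed by hand. A minimal sketch, assuming an update-demo replication controller like the one under test already exists; pod names, the kubectl-7868 namespace, and images are specific to this run:

$ kubectl get pods -l name=update-demo -n kubectl-7868 \
    -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
$ kubectl get pod <pod-name> -n kubectl-7868 \
    -o template --template='{{range .spec.containers}}{{.image}}{{end}}'
# Replace the controller's pods one at a time with the kitten image.
# rolling-update is deprecated in favor of Deployments and "kubectl rollout",
# as the stderr captured above warns.
$ kubectl rolling-update update-demo-nautilus update-demo-kitten \
    -n kubectl-7868 --update-period=1s \
    --image=gcr.io/kubernetes-e2e-test-images/kitten:1.0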
------------------------------
SSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:38:16.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Dec 21 14:38:16.471: INFO: Pod name pod-release: Found 0 pods out of 1
Dec 21 14:38:21.479: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:38:22.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5641" for this suite.
Dec 21 14:38:30.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:38:30.657: INFO: namespace replication-controller-5641 deletion completed in 8.130314465s

• [SLOW TEST:14.344 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
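
Between the Given/When/Then STEPs the test overwrites the pod's name=pod-release label so it no longer matches the controller's selector; the controller then drops its ownerReference on the pod and creates a replacement. A rough manual equivalent (the pod-name suffix is run-specific):

$ kubectl label pod <pod-release-xxxxx> name=released --overwrite
$ kubectl get pods -l name=pod-release         # a replacement pod appears
$ kubectl get pod <pod-release-xxxxx> \
    -o jsonpath='{.metadata.ownerReferences}'  # now empty: the pod is released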
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:38:30.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:38:44.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2727" for this suite.
Dec 21 14:38:50.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:38:50.780: INFO: namespace kubelet-test-2727 deletion completed in 6.122398s

• [SLOW TEST:20.123 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have a terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
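
The assertion behind this spec: a container whose command exits nonzero ends up with state.terminated populated, including a reason. A hedged stand-alone version using a throwaway pod name:

$ kubectl run failer --image=docker.io/library/busybox:1.29 \
    --restart=Never --generator=run-pod/v1 --command -- /bin/false
$ kubectl get pod failer \
    -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
# expected: Error (how the kubelet records a nonzero exit)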
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:38:50.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 21 14:38:51.043: INFO: Waiting up to 5m0s for pod "downward-api-c8bcaa2b-145a-4c8d-859a-96acd88ed576" in namespace "downward-api-8696" to be "success or failure"
Dec 21 14:38:51.063: INFO: Pod "downward-api-c8bcaa2b-145a-4c8d-859a-96acd88ed576": Phase="Pending", Reason="", readiness=false. Elapsed: 19.039005ms
Dec 21 14:38:53.070: INFO: Pod "downward-api-c8bcaa2b-145a-4c8d-859a-96acd88ed576": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026690771s
Dec 21 14:38:55.095: INFO: Pod "downward-api-c8bcaa2b-145a-4c8d-859a-96acd88ed576": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051619286s
Dec 21 14:38:57.104: INFO: Pod "downward-api-c8bcaa2b-145a-4c8d-859a-96acd88ed576": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05999198s
Dec 21 14:38:59.114: INFO: Pod "downward-api-c8bcaa2b-145a-4c8d-859a-96acd88ed576": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070658512s
Dec 21 14:39:01.122: INFO: Pod "downward-api-c8bcaa2b-145a-4c8d-859a-96acd88ed576": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.07839081s
STEP: Saw pod success
Dec 21 14:39:01.122: INFO: Pod "downward-api-c8bcaa2b-145a-4c8d-859a-96acd88ed576" satisfied condition "success or failure"
Dec 21 14:39:01.127: INFO: Trying to get logs from node iruya-node pod downward-api-c8bcaa2b-145a-4c8d-859a-96acd88ed576 container dapi-container: 
STEP: delete the pod
Dec 21 14:39:01.192: INFO: Waiting for pod downward-api-c8bcaa2b-145a-4c8d-859a-96acd88ed576 to disappear
Dec 21 14:39:01.237: INFO: Pod downward-api-c8bcaa2b-145a-4c8d-859a-96acd88ed576 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:39:01.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8696" for this suite.
Dec 21 14:39:07.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:39:07.494: INFO: namespace downward-api-8696 deletion completed in 6.161874498s

• [SLOW TEST:16.713 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
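
The dapi-container above reads the node's IP through the downward API. A minimal manifest of the same shape; the pod name and echo command are illustrative, not the test's actual spec:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-host-ip
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
EOF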
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:39:07.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 21 14:39:07.632: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 21 14:39:07.641: INFO: Waiting for terminating namespaces to be deleted...
Dec 21 14:39:07.645: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Dec 21 14:39:07.658: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 21 14:39:07.658: INFO: 	Container weave ready: true, restart count 0
Dec 21 14:39:07.658: INFO: 	Container weave-npc ready: true, restart count 0
Dec 21 14:39:07.658: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Dec 21 14:39:07.658: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 21 14:39:07.658: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Dec 21 14:39:07.675: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Dec 21 14:39:07.675: INFO: 	Container kube-controller-manager ready: true, restart count 10
Dec 21 14:39:07.675: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Dec 21 14:39:07.675: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 21 14:39:07.675: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Dec 21 14:39:07.675: INFO: 	Container kube-apiserver ready: true, restart count 0
Dec 21 14:39:07.675: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Dec 21 14:39:07.675: INFO: 	Container kube-scheduler ready: true, restart count 7
Dec 21 14:39:07.675: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 21 14:39:07.675: INFO: 	Container coredns ready: true, restart count 0
Dec 21 14:39:07.675: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Dec 21 14:39:07.675: INFO: 	Container etcd ready: true, restart count 0
Dec 21 14:39:07.675: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 21 14:39:07.675: INFO: 	Container weave ready: true, restart count 0
Dec 21 14:39:07.675: INFO: 	Container weave-npc ready: true, restart count 0
Dec 21 14:39:07.675: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 21 14:39:07.675: INFO: 	Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Dec 21 14:39:07.875: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Dec 21 14:39:07.876: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Dec 21 14:39:07.876: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Dec 21 14:39:07.876: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Dec 21 14:39:07.876: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Dec 21 14:39:07.876: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Dec 21 14:39:07.876: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Dec 21 14:39:07.876: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Dec 21 14:39:07.876: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Dec 21 14:39:07.876: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-07fe9f52-77b6-4b9d-95f6-7d93d1237046.15e269ccd33646c1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8557/filler-pod-07fe9f52-77b6-4b9d-95f6-7d93d1237046 to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-07fe9f52-77b6-4b9d-95f6-7d93d1237046.15e269cdf8fef3c2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-07fe9f52-77b6-4b9d-95f6-7d93d1237046.15e269cedbff4be0], Reason = [Created], Message = [Created container filler-pod-07fe9f52-77b6-4b9d-95f6-7d93d1237046]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-07fe9f52-77b6-4b9d-95f6-7d93d1237046.15e269cefa5bdab2], Reason = [Started], Message = [Started container filler-pod-07fe9f52-77b6-4b9d-95f6-7d93d1237046]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b4b8e0d1-2d47-425c-a1ec-1dd581699093.15e269ccd28c5a57], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8557/filler-pod-b4b8e0d1-2d47-425c-a1ec-1dd581699093 to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b4b8e0d1-2d47-425c-a1ec-1dd581699093.15e269ce036cd627], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b4b8e0d1-2d47-425c-a1ec-1dd581699093.15e269ced87d983e], Reason = [Created], Message = [Created container filler-pod-b4b8e0d1-2d47-425c-a1ec-1dd581699093]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b4b8e0d1-2d47-425c-a1ec-1dd581699093.15e269cf028c49f2], Reason = [Started], Message = [Started container filler-pod-b4b8e0d1-2d47-425c-a1ec-1dd581699093]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e269cfa0965c40], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:39:21.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8557" for this suite.
Dec 21 14:39:29.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:39:29.255: INFO: namespace sched-pred-8557 deletion completed in 8.159978113s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:21.760 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
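
In outline: the suite fills most of each node's allocatable CPU with pause pods, then submits one more pod whose request fits nowhere, expecting the FailedScheduling event logged above. The same failure can be provoked directly; any CPU request larger than every node's free capacity will do (the --requests flag here is the old kubectl run resource shorthand):

$ kubectl run cpu-hog --image=k8s.gcr.io/pause:3.1 --restart=Never \
    --generator=run-pod/v1 --requests=cpu=10
$ kubectl describe pod cpu-hog
# Events should end with: Warning  FailedScheduling ... Insufficient cpu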
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:39:29.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Dec 21 14:39:30.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5155 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Dec 21 14:39:39.825: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Dec 21 14:39:39.825: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:39:41.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5155" for this suite.
Dec 21 14:39:47.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:39:47.999: INFO: namespace kubectl-5155 deletion completed in 6.149106482s

• [SLOW TEST:18.744 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
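
Unpacking the quoted command: --rm deletes the job once the attached session ends, and the attached stdin is what produced the "abcd1234" echoed in the stdout above. A cleaned-up rerun of the same idea:

$ echo abcd1234 | kubectl run e2e-test-rm-busybox-job \
    --image=docker.io/library/busybox:1.29 --rm --restart=OnFailure \
    --generator=job/v1 --attach --stdin -- sh -c 'cat && echo stdin closed'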
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:39:47.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-9d74fa84-62bc-474c-a164-3f940a9de89a
STEP: Creating a pod to test consume secrets
Dec 21 14:39:48.299: INFO: Waiting up to 5m0s for pod "pod-secrets-eccd2680-adf9-4b62-a08a-cec7331d3d31" in namespace "secrets-2175" to be "success or failure"
Dec 21 14:39:48.311: INFO: Pod "pod-secrets-eccd2680-adf9-4b62-a08a-cec7331d3d31": Phase="Pending", Reason="", readiness=false. Elapsed: 11.689932ms
Dec 21 14:39:50.319: INFO: Pod "pod-secrets-eccd2680-adf9-4b62-a08a-cec7331d3d31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019406534s
Dec 21 14:39:52.335: INFO: Pod "pod-secrets-eccd2680-adf9-4b62-a08a-cec7331d3d31": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035697548s
Dec 21 14:39:54.345: INFO: Pod "pod-secrets-eccd2680-adf9-4b62-a08a-cec7331d3d31": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046066593s
Dec 21 14:39:56.355: INFO: Pod "pod-secrets-eccd2680-adf9-4b62-a08a-cec7331d3d31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05535418s
STEP: Saw pod success
Dec 21 14:39:56.355: INFO: Pod "pod-secrets-eccd2680-adf9-4b62-a08a-cec7331d3d31" satisfied condition "success or failure"
Dec 21 14:39:56.357: INFO: Trying to get logs from node iruya-node pod pod-secrets-eccd2680-adf9-4b62-a08a-cec7331d3d31 container secret-volume-test: 
STEP: delete the pod
Dec 21 14:39:56.427: INFO: Waiting for pod pod-secrets-eccd2680-adf9-4b62-a08a-cec7331d3d31 to disappear
Dec 21 14:39:56.435: INFO: Pod pod-secrets-eccd2680-adf9-4b62-a08a-cec7331d3d31 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:39:56.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2175" for this suite.
Dec 21 14:40:02.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:40:02.573: INFO: namespace secrets-2175 deletion completed in 6.132706478s
STEP: Destroying namespace "secret-namespace-8900" for this suite.
Dec 21 14:40:08.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:40:08.705: INFO: namespace secret-namespace-8900 deletion completed in 6.13192698s

• [SLOW TEST:20.706 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
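
The extra secret-namespace-8900 namespace holds a second secret with the same name; the test verifies the pod still mounts the one from its own namespace. Secret volume sources name only a secret, never a namespace, so resolution is always local. Sketched with illustrative namespace names:

$ kubectl create secret generic same-name --from-literal=data=value-a -n ns-a
$ kubectl create secret generic same-name --from-literal=data=value-b -n ns-b
# A pod in ns-a mounting secretName: same-name sees value-a, regardless of ns-b.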
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:40:08.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5119
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 21 14:40:08.816: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 21 14:40:47.153: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5119 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 21 14:40:47.154: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 14:40:48.764: INFO: Found all expected endpoints: [netserver-0]
Dec 21 14:40:48.774: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5119 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 21 14:40:48.774: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 14:40:50.075: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:40:50.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5119" for this suite.
Dec 21 14:41:14.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:41:14.700: INFO: namespace pod-network-test-5119 deletion completed in 24.138527422s

• [SLOW TEST:65.995 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
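
The ExecWithOptions lines are the whole mechanism: from a host-network helper pod, netcat sends "hostName" over UDP to each netserver pod IP on port 8081 and expects a non-empty reply. By hand, with a pod IP from this run (IPs vary per run):

$ kubectl exec host-test-container-pod -n pod-network-test-5119 -- \
    sh -c 'echo hostName | nc -w 1 -u 10.32.0.4 8081'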
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:41:14.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 21 14:41:14.882: INFO: Waiting up to 5m0s for pod "downwardapi-volume-770cca06-9dc8-40ca-a61b-8518b3489bd0" in namespace "projected-5864" to be "success or failure"
Dec 21 14:41:14.891: INFO: Pod "downwardapi-volume-770cca06-9dc8-40ca-a61b-8518b3489bd0": Phase="Pending", Reason="", readiness=false. Elapsed: 9.116144ms
Dec 21 14:41:16.907: INFO: Pod "downwardapi-volume-770cca06-9dc8-40ca-a61b-8518b3489bd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024666326s
Dec 21 14:41:18.914: INFO: Pod "downwardapi-volume-770cca06-9dc8-40ca-a61b-8518b3489bd0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031759577s
Dec 21 14:41:20.928: INFO: Pod "downwardapi-volume-770cca06-9dc8-40ca-a61b-8518b3489bd0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046153061s
Dec 21 14:41:22.934: INFO: Pod "downwardapi-volume-770cca06-9dc8-40ca-a61b-8518b3489bd0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051751636s
STEP: Saw pod success
Dec 21 14:41:22.934: INFO: Pod "downwardapi-volume-770cca06-9dc8-40ca-a61b-8518b3489bd0" satisfied condition "success or failure"
Dec 21 14:41:22.936: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-770cca06-9dc8-40ca-a61b-8518b3489bd0 container client-container: 
STEP: delete the pod
Dec 21 14:41:22.998: INFO: Waiting for pod downwardapi-volume-770cca06-9dc8-40ca-a61b-8518b3489bd0 to disappear
Dec 21 14:41:23.005: INFO: Pod downwardapi-volume-770cca06-9dc8-40ca-a61b-8518b3489bd0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:41:23.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5864" for this suite.
Dec 21 14:41:29.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:41:29.135: INFO: namespace projected-5864 deletion completed in 6.121836472s

• [SLOW TEST:14.435 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
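
Here the client-container reads its own memory limit from a projected downwardAPI volume. A sketch of a manifest with that shape; the name, 64Mi limit, and mount path are illustrative:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-mem-limit
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF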
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:41:29.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-43d4e531-e506-476f-9b85-15c73479b65d
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:41:39.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5833" for this suite.
Dec 21 14:42:01.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:42:01.565: INFO: namespace configmap-5833 deletion completed in 22.179569731s

• [SLOW TEST:32.430 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
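
binaryData is the ConfigMap field under test: base64-encoded bytes that mount as files alongside plain-text data keys. kubectl routes non-UTF-8 file content there automatically; a quick illustration with made-up names:

$ printf '\xff\xfe\x00\x01' > payload.bin
$ kubectl create configmap binmap \
    --from-file=payload.bin --from-literal=note=hello
$ kubectl get configmap binmap -o yaml   # payload.bin appears under binaryData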
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:42:01.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Dec 21 14:42:01.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6248'
Dec 21 14:42:01.985: INFO: stderr: ""
Dec 21 14:42:01.985: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Dec 21 14:42:02.996: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 14:42:02.996: INFO: Found 0 / 1
Dec 21 14:42:04.023: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 14:42:04.023: INFO: Found 0 / 1
Dec 21 14:42:04.998: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 14:42:04.998: INFO: Found 0 / 1
Dec 21 14:42:05.992: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 14:42:05.992: INFO: Found 0 / 1
Dec 21 14:42:06.993: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 14:42:06.993: INFO: Found 0 / 1
Dec 21 14:42:07.997: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 14:42:07.997: INFO: Found 0 / 1
Dec 21 14:42:08.992: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 14:42:08.992: INFO: Found 0 / 1
Dec 21 14:42:09.993: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 14:42:09.993: INFO: Found 1 / 1
Dec 21 14:42:09.993: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 21 14:42:09.998: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 14:42:09.998: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Dec 21 14:42:09.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2b6vc redis-master --namespace=kubectl-6248'
Dec 21 14:42:10.346: INFO: stderr: ""
Dec 21 14:42:10.346: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 21 Dec 14:42:08.742 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 21 Dec 14:42:08.743 # Server started, Redis version 3.2.12\n1:M 21 Dec 14:42:08.743 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 21 Dec 14:42:08.743 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Dec 21 14:42:10.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2b6vc redis-master --namespace=kubectl-6248 --tail=1'
Dec 21 14:42:10.518: INFO: stderr: ""
Dec 21 14:42:10.518: INFO: stdout: "1:M 21 Dec 14:42:08.743 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Dec 21 14:42:10.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2b6vc redis-master --namespace=kubectl-6248 --limit-bytes=1'
Dec 21 14:42:10.694: INFO: stderr: ""
Dec 21 14:42:10.694: INFO: stdout: " "
STEP: exposing timestamps
Dec 21 14:42:10.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2b6vc redis-master --namespace=kubectl-6248 --tail=1 --timestamps'
Dec 21 14:42:10.803: INFO: stderr: ""
Dec 21 14:42:10.803: INFO: stdout: "2019-12-21T14:42:08.744008262Z 1:M 21 Dec 14:42:08.743 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Dec 21 14:42:13.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2b6vc redis-master --namespace=kubectl-6248 --since=1s'
Dec 21 14:42:13.519: INFO: stderr: ""
Dec 21 14:42:13.519: INFO: stdout: ""
Dec 21 14:42:13.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2b6vc redis-master --namespace=kubectl-6248 --since=24h'
Dec 21 14:42:13.625: INFO: stderr: ""
Dec 21 14:42:13.625: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 21 Dec 14:42:08.742 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 21 Dec 14:42:08.743 # Server started, Redis version 3.2.12\n1:M 21 Dec 14:42:08.743 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 21 Dec 14:42:08.743 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Dec 21 14:42:13.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6248'
Dec 21 14:42:13.715: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 21 14:42:13.715: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Dec 21 14:42:13.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-6248'
Dec 21 14:42:13.841: INFO: stderr: "No resources found.\n"
Dec 21 14:42:13.841: INFO: stdout: ""
Dec 21 14:42:13.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-6248 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 21 14:42:14.053: INFO: stderr: ""
Dec 21 14:42:14.054: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:42:14.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6248" for this suite.
Dec 21 14:42:20.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:42:20.182: INFO: namespace kubectl-6248 deletion completed in 6.119765004s

• [SLOW TEST:18.616 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
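
Each STEP above maps to one kubectl logs flag. With the pod name from this run (and --namespace=kubectl-6248 implied throughout):

$ kubectl logs redis-master-2b6vc redis-master --tail=1        # last line only
$ kubectl logs redis-master-2b6vc redis-master --limit-bytes=1 # a single byte
$ kubectl logs redis-master-2b6vc redis-master --tail=1 --timestamps
$ kubectl logs redis-master-2b6vc redis-master --since=1s      # empty if quiet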
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:42:20.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 21 14:42:20.246: INFO: PodSpec: initContainers in spec.initContainers
Dec 21 14:43:21.738: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-84aaeb86-6221-4471-b240-6918418bcab2", GenerateName:"", Namespace:"init-container-7997", SelfLink:"/api/v1/namespaces/init-container-7997/pods/pod-init-84aaeb86-6221-4471-b240-6918418bcab2", UID:"3a888502-eb95-4302-9831-bf4e9c9d358f", ResourceVersion:"17526366", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712536140, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"246778052"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-fdlkv", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002c86240), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-fdlkv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-fdlkv", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-fdlkv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002eee298), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0029e4000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002eee330)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002eee350)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002eee358), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002eee35c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712536140, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, 
v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712536140, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712536140, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712536140, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc002d82dc0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001ab4150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001ab41c0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://93eef41fef75e5a407c370a0b2eda3b9c34faabced7baf594a18c0e9978299c9"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002d82fe0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002d82f00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:43:21.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7997" for this suite.
Dec 21 14:43:43.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:43:44.003: INFO: namespace init-container-7997 deletion completed in 22.17146038s

• [SLOW TEST:83.821 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
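
Buried in that pod dump is a simple spec: init1 runs /bin/false under restartPolicy Always, so it restarts with backoff forever and neither init2 nor the app container run1 ever starts. A readable reconstruction of the same shape, with the dump's resource limits omitted:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-fail
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: busybox:1.29
    command: ["/bin/false"]      # fails every attempt, backs off, retries
  - name: init2
    image: busybox:1.29
    command: ["/bin/true"]       # never reached
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1  # never started
EOF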
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:43:44.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W1221 14:43:54.139160       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 21 14:43:54.139: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:43:54.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8489" for this suite.
Dec 21 14:44:00.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:44:00.372: INFO: namespace gc-8489 deletion completed in 6.229806751s

• [SLOW TEST:16.368 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
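
A sketch of the flow above, with hypothetical names (the suite creates the ReplicationController through the API rather than kubectl): pods created by an RC carry an ownerReference to it, so deleting the RC without orphaning lets the garbage collector remove the pods, which is what "wait for all pods to be garbage collected" polls for.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: gc-demo-rc                    # illustrative name
spec:
  replicas: 2
  selector:
    app: gc-demo
  template:
    metadata:
      labels:
        app: gc-demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
# Non-orphaning cascade: the GC removes the pods once the RC is gone.
kubectl delete rc gc-demo-rc --cascade=true
# Orphaning instead would leave the pods behind with ownerReferences cleared:
# kubectl delete rc gc-demo-rc --cascade=false
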
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:44:00.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 21 14:44:00.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1237'
Dec 21 14:44:00.688: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 21 14:44:00.688: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Dec 21 14:44:02.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-1237'
Dec 21 14:44:02.968: INFO: stderr: ""
Dec 21 14:44:02.968: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:44:02.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1237" for this suite.
Dec 21 14:44:09.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:44:09.136: INFO: namespace kubectl-1237 deletion completed in 6.157101322s

• [SLOW TEST:8.764 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
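
The deprecation warning above is the point of this test: with no --generator and no --restart flag, kubectl 1.15 falls back to the deployment/apps.v1 generator. Roughly the object that generator produces (the run=<name> label is the generator's convention and the replica count defaults to 1; this sketch is an approximation, not the generator's exact output):

kubectl create -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
  labels:
    run: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
EOF
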
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:44:09.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-abb475dd-8cb0-464b-8d61-f9558442b6ec
STEP: Creating a pod to test consume configMaps
Dec 21 14:44:09.204: INFO: Waiting up to 5m0s for pod "pod-configmaps-a9cb0d1c-8313-426f-a23d-f891fe010882" in namespace "configmap-301" to be "success or failure"
Dec 21 14:44:09.212: INFO: Pod "pod-configmaps-a9cb0d1c-8313-426f-a23d-f891fe010882": Phase="Pending", Reason="", readiness=false. Elapsed: 7.234927ms
Dec 21 14:44:11.218: INFO: Pod "pod-configmaps-a9cb0d1c-8313-426f-a23d-f891fe010882": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013649376s
Dec 21 14:44:13.229: INFO: Pod "pod-configmaps-a9cb0d1c-8313-426f-a23d-f891fe010882": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025043473s
Dec 21 14:44:15.236: INFO: Pod "pod-configmaps-a9cb0d1c-8313-426f-a23d-f891fe010882": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032089385s
Dec 21 14:44:17.243: INFO: Pod "pod-configmaps-a9cb0d1c-8313-426f-a23d-f891fe010882": Phase="Pending", Reason="", readiness=false. Elapsed: 8.03854537s
Dec 21 14:44:19.254: INFO: Pod "pod-configmaps-a9cb0d1c-8313-426f-a23d-f891fe010882": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.049915102s
STEP: Saw pod success
Dec 21 14:44:19.254: INFO: Pod "pod-configmaps-a9cb0d1c-8313-426f-a23d-f891fe010882" satisfied condition "success or failure"
Dec 21 14:44:19.260: INFO: Trying to get logs from node iruya-node pod pod-configmaps-a9cb0d1c-8313-426f-a23d-f891fe010882 container configmap-volume-test: 
STEP: delete the pod
Dec 21 14:44:19.530: INFO: Waiting for pod pod-configmaps-a9cb0d1c-8313-426f-a23d-f891fe010882 to disappear
Dec 21 14:44:19.554: INFO: Pod pod-configmaps-a9cb0d1c-8313-426f-a23d-f891fe010882 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:44:19.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-301" for this suite.
Dec 21 14:44:25.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:44:25.818: INFO: namespace configmap-301 deletion completed in 6.231383338s

• [SLOW TEST:16.682 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
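
A minimal sketch of the pattern this test verifies (names are illustrative, and busybox stands in for the suite's own mounttest image): each key of the ConfigMap appears as a file under the volume's mount path, and the pod runs to completion, which is what the "success or failure" condition above means.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-volume-demo         # illustrative
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29               # stand-in for the e2e mounttest image
    command: ['sh', '-c', 'cat /etc/configmap-volume/data-1']
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-volume-demo
EOF
# The pod reaches phase Succeeded and its container log prints: value-1
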
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:44:25.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Dec 21 14:44:25.928: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Dec 21 14:44:25.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8951'
Dec 21 14:44:26.369: INFO: stderr: ""
Dec 21 14:44:26.370: INFO: stdout: "service/redis-slave created\n"
Dec 21 14:44:26.370: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Dec 21 14:44:26.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8951'
Dec 21 14:44:26.863: INFO: stderr: ""
Dec 21 14:44:26.863: INFO: stdout: "service/redis-master created\n"
Dec 21 14:44:26.864: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Dec 21 14:44:26.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8951'
Dec 21 14:44:27.195: INFO: stderr: ""
Dec 21 14:44:27.195: INFO: stdout: "service/frontend created\n"
Dec 21 14:44:27.195: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Dec 21 14:44:27.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8951'
Dec 21 14:44:27.752: INFO: stderr: ""
Dec 21 14:44:27.752: INFO: stdout: "deployment.apps/frontend created\n"
Dec 21 14:44:27.753: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Dec 21 14:44:27.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8951'
Dec 21 14:44:28.508: INFO: stderr: ""
Dec 21 14:44:28.508: INFO: stdout: "deployment.apps/redis-master created\n"
Dec 21 14:44:28.508: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Dec 21 14:44:28.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8951'
Dec 21 14:44:29.808: INFO: stderr: ""
Dec 21 14:44:29.808: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Dec 21 14:44:29.808: INFO: Waiting for all frontend pods to be Running.
Dec 21 14:44:54.860: INFO: Waiting for frontend to serve content.
Dec 21 14:44:54.943: INFO: Trying to add a new entry to the guestbook.
Dec 21 14:44:54.991: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Dec 21 14:44:55.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8951'
Dec 21 14:44:55.246: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 21 14:44:55.246: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Dec 21 14:44:55.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8951'
Dec 21 14:44:55.460: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 21 14:44:55.461: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 21 14:44:55.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8951'
Dec 21 14:44:55.867: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 21 14:44:55.867: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 21 14:44:55.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8951'
Dec 21 14:44:56.047: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 21 14:44:56.047: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 21 14:44:56.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8951'
Dec 21 14:44:56.147: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 21 14:44:56.147: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 21 14:44:56.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8951'
Dec 21 14:44:56.264: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 21 14:44:56.264: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:44:56.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8951" for this suite.
Dec 21 14:45:36.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:45:36.570: INFO: namespace kubectl-8951 deletion completed in 40.272122787s

• [SLOW TEST:70.749 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
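
One detail worth spelling out from the manifests above: with GET_HOSTS_FROM=dns the frontend and slaves resolve redis-master and redis-slave through cluster DNS. The commented-out value: env path instead relies on the service-link environment variables the kubelet injects into pods started after the services exist. A hedged way to inspect them (the ClusterIP below is illustrative):

# Pick one frontend pod and list the injected redis-master service variables.
FRONTEND_POD=$(kubectl get pods -l app=guestbook,tier=frontend \
  -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$FRONTEND_POD" -- env | grep REDIS_MASTER
# REDIS_MASTER_SERVICE_HOST=10.99.12.34   # illustrative ClusterIP
# REDIS_MASTER_SERVICE_PORT=6379

This only works because the test creates the services before the deployments, which is also the ordering visible in the log above.
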
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:45:36.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W1221 14:45:37.835171       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 21 14:45:37.835: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:45:37.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5976" for this suite.
Dec 21 14:45:43.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:45:44.025: INFO: namespace gc-5976 deletion completed in 6.182712147s

• [SLOW TEST:7.456 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
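
Here the ownership chain is one level deeper than in the RC test: Deployment -> ReplicaSet -> Pods, all linked by ownerReferences, and the two "expected 0 ... got ..." STEPs above are just the poll observing state before the collector catches up. An illustrative way to see the chain (names hypothetical; kubectl create deployment labels with app=<name>):

kubectl create deployment gc-demo --image=docker.io/library/nginx:1.14-alpine
# The generated ReplicaSet points back at the Deployment:
kubectl get rs -l app=gc-demo \
  -o jsonpath='{.items[0].metadata.ownerReferences}'
# [... kind:Deployment name:gc-demo controller:true blockOwnerDeletion:true ...]
kubectl delete deployment gc-demo --cascade=true   # RS and pods are collected too
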
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:45:44.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-4742b8bb-0bbd-4aac-af42-05d1e9a24160
STEP: Creating a pod to test consume configMaps
Dec 21 14:45:44.153: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7e8d9f74-ff4b-412d-9a57-5989dd70a89d" in namespace "projected-8348" to be "success or failure"
Dec 21 14:45:44.173: INFO: Pod "pod-projected-configmaps-7e8d9f74-ff4b-412d-9a57-5989dd70a89d": Phase="Pending", Reason="", readiness=false. Elapsed: 19.938001ms
Dec 21 14:45:46.184: INFO: Pod "pod-projected-configmaps-7e8d9f74-ff4b-412d-9a57-5989dd70a89d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03127313s
Dec 21 14:45:48.196: INFO: Pod "pod-projected-configmaps-7e8d9f74-ff4b-412d-9a57-5989dd70a89d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043348069s
Dec 21 14:45:50.237: INFO: Pod "pod-projected-configmaps-7e8d9f74-ff4b-412d-9a57-5989dd70a89d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084664436s
Dec 21 14:45:52.247: INFO: Pod "pod-projected-configmaps-7e8d9f74-ff4b-412d-9a57-5989dd70a89d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.094273324s
Dec 21 14:45:54.257: INFO: Pod "pod-projected-configmaps-7e8d9f74-ff4b-412d-9a57-5989dd70a89d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.10388437s
STEP: Saw pod success
Dec 21 14:45:54.257: INFO: Pod "pod-projected-configmaps-7e8d9f74-ff4b-412d-9a57-5989dd70a89d" satisfied condition "success or failure"
Dec 21 14:45:54.261: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-7e8d9f74-ff4b-412d-9a57-5989dd70a89d container projected-configmap-volume-test: 
STEP: delete the pod
Dec 21 14:45:54.332: INFO: Waiting for pod pod-projected-configmaps-7e8d9f74-ff4b-412d-9a57-5989dd70a89d to disappear
Dec 21 14:45:54.343: INFO: Pod pod-projected-configmaps-7e8d9f74-ff4b-412d-9a57-5989dd70a89d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:45:54.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8348" for this suite.
Dec 21 14:46:00.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:46:00.590: INFO: namespace projected-8348 deletion completed in 6.239217084s

• [SLOW TEST:16.564 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
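
The "with mappings" variant differs from the plain ConfigMap-volume test earlier in one respect: an items list remaps a key to an arbitrary path inside the mount, so only the mapped keys are exposed, each at its chosen path. A sketch with illustrative names (busybox again standing in for the suite's mounttest image):

kubectl create configmap projected-cm-demo --from-literal=data-2=value-2
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-cm-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ['sh', '-c', 'cat /etc/projected-configmap-volume/path/to/data-2']
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
          items:
          - key: data-2
            path: path/to/data-2      # the key appears only at this remapped path
EOF
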
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:46:00.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5932.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5932.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 21 14:46:12.790: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-5932/dns-test-65df0a0d-032a-4fc4-97c2-3ad7136045a0: the server could not find the requested resource (get pods dns-test-65df0a0d-032a-4fc4-97c2-3ad7136045a0)
Dec 21 14:46:12.798: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-5932/dns-test-65df0a0d-032a-4fc4-97c2-3ad7136045a0: the server could not find the requested resource (get pods dns-test-65df0a0d-032a-4fc4-97c2-3ad7136045a0)
Dec 21 14:46:12.805: INFO: Unable to read wheezy_udp@PodARecord from pod dns-5932/dns-test-65df0a0d-032a-4fc4-97c2-3ad7136045a0: the server could not find the requested resource (get pods dns-test-65df0a0d-032a-4fc4-97c2-3ad7136045a0)
Dec 21 14:46:12.813: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-5932/dns-test-65df0a0d-032a-4fc4-97c2-3ad7136045a0: the server could not find the requested resource (get pods dns-test-65df0a0d-032a-4fc4-97c2-3ad7136045a0)
Dec 21 14:46:12.819: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-5932/dns-test-65df0a0d-032a-4fc4-97c2-3ad7136045a0: the server could not find the requested resource (get pods dns-test-65df0a0d-032a-4fc4-97c2-3ad7136045a0)
Dec 21 14:46:12.828: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-5932/dns-test-65df0a0d-032a-4fc4-97c2-3ad7136045a0: the server could not find the requested resource (get pods dns-test-65df0a0d-032a-4fc4-97c2-3ad7136045a0)
Dec 21 14:46:12.838: INFO: Unable to read jessie_udp@PodARecord from pod dns-5932/dns-test-65df0a0d-032a-4fc4-97c2-3ad7136045a0: the server could not find the requested resource (get pods dns-test-65df0a0d-032a-4fc4-97c2-3ad7136045a0)
Dec 21 14:46:12.845: INFO: Unable to read jessie_tcp@PodARecord from pod dns-5932/dns-test-65df0a0d-032a-4fc4-97c2-3ad7136045a0: the server could not find the requested resource (get pods dns-test-65df0a0d-032a-4fc4-97c2-3ad7136045a0)
Dec 21 14:46:12.845: INFO: Lookups using dns-5932/dns-test-65df0a0d-032a-4fc4-97c2-3ad7136045a0 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 21 14:46:18.034: INFO: DNS probes using dns-5932/dns-test-65df0a0d-032a-4fc4-97c2-3ad7136045a0 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:46:18.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5932" for this suite.
Dec 21 14:46:24.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:46:24.309: INFO: namespace dns-5932 deletion completed in 6.138978506s

• [SLOW TEST:23.719 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
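
To unpack the probe commands above: two prober containers (wheezy and jessie userlands) loop dig over UDP and TCP against the service name and against the pod's own A record (the awk turns the pod IP into the dashed form, e.g. 10-44-0-1.dns-5932.pod.cluster.local), writing OK marker files into a shared volume that a webserver sidecar serves back to the test; the "Unable to read" lines are simply polls made before the first results land. A condensed single-prober sketch (pod name and image tag are assumptions, not taken from this run):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dns-probe-demo                # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: querier
    image: gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0   # assumed tag
    command: ['sh', '-c', 'check="$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$check" && echo OK > /results/udp@kubernetes.default.svc.cluster.local; sleep 600']
    volumeMounts:
    - name: results
      mountPath: /results
  volumes:
  - name: results
    emptyDir: {}                      # the real pod shares this with a webserver
EOF
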
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:46:24.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Dec 21 14:46:24.408: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:46:24.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6010" for this suite.
Dec 21 14:46:30.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:46:30.731: INFO: namespace kubectl-6010 deletion completed in 6.17296747s

• [SLOW TEST:6.422 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
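
For reference, what the test drives (flag semantics are standard kubectl; the port number below is illustrative): --port=0 asks the proxy to bind any free port, the chosen address is printed on stdout, and that is what the test scrapes before curling /api/.

kubectl proxy --port=0 --disable-filter=true &
# prints e.g.: Starting to serve on 127.0.0.1:46423
curl http://127.0.0.1:46423/api/
# returns the APIVersions object, e.g. {"kind":"APIVersions","versions":["v1"],...}
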
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:46:30.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 21 14:46:30.830: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b51309db-50fc-4535-ab08-7f5a534b2271" in namespace "downward-api-2287" to be "success or failure"
Dec 21 14:46:30.901: INFO: Pod "downwardapi-volume-b51309db-50fc-4535-ab08-7f5a534b2271": Phase="Pending", Reason="", readiness=false. Elapsed: 70.831218ms
Dec 21 14:46:32.910: INFO: Pod "downwardapi-volume-b51309db-50fc-4535-ab08-7f5a534b2271": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080588676s
Dec 21 14:46:34.928: INFO: Pod "downwardapi-volume-b51309db-50fc-4535-ab08-7f5a534b2271": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098552623s
Dec 21 14:46:36.936: INFO: Pod "downwardapi-volume-b51309db-50fc-4535-ab08-7f5a534b2271": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105773349s
Dec 21 14:46:38.993: INFO: Pod "downwardapi-volume-b51309db-50fc-4535-ab08-7f5a534b2271": Phase="Pending", Reason="", readiness=false. Elapsed: 8.163480739s
Dec 21 14:46:41.000: INFO: Pod "downwardapi-volume-b51309db-50fc-4535-ab08-7f5a534b2271": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.170523625s
STEP: Saw pod success
Dec 21 14:46:41.000: INFO: Pod "downwardapi-volume-b51309db-50fc-4535-ab08-7f5a534b2271" satisfied condition "success or failure"
Dec 21 14:46:41.004: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b51309db-50fc-4535-ab08-7f5a534b2271 container client-container: 
STEP: delete the pod
Dec 21 14:46:41.214: INFO: Waiting for pod downwardapi-volume-b51309db-50fc-4535-ab08-7f5a534b2271 to disappear
Dec 21 14:46:41.228: INFO: Pod downwardapi-volume-b51309db-50fc-4535-ab08-7f5a534b2271 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:46:41.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2287" for this suite.
Dec 21 14:46:47.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:46:47.396: INFO: namespace downward-api-2287 deletion completed in 6.159683543s

• [SLOW TEST:16.664 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
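
A sketch of the plumbing under test (names illustrative; busybox stands in for the suite's mounttest image): a downwardAPI volume item with a resourceFieldRef exposes the container's memory request as a file, and with the default divisor of 1 the value is rendered in bytes (32Mi -> 33554432).

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ['sh', '-c', 'cat /etc/podinfo/memory_request']
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory   # rendered in bytes: 33554432
EOF
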
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:46:47.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-18adab29-8b40-426c-ad84-4b99b55d683c
STEP: Creating secret with name s-test-opt-upd-598b7077-2c9c-42f5-a9b2-f87169b5714b
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-18adab29-8b40-426c-ad84-4b99b55d683c
STEP: Updating secret s-test-opt-upd-598b7077-2c9c-42f5-a9b2-f87169b5714b
STEP: Creating secret with name s-test-opt-create-5096d5e0-a0ed-4110-ae47-65066a131dfd
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:48:20.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-408" for this suite.
Dec 21 14:48:42.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:48:42.769: INFO: namespace projected-408 deletion completed in 22.171844669s

• [SLOW TEST:115.372 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
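
The three secrets above map to three projected volume sources in one long-running pod: one whose source is deleted, one whose data is updated, and one created only after the pod starts. Marking each source optional: true is what lets the pod tolerate the delete and late-create cases; the kubelet then syncs the volume contents, which is the "waiting to observe update in volume" step. A condensed single-source sketch (names illustrative):

kubectl create secret generic s-test-opt-demo --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  containers:
  - name: watcher
    image: busybox:1.29
    command: ['sh', '-c', 'while true; do cat /etc/projected-secret/data-1 2>/dev/null || echo absent; sleep 5; done']
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret
      readOnly: true
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: s-test-opt-demo
          optional: true              # pod stays Running even if the secret is deleted
EOF
# Delete or recreate the secret, then re-check the mount; the kubelet propagates
# the change after its sync period (the wait observed above).
kubectl delete secret s-test-opt-demo
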
SSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:48:42.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4274
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-4274
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4274
Dec 21 14:48:42.868: INFO: Found 0 stateful pods, waiting for 1
Dec 21 14:48:52.880: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Dec 21 14:48:52.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4274 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 21 14:48:55.214: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 21 14:48:55.214: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 21 14:48:55.214: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 21 14:48:55.223: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 21 14:49:05.248: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 21 14:49:05.248: INFO: Waiting for statefulset status.replicas updated to 0
Dec 21 14:49:05.272: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Dec 21 14:49:05.272: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:48:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:48:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:48:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:48:42 +0000 UTC  }]
Dec 21 14:49:05.272: INFO: 
Dec 21 14:49:05.272: INFO: StatefulSet ss has not reached scale 3, at 1
Dec 21 14:49:07.093: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989012845s
Dec 21 14:49:08.100: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.168078325s
Dec 21 14:49:09.107: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.160504422s
Dec 21 14:49:10.115: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.153383772s
Dec 21 14:49:12.537: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.146187341s
Dec 21 14:49:13.552: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.72351884s
Dec 21 14:49:14.580: INFO: Verifying statefulset ss doesn't scale past 3 for another 708.722248ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4274
Dec 21 14:49:15.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4274 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 14:49:16.129: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 21 14:49:16.130: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 21 14:49:16.130: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 21 14:49:16.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4274 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 14:49:16.536: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Dec 21 14:49:16.536: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 21 14:49:16.536: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 21 14:49:16.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4274 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 14:49:17.077: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Dec 21 14:49:17.077: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 21 14:49:17.077: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 21 14:49:17.086: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 14:49:17.086: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 14:49:17.086: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=false
Dec 21 14:49:27.093: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 14:49:27.093: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 14:49:27.093: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Dec 21 14:49:27.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4274 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 21 14:49:27.626: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 21 14:49:27.627: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 21 14:49:27.627: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 21 14:49:27.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4274 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 21 14:49:28.101: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 21 14:49:28.101: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 21 14:49:28.101: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 21 14:49:28.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4274 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 21 14:49:28.693: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 21 14:49:28.693: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 21 14:49:28.693: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 21 14:49:28.693: INFO: Waiting for statefulset status.replicas updated to 0
Dec 21 14:49:28.702: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Dec 21 14:49:38.758: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 21 14:49:38.758: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 21 14:49:38.758: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 21 14:49:38.785: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 21 14:49:38.785: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:48:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:48:42 +0000 UTC  }]
Dec 21 14:49:38.785: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:05 +0000 UTC  }]
Dec 21 14:49:38.785: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:05 +0000 UTC  }]
Dec 21 14:49:38.785: INFO: 
Dec 21 14:49:38.785: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 21 14:49:40.507: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 21 14:49:40.507: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:48:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:48:42 +0000 UTC  }]
Dec 21 14:49:40.507: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:05 +0000 UTC  }]
Dec 21 14:49:40.507: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:05 +0000 UTC  }]
Dec 21 14:49:40.507: INFO: 
Dec 21 14:49:40.507: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 21 14:49:41.522: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 21 14:49:41.522: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:48:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:48:42 +0000 UTC  }]
Dec 21 14:49:41.522: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:05 +0000 UTC  }]
Dec 21 14:49:41.522: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:05 +0000 UTC  }]
Dec 21 14:49:41.522: INFO: 
Dec 21 14:49:41.522: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 21 14:49:42.537: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 21 14:49:42.537: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:48:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:48:42 +0000 UTC  }]
Dec 21 14:49:42.537: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:05 +0000 UTC  }]
Dec 21 14:49:42.537: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:05 +0000 UTC  }]
Dec 21 14:49:42.537: INFO: 
Dec 21 14:49:42.537: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 21 14:49:43.610: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 21 14:49:43.610: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:48:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:48:42 +0000 UTC  }]
Dec 21 14:49:43.610: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:05 +0000 UTC  }]
Dec 21 14:49:43.610: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:05 +0000 UTC  }]
Dec 21 14:49:43.610: INFO: 
Dec 21 14:49:43.610: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 21 14:49:44.622: INFO: (pod status for ss-0, ss-1 and ss-2 unchanged from the 14:49:43 poll above)
Dec 21 14:49:44.622: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 21 14:49:45.634: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 21 14:49:45.634: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:48:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:48:42 +0000 UTC  }]
Dec 21 14:49:45.634: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:05 +0000 UTC  }]
Dec 21 14:49:45.634: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:05 +0000 UTC  }]
Dec 21 14:49:45.634: INFO: 
Dec 21 14:49:45.634: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 21 14:49:46.651: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Dec 21 14:49:46.651: INFO: ss-0  iruya-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:48:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:48:42 +0000 UTC  }]
Dec 21 14:49:46.651: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 14:49:05 +0000 UTC  }]
Dec 21 14:49:46.651: INFO: 
Dec 21 14:49:46.651: INFO: StatefulSet ss has not reached scale 0, at 2
Dec 21 14:49:47.660: INFO: (ss-0 and ss-2 status unchanged from the 14:49:46 poll above)
Dec 21 14:49:47.660: INFO: StatefulSet ss has not reached scale 0, at 2
Dec 21 14:49:48.689: INFO: (ss-0 and ss-2 status unchanged)
Dec 21 14:49:48.689: INFO: StatefulSet ss has not reached scale 0, at 2
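The poll loop above is just repeated pod listing until the controller's status catches up. It can be reproduced by hand; a sketch, under the assumption that watching status.replicas is an acceptable stand-in for the harness's per-pod check (namespace and name are this run's):

# Watch the StatefulSet drain; list pods on each pass, exactly like the log above.
while [ "$(kubectl get statefulset ss -n statefulset-4274 -o jsonpath='{.status.replicas}')" != "0" ]; do
  kubectl get pods -n statefulset-4274 -o wide
  sleep 1
done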
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-4274
Dec 21 14:49:49.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4274 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 14:49:49.990: INFO: rc: 1
Dec 21 14:49:49.990: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4274 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc0024b3d70 exit status 1   true [0xc003088408 0xc003088448 0xc003088460] [0xc003088408 0xc003088448 0xc003088460] [0xc003088430 0xc003088458] [0xba6c50 0xba6c50] 0xc002679f20 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
Dec 21 14:49:59.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4274 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 14:50:00.099: INFO: rc: 1
Dec 21 14:50:00.100: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4274 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0024b3e60 exit status 1   true [0xc0030884a8 0xc0030884f0 0xc003088570] [0xc0030884a8 0xc0030884f0 0xc003088570] [0xc0030884d0 0xc003088540] [0xba6c50 0xba6c50] 0xc001678d80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 21 14:50:10 - 14:54:44: INFO: (the RunHostCmd retry above repeats every 10s, 28 more times; each attempt returns rc: 1 with stderr 'Error from server (NotFound): pods "ss-0" not found')
Dec 21 14:54:54.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4274 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 14:54:55.189: INFO: rc: 1
Dec 21 14:54:55.189: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
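Every retry from 14:50:00 on fails identically: ss-0 is already deleted, yet the harness spends its full five-minute retry budget re-running the exec. A leaner variant of the same host command that stops once the pod disappears (a sketch, not the framework's actual logic):

# Retry the exec only while the target pod still exists.
for i in $(seq 1 30); do
  kubectl get pod ss-0 -n statefulset-4274 >/dev/null 2>&1 || break   # pod gone: stop retrying
  kubectl exec -n statefulset-4274 ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true' && break
  sleep 10
done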
Dec 21 14:54:55.189: INFO: Scaling statefulset ss to 0
Dec 21 14:54:55.198: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 21 14:54:55.200: INFO: Deleting all statefulset in ns statefulset-4274
Dec 21 14:54:55.202: INFO: Scaling statefulset ss to 0
Dec 21 14:54:55.208: INFO: Waiting for statefulset status.replicas updated to 0
Dec 21 14:54:55.210: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:54:55.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4274" for this suite.
Dec 21 14:55:01.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:55:01.433: INFO: namespace statefulset-4274 deletion completed in 6.200965283s

• [SLOW TEST:378.663 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:55:01.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-6855
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-6855
STEP: Deleting pre-stop pod
Dec 21 14:55:22.743: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
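The "prestop": 1 counter above is the server pod recording one inbound call from the tester pod's preStop hook, which runs between the delete request being accepted and SIGTERM reaching the container. A minimal sketch of a pod wired the same way; pod name, port and URL are illustrative stand-ins, not the test's actual spec:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: tester
spec:
  containers:
  - name: tester
    image: busybox:1.29
    command: ["sleep", "3600"]
    lifecycle:
      preStop:    # fires after the delete is accepted, before SIGTERM
        exec:
          command: ["/bin/sh", "-c", "wget -qO- http://server:8080/prestop"]
EOF
kubectl delete pod tester   # triggers the hook; the server's /prestop counter increments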
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:55:22.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-6855" for this suite.
Dec 21 14:56:00.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:56:00.881: INFO: namespace prestop-6855 deletion completed in 38.109304809s

• [SLOW TEST:59.448 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:56:00.882: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 21 14:56:00.984: INFO: Creating ReplicaSet my-hostname-basic-e73aef26-784d-4c88-bf2f-5ac3286eefe9
Dec 21 14:56:01.003: INFO: Pod name my-hostname-basic-e73aef26-784d-4c88-bf2f-5ac3286eefe9: Found 0 pods out of 1
Dec 21 14:56:06.015: INFO: Pod name my-hostname-basic-e73aef26-784d-4c88-bf2f-5ac3286eefe9: Found 1 pods out of 1
Dec 21 14:56:06.015: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-e73aef26-784d-4c88-bf2f-5ac3286eefe9" is running
Dec 21 14:56:10.026: INFO: Pod "my-hostname-basic-e73aef26-784d-4c88-bf2f-5ac3286eefe9-k26df" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-21 14:56:01 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-21 14:56:01 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e73aef26-784d-4c88-bf2f-5ac3286eefe9]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-21 14:56:01 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e73aef26-784d-4c88-bf2f-5ac3286eefe9]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-21 14:56:01 +0000 UTC Reason: Message:}])
Dec 21 14:56:10.026: INFO: Trying to dial the pod
Dec 21 14:56:15.056: INFO: Controller my-hostname-basic-e73aef26-784d-4c88-bf2f-5ac3286eefe9: Got expected result from replica 1 [my-hostname-basic-e73aef26-784d-4c88-bf2f-5ac3286eefe9-k26df]: "my-hostname-basic-e73aef26-784d-4c88-bf2f-5ac3286eefe9-k26df", 1 of 1 required successes so far
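The dial step succeeds because each replica answers HTTP with its own pod name, so replica identity can be asserted from outside. A hand-written ReplicaSet of the same shape; names are illustrative, and the image is an assumption (any server that echoes its hostname works in place of the e2e serve-hostname image):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-hostname-basic
  template:
    metadata:
      labels:
        app: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: k8s.gcr.io/e2e-test-images/agnhost:2.21   # assumption: any image that serves its hostname
        args: ["serve-hostname"]
EOF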
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:56:15.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-8324" for this suite.
Dec 21 14:56:21.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:56:21.199: INFO: namespace replicaset-8324 deletion completed in 6.134783536s

• [SLOW TEST:20.317 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:56:21.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-9040/configmap-test-c5041544-81a1-4046-850a-dd7e3620721e
STEP: Creating a pod to test consume configMaps
Dec 21 14:56:21.315: INFO: Waiting up to 5m0s for pod "pod-configmaps-19281e30-042d-4b95-bc2b-fbf86023d46a" in namespace "configmap-9040" to be "success or failure"
Dec 21 14:56:21.325: INFO: Pod "pod-configmaps-19281e30-042d-4b95-bc2b-fbf86023d46a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.308244ms
Dec 21 14:56:23.342: INFO: Pod "pod-configmaps-19281e30-042d-4b95-bc2b-fbf86023d46a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026810091s
Dec 21 14:56:25.351: INFO: Pod "pod-configmaps-19281e30-042d-4b95-bc2b-fbf86023d46a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035685657s
Dec 21 14:56:27.361: INFO: Pod "pod-configmaps-19281e30-042d-4b95-bc2b-fbf86023d46a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045965374s
Dec 21 14:56:29.386: INFO: Pod "pod-configmaps-19281e30-042d-4b95-bc2b-fbf86023d46a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071358992s
Dec 21 14:56:31.430: INFO: Pod "pod-configmaps-19281e30-042d-4b95-bc2b-fbf86023d46a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.114831009s
STEP: Saw pod success
Dec 21 14:56:31.430: INFO: Pod "pod-configmaps-19281e30-042d-4b95-bc2b-fbf86023d46a" satisfied condition "success or failure"
Dec 21 14:56:31.436: INFO: Trying to get logs from node iruya-node pod pod-configmaps-19281e30-042d-4b95-bc2b-fbf86023d46a container env-test: 
STEP: delete the pod
Dec 21 14:56:31.538: INFO: Waiting for pod pod-configmaps-19281e30-042d-4b95-bc2b-fbf86023d46a to disappear
Dec 21 14:56:31.547: INFO: Pod pod-configmaps-19281e30-042d-4b95-bc2b-fbf86023d46a no longer exists
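What this test exercises is the configMapKeyRef path: a ConfigMap key surfaced as a container environment variable, with success or failure read off the pod phase after the container exits. A hand-rolled sketch of the same wiring, with illustrative names (the real test generates its own):

kubectl create configmap configmap-test --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29
    command: ["/bin/sh", "-c", "env | grep CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:    # resolved once, when the container starts
          name: configmap-test
          key: data-1
EOF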
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:56:31.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9040" for this suite.
Dec 21 14:56:37.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:56:37.735: INFO: namespace configmap-9040 deletion completed in 6.182040183s

• [SLOW TEST:16.536 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:56:37.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-1601
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1601 to expose endpoints map[]
Dec 21 14:56:38.042: INFO: Get endpoints failed (82.788394ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Dec 21 14:56:39.048: INFO: successfully validated that service multi-endpoint-test in namespace services-1601 exposes endpoints map[] (1.087992381s elapsed)
STEP: Creating pod pod1 in namespace services-1601
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1601 to expose endpoints map[pod1:[100]]
Dec 21 14:56:43.227: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.170618559s elapsed, will retry)
Dec 21 14:56:46.270: INFO: successfully validated that service multi-endpoint-test in namespace services-1601 exposes endpoints map[pod1:[100]] (7.213370928s elapsed)
STEP: Creating pod pod2 in namespace services-1601
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1601 to expose endpoints map[pod1:[100] pod2:[101]]
Dec 21 14:56:51.819: INFO: Unexpected endpoints: found map[ee9f07ff-eed6-4020-b4e8-0036dcae430f:[100]], expected map[pod1:[100] pod2:[101]] (5.541709761s elapsed, will retry)
Dec 21 14:56:53.852: INFO: successfully validated that service multi-endpoint-test in namespace services-1601 exposes endpoints map[pod1:[100] pod2:[101]] (7.575139676s elapsed)
STEP: Deleting pod pod1 in namespace services-1601
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1601 to expose endpoints map[pod2:[101]]
Dec 21 14:56:53.996: INFO: successfully validated that service multi-endpoint-test in namespace services-1601 exposes endpoints map[pod2:[101]] (114.115613ms elapsed)
STEP: Deleting pod pod2 in namespace services-1601
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1601 to expose endpoints map[]
Dec 21 14:56:55.069: INFO: successfully validated that service multi-endpoint-test in namespace services-1601 exposes endpoints map[] (1.053551877s elapsed)
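The map[pod1:[100] pod2:[101]] assertions above track the Endpoints object as pods backing each named port come and go. A multi-port Service of the shape this test builds; the targetPorts 100 and 101 match the endpoint maps in the log, while the selector, port numbers and labels are assumptions:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint-test    # assumption: pod1/pod2 would carry this label
  ports:
  - name: portname1
    port: 80
    targetPort: 100    # pod1's container port in the endpoint map above
  - name: portname2
    port: 81
    targetPort: 101    # pod2's container port
EOF
kubectl get endpoints multi-endpoint-test -n services-1601   # shows which pod IP backs each port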
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:56:55.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1601" for this suite.
Dec 21 14:57:17.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:57:17.401: INFO: namespace services-1601 deletion completed in 22.252968336s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:39.665 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:57:17.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
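Orphan propagation here means the delete request carries deleteOptions.propagationPolicy=Orphan, so the garbage collector strips the ReplicaSet's ownerReference instead of cascading the delete. The same request can be issued against the API directly; a sketch with a hypothetical deployment name (the namespace gc-7300 is this run's):

kubectl proxy --port=8001 &   # local, unauthenticated tunnel to the apiserver
curl -X DELETE \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}' \
  http://127.0.0.1:8001/apis/apps/v1/namespaces/gc-7300/deployments/example-deployment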
STEP: Gathering metrics
W1221 14:57:47.631632       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 21 14:57:47.631: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:57:47.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7300" for this suite.
Dec 21 14:57:53.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:57:53.760: INFO: namespace gc-7300 deletion completed in 6.122653291s

• [SLOW TEST:36.358 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:57:53.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-315d0a23-d99e-4f40-800c-1bb77baa3f79
STEP: Creating a pod to test consume secrets
Dec 21 14:57:55.347: INFO: Waiting up to 5m0s for pod "pod-secrets-dfbb0941-9c9e-4435-aefa-35235e376f7a" in namespace "secrets-1040" to be "success or failure"
Dec 21 14:57:55.355: INFO: Pod "pod-secrets-dfbb0941-9c9e-4435-aefa-35235e376f7a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.181939ms
Dec 21 14:57:57.611: INFO: Pod "pod-secrets-dfbb0941-9c9e-4435-aefa-35235e376f7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.263018001s
Dec 21 14:57:59.625: INFO: Pod "pod-secrets-dfbb0941-9c9e-4435-aefa-35235e376f7a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.277411813s
Dec 21 14:58:01.636: INFO: Pod "pod-secrets-dfbb0941-9c9e-4435-aefa-35235e376f7a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.28818028s
Dec 21 14:58:03.644: INFO: Pod "pod-secrets-dfbb0941-9c9e-4435-aefa-35235e376f7a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.296769316s
Dec 21 14:58:05.651: INFO: Pod "pod-secrets-dfbb0941-9c9e-4435-aefa-35235e376f7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.303371871s
STEP: Saw pod success
Dec 21 14:58:05.651: INFO: Pod "pod-secrets-dfbb0941-9c9e-4435-aefa-35235e376f7a" satisfied condition "success or failure"
Dec 21 14:58:05.655: INFO: Trying to get logs from node iruya-node pod pod-secrets-dfbb0941-9c9e-4435-aefa-35235e376f7a container secret-volume-test: 
STEP: delete the pod
Dec 21 14:58:05.744: INFO: Waiting for pod pod-secrets-dfbb0941-9c9e-4435-aefa-35235e376f7a to disappear
Dec 21 14:58:05.796: INFO: Pod pod-secrets-dfbb0941-9c9e-4435-aefa-35235e376f7a no longer exists
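"Multiple volumes" in this test means one Secret projected into the same pod at two mount paths, with the container reading both copies back. A minimal sketch using this run's secret name; pod name and mount paths are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - {name: secret-volume-1, mountPath: /etc/secret-volume-1, readOnly: true}
    - {name: secret-volume-2, mountPath: /etc/secret-volume-2, readOnly: true}
  volumes:    # the same Secret, mounted twice
  - name: secret-volume-1
    secret: {secretName: secret-test-315d0a23-d99e-4f40-800c-1bb77baa3f79}
  - name: secret-volume-2
    secret: {secretName: secret-test-315d0a23-d99e-4f40-800c-1bb77baa3f79}
EOF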
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:58:05.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1040" for this suite.
Dec 21 14:58:11.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:58:11.985: INFO: namespace secrets-1040 deletion completed in 6.174394287s

• [SLOW TEST:18.225 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:58:11.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 21 14:58:12.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-4695'
Dec 21 14:58:12.328: INFO: stderr: ""
Dec 21 14:58:12.328: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Dec 21 14:58:22.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-4695 -o json'
Dec 21 14:58:22.512: INFO: stderr: ""
Dec 21 14:58:22.512: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2019-12-21T14:58:12Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-4695\",\n        \"resourceVersion\": \"17528436\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-4695/pods/e2e-test-nginx-pod\",\n        \"uid\": \"ca5a7c1d-a823-48ba-b931-b900f6a9856f\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-xgbrp\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-xgbrp\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-xgbrp\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-21T14:58:12Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-21T14:58:19Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-21T14:58:19Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-21T14:58:12Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://2453a1a35e2b5ad1c65474c03a189a84634179db6d3b14daaa107fed7e65b005\",\n                
\"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2019-12-21T14:58:18Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2019-12-21T14:58:12Z\"\n    }\n}\n"
STEP: replace the image in the pod
Dec 21 14:58:22.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-4695'
Dec 21 14:58:22.781: INFO: stderr: ""
Dec 21 14:58:22.782: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Dec 21 14:58:22.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-4695'
Dec 21 14:58:29.711: INFO: stderr: ""
Dec 21 14:58:29.712: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:58:29.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4695" for this suite.
Dec 21 14:58:35.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:58:35.958: INFO: namespace kubectl-4695 deletion completed in 6.16431139s

• [SLOW TEST:23.973 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
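Note on the flow above: the test fetched the live pod as JSON, swapped the container image, and piped the result to `kubectl replace -f -`. A minimal client-go sketch of the same read-modify-write (pod name and namespace taken from the log; assumes a recent client-go, where these calls take a context argument — the v1.15-era libraries in this run did not):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a clientset from the same kubeconfig the e2e run uses.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        pods := cs.CoreV1().Pods("kubectl-4695")

        // Read-modify-write, mirroring `kubectl get -o json | edit | kubectl replace -f -`.
        pod, err := pods.Get(context.TODO(), "e2e-test-nginx-pod", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        pod.Spec.Containers[0].Image = "docker.io/library/busybox:1.29"
        if _, err := pods.Update(context.TODO(), pod, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("pod image replaced")
    }

The container image is one of the few pod-spec fields the API server allows to change in place, which is why the replace succeeds instead of being rejected as an immutable-field update.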
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:58:35.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1221 14:59:17.926998       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 21 14:59:17.927: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:59:17.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5542" for this suite.
Dec 21 14:59:32.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 14:59:33.467: INFO: namespace gc-5542 deletion completed in 15.534336569s

• [SLOW TEST:57.509 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
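The "delete options say so" step above corresponds to deleting the RC with an orphan propagation policy, so the garbage collector strips the pods' ownerReferences instead of cascading the delete. A hedged sketch (the RC name is hypothetical; the namespace comes from the log):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Orphan rather than cascade: the GC rewrites the pods' ownerReferences
        // instead of deleting them, which is what the 30-second watch verifies.
        policy := metav1.DeletePropagationOrphan
        err = cs.CoreV1().ReplicationControllers("gc-5542").Delete(
            context.TODO(), "simpletest.rc", // RC name is hypothetical
            metav1.DeleteOptions{PropagationPolicy: &policy})
        if err != nil {
            panic(err)
        }
    }

kubectl exposes the same knob: `--cascade=false` in this era, `--cascade=orphan` in later releases.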
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 14:59:33.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9469.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9469.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9469.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9469.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9469.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9469.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9469.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9469.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9469.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9469.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9469.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9469.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9469.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 215.117.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.117.215_udp@PTR;check="$$(dig +tcp +noall +answer +search 215.117.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.117.215_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9469.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9469.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9469.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9469.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9469.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9469.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9469.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9469.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9469.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9469.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9469.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9469.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9469.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 215.117.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.117.215_udp@PTR;check="$$(dig +tcp +noall +answer +search 215.117.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.117.215_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 21 14:59:53.191: INFO: Unable to read wheezy_udp@dns-test-service.dns-9469.svc.cluster.local from pod dns-9469/dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb: the server could not find the requested resource (get pods dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb)
Dec 21 14:59:53.201: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9469.svc.cluster.local from pod dns-9469/dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb: the server could not find the requested resource (get pods dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb)
Dec 21 14:59:53.208: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9469.svc.cluster.local from pod dns-9469/dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb: the server could not find the requested resource (get pods dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb)
Dec 21 14:59:53.215: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9469.svc.cluster.local from pod dns-9469/dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb: the server could not find the requested resource (get pods dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb)
Dec 21 14:59:53.220: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-9469.svc.cluster.local from pod dns-9469/dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb: the server could not find the requested resource (get pods dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb)
Dec 21 14:59:53.227: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-9469.svc.cluster.local from pod dns-9469/dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb: the server could not find the requested resource (get pods dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb)
Dec 21 14:59:53.231: INFO: Unable to read wheezy_udp@PodARecord from pod dns-9469/dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb: the server could not find the requested resource (get pods dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb)
Dec 21 14:59:53.237: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-9469/dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb: the server could not find the requested resource (get pods dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb)
Dec 21 14:59:53.242: INFO: Unable to read 10.109.117.215_udp@PTR from pod dns-9469/dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb: the server could not find the requested resource (get pods dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb)
Dec 21 14:59:53.247: INFO: Unable to read 10.109.117.215_tcp@PTR from pod dns-9469/dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb: the server could not find the requested resource (get pods dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb)
Dec 21 14:59:53.252: INFO: Unable to read jessie_udp@dns-test-service.dns-9469.svc.cluster.local from pod dns-9469/dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb: the server could not find the requested resource (get pods dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb)
Dec 21 14:59:53.259: INFO: Unable to read jessie_tcp@dns-test-service.dns-9469.svc.cluster.local from pod dns-9469/dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb: the server could not find the requested resource (get pods dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb)
Dec 21 14:59:53.269: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9469.svc.cluster.local from pod dns-9469/dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb: the server could not find the requested resource (get pods dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb)
Dec 21 14:59:53.275: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9469.svc.cluster.local from pod dns-9469/dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb: the server could not find the requested resource (get pods dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb)
Dec 21 14:59:53.282: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-9469.svc.cluster.local from pod dns-9469/dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb: the server could not find the requested resource (get pods dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb)
Dec 21 14:59:53.288: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-9469.svc.cluster.local from pod dns-9469/dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb: the server could not find the requested resource (get pods dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb)
Dec 21 14:59:53.298: INFO: Unable to read jessie_udp@PodARecord from pod dns-9469/dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb: the server could not find the requested resource (get pods dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb)
Dec 21 14:59:53.304: INFO: Unable to read jessie_tcp@PodARecord from pod dns-9469/dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb: the server could not find the requested resource (get pods dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb)
Dec 21 14:59:53.311: INFO: Unable to read 10.109.117.215_udp@PTR from pod dns-9469/dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb: the server could not find the requested resource (get pods dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb)
Dec 21 14:59:53.315: INFO: Unable to read 10.109.117.215_tcp@PTR from pod dns-9469/dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb: the server could not find the requested resource (get pods dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb)
Dec 21 14:59:53.315: INFO: Lookups using dns-9469/dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb failed for: [wheezy_udp@dns-test-service.dns-9469.svc.cluster.local wheezy_tcp@dns-test-service.dns-9469.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9469.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9469.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-9469.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-9469.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.109.117.215_udp@PTR 10.109.117.215_tcp@PTR jessie_udp@dns-test-service.dns-9469.svc.cluster.local jessie_tcp@dns-test-service.dns-9469.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9469.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9469.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-9469.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-9469.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.109.117.215_udp@PTR 10.109.117.215_tcp@PTR]

Dec 21 14:59:58.449: INFO: DNS probes using dns-9469/dns-test-54ac06af-65d7-4cf6-8446-6e15961304eb succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 14:59:58.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9469" for this suite.
Dec 21 15:00:04.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 15:00:04.812: INFO: namespace dns-9469 deletion completed in 6.174990546s

• [SLOW TEST:31.343 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
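The wheezy/jessie probe loops above drive dig over UDP and TCP for A, SRV, and PTR records until every lookup has produced an answer file. From inside any pod in the same cluster, the equivalent lookups can be made with Go's resolver; a minimal sketch (service and namespace names from the log; this only resolves in-cluster):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Service A record, as `dig ... A` does in the probe pods.
        addrs, err := net.LookupHost("dns-test-service.dns-9469.svc.cluster.local")
        fmt.Println(addrs, err)

        // SRV record for the named port: _http._tcp.<service>.<ns>.svc.<zone>,
        // per the Kubernetes DNS specification.
        _, srvs, err := net.LookupSRV("http", "tcp", "dns-test-service.dns-9469.svc.cluster.local")
        if err != nil {
            panic(err)
        }
        for _, s := range srvs {
            fmt.Printf("%s:%d\n", s.Target, s.Port)
        }
    }

The early "Unable to read ..." retries in the log are expected: the probe pod needs a few seconds to write its result files before the framework can fetch them.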
SSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 15:00:04.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 15:00:11.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6197" for this suite.
Dec 21 15:00:17.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 15:00:17.528: INFO: namespace namespaces-6197 deletion completed in 6.168988756s
STEP: Destroying namespace "nsdeletetest-9036" for this suite.
Dec 21 15:00:17.532: INFO: Namespace nsdeletetest-9036 was already deleted
STEP: Destroying namespace "nsdeletetest-2546" for this suite.
Dec 21 15:00:23.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 15:00:23.860: INFO: namespace nsdeletetest-2546 deletion completed in 6.327652667s

• [SLOW TEST:19.048 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
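What this test pins down: namespace deletion cascades, so a service created in the namespace must be gone once the namespace finalizes, even if a namespace with the same name is immediately recreated. A sketch of that check (the nsdeletetest name is taken from the log; error handling trimmed for brevity):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ns := "nsdeletetest-9036"

        // Delete the namespace and wait for the cascade to finish.
        _ = cs.CoreV1().Namespaces().Delete(context.TODO(), ns, metav1.DeleteOptions{})
        _ = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            _, err := cs.CoreV1().Namespaces().Get(context.TODO(), ns, metav1.GetOptions{})
            return apierrors.IsNotFound(err), nil
        })

        // Recreate it and verify no service survived the round trip.
        _, _ = cs.CoreV1().Namespaces().Create(context.TODO(),
            &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: ns}}, metav1.CreateOptions{})
        svcs, err := cs.CoreV1().Services(ns).List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("services after recreate:", len(svcs.Items)) // expect 0
    }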
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 15:00:23.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Dec 21 15:00:24.101: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3538,SelfLink:/api/v1/namespaces/watch-3538/configmaps/e2e-watch-test-label-changed,UID:16c4b671-4c67-46cf-b960-37b0993e00f7,ResourceVersion:17528887,Generation:0,CreationTimestamp:2019-12-21 15:00:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 21 15:00:24.101: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3538,SelfLink:/api/v1/namespaces/watch-3538/configmaps/e2e-watch-test-label-changed,UID:16c4b671-4c67-46cf-b960-37b0993e00f7,ResourceVersion:17528888,Generation:0,CreationTimestamp:2019-12-21 15:00:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 21 15:00:24.101: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3538,SelfLink:/api/v1/namespaces/watch-3538/configmaps/e2e-watch-test-label-changed,UID:16c4b671-4c67-46cf-b960-37b0993e00f7,ResourceVersion:17528889,Generation:0,CreationTimestamp:2019-12-21 15:00:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Dec 21 15:00:34.158: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3538,SelfLink:/api/v1/namespaces/watch-3538/configmaps/e2e-watch-test-label-changed,UID:16c4b671-4c67-46cf-b960-37b0993e00f7,ResourceVersion:17528904,Generation:0,CreationTimestamp:2019-12-21 15:00:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 21 15:00:34.158: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3538,SelfLink:/api/v1/namespaces/watch-3538/configmaps/e2e-watch-test-label-changed,UID:16c4b671-4c67-46cf-b960-37b0993e00f7,ResourceVersion:17528905,Generation:0,CreationTimestamp:2019-12-21 15:00:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Dec 21 15:00:34.158: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3538,SelfLink:/api/v1/namespaces/watch-3538/configmaps/e2e-watch-test-label-changed,UID:16c4b671-4c67-46cf-b960-37b0993e00f7,ResourceVersion:17528906,Generation:0,CreationTimestamp:2019-12-21 15:00:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 15:00:34.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3538" for this suite.
Dec 21 15:00:40.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 15:00:40.290: INFO: namespace watch-3538 deletion completed in 6.122160448s

• [SLOW TEST:16.430 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
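The "Got : ADDED/MODIFIED/DELETED" lines above are watch events filtered by a label selector: changing the label away from the selector surfaces as DELETED to the watcher, and restoring it surfaces as ADDED, even though the object itself was only modified. A minimal client-go watch with the same selector (namespace and label taken from the log):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Watch only configmaps carrying the label the test flips back and forth.
        w, err := cs.CoreV1().ConfigMaps("watch-3538").Watch(context.TODO(), metav1.ListOptions{
            LabelSelector: "watch-this-configmap=label-changed-and-restored",
        })
        if err != nil {
            panic(err)
        }
        for ev := range w.ResultChan() {
            fmt.Println("Got:", ev.Type) // ADDED, MODIFIED, DELETED, ...
        }
    }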
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 15:00:40.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 21 15:00:49.024: INFO: Successfully updated pod "labelsupdatee914e7d2-ebe0-4daa-a059-74d81021b122"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 15:00:51.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4061" for this suite.
Dec 21 15:01:13.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 15:01:13.265: INFO: namespace projected-4061 deletion completed in 22.143381152s

• [SLOW TEST:32.974 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
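Behind "Successfully updated pod" above: the pod mounts its own metadata.labels through a projected downwardAPI volume, and the kubelet rewrites that file when the labels change, so the update becomes visible inside the running container without a restart. A compilable sketch of the volume (the volume name is hypothetical):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // projectedLabelsVolume exposes the pod's labels as a file the kubelet
    // keeps in sync with the API object.
    func projectedLabelsVolume() corev1.Volume {
        return corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "labels",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                            }},
                        },
                    }},
                },
            },
        }
    }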
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 15:01:13.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 21 15:01:13.344: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b4b6d7e0-7099-4d99-a643-03dff1d95a0d" in namespace "projected-8029" to be "success or failure"
Dec 21 15:01:13.353: INFO: Pod "downwardapi-volume-b4b6d7e0-7099-4d99-a643-03dff1d95a0d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.292045ms
Dec 21 15:01:15.360: INFO: Pod "downwardapi-volume-b4b6d7e0-7099-4d99-a643-03dff1d95a0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015505321s
Dec 21 15:01:17.371: INFO: Pod "downwardapi-volume-b4b6d7e0-7099-4d99-a643-03dff1d95a0d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026312839s
Dec 21 15:01:19.378: INFO: Pod "downwardapi-volume-b4b6d7e0-7099-4d99-a643-03dff1d95a0d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033690816s
Dec 21 15:01:21.447: INFO: Pod "downwardapi-volume-b4b6d7e0-7099-4d99-a643-03dff1d95a0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.10229487s
STEP: Saw pod success
Dec 21 15:01:21.447: INFO: Pod "downwardapi-volume-b4b6d7e0-7099-4d99-a643-03dff1d95a0d" satisfied condition "success or failure"
Dec 21 15:01:21.454: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b4b6d7e0-7099-4d99-a643-03dff1d95a0d container client-container: 
STEP: delete the pod
Dec 21 15:01:21.517: INFO: Waiting for pod downwardapi-volume-b4b6d7e0-7099-4d99-a643-03dff1d95a0d to disappear
Dec 21 15:01:21.527: INFO: Pod downwardapi-volume-b4b6d7e0-7099-4d99-a643-03dff1d95a0d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 15:01:21.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8029" for this suite.
Dec 21 15:01:27.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 15:01:27.742: INFO: namespace projected-8029 deletion completed in 6.20927311s

• [SLOW TEST:14.477 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
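The downward API can also project container resources, which is what this test consumes: a resourceFieldRef with a divisor turns limits.cpu into a plain number in a file. A sketch of the volume file (container name from the log; a divisor of 1m yields millicores, 1 yields whole cores):

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    // cpuLimitFile projects the container's CPU limit into the volume as
    // "cpu_limit", expressed in millicores because of the 1m divisor.
    func cpuLimitFile() corev1.DownwardAPIVolumeFile {
        return corev1.DownwardAPIVolumeFile{
            Path: "cpu_limit",
            ResourceFieldRef: &corev1.ResourceFieldSelector{
                ContainerName: "client-container",
                Resource:      "limits.cpu",
                Divisor:       resource.MustParse("1m"),
            },
        }
    }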
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 15:01:27.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Dec 21 15:01:27.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Dec 21 15:01:29.750: INFO: stderr: ""
Dec 21 15:01:29.750: INFO: stdout: "Kubernetes master is running at https://172.24.4.57:6443\nKubeDNS is running at https://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 15:01:29.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4506" for this suite.
Dec 21 15:01:35.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 15:01:35.951: INFO: namespace kubectl-4506 deletion completed in 6.193185885s

• [SLOW TEST:8.209 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 15:01:35.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 21 15:01:36.179: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"3f729540-b0c2-4e66-884c-6b5541026b82", Controller:(*bool)(0xc002cac59a), BlockOwnerDeletion:(*bool)(0xc002cac59b)}}
Dec 21 15:01:36.229: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"d8579e6a-5cd4-4018-af0f-f4908e835c9c", Controller:(*bool)(0xc001d396ba), BlockOwnerDeletion:(*bool)(0xc001d396bb)}}
Dec 21 15:01:36.245: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"f1cde8ca-a154-46d5-8949-8f7586ae9d51", Controller:(*bool)(0xc002cac96a), BlockOwnerDeletion:(*bool)(0xc002cac96b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 15:01:41.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3001" for this suite.
Dec 21 15:01:47.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 15:01:47.500: INFO: namespace gc-3001 deletion completed in 6.165427707s

• [SLOW TEST:11.549 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
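The three OwnerReferences dumps above form a cycle: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2. The point of the test is that the garbage collector tolerates such a cycle rather than deadlocking on BlockOwnerDeletion. A sketch of how one member is shaped (in the real test the references are patched in after creation, since UIDs are server-assigned):

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
    )

    // ownedPod returns a pod that names another pod as its controller, the
    // building block of the pod1/pod2/pod3 ownership cycle logged above.
    func ownedPod(name, ownerName string, ownerUID types.UID) *corev1.Pod {
        controller := true
        block := true
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name: name,
                OwnerReferences: []metav1.OwnerReference{{
                    APIVersion:         "v1",
                    Kind:               "Pod",
                    Name:               ownerName,
                    UID:                ownerUID,
                    Controller:         &controller,
                    BlockOwnerDeletion: &block,
                }},
            },
            Spec: corev1.PodSpec{Containers: []corev1.Container{{
                Name:  "nginx",
                Image: "docker.io/library/nginx:1.14-alpine", // hypothetical image choice
            }}},
        }
    }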
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 15:01:47.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 15:01:47.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1080" for this suite.
Dec 21 15:01:53.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 15:01:53.906: INFO: namespace services-1080 deletion completed in 6.26270795s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.406 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
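"Secure master service" boils down to one assertion: the kubernetes service in the default namespace exists and exposes an HTTPS port 443 fronting the apiserver. A sketch of that check:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Every conformant cluster carries this service; inspect its ports.
        svc, err := cs.CoreV1().Services("default").Get(context.TODO(), "kubernetes", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range svc.Spec.Ports {
            fmt.Printf("port %s %d/%s\n", p.Name, p.Port, p.Protocol) // expect https 443/TCP
        }
    }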
SS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 15:01:53.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Dec 21 15:01:54.069: INFO: Waiting up to 5m0s for pod "client-containers-0863d032-24cb-4329-925a-653e1dc5c332" in namespace "containers-497" to be "success or failure"
Dec 21 15:01:54.077: INFO: Pod "client-containers-0863d032-24cb-4329-925a-653e1dc5c332": Phase="Pending", Reason="", readiness=false. Elapsed: 8.907285ms
Dec 21 15:01:56.084: INFO: Pod "client-containers-0863d032-24cb-4329-925a-653e1dc5c332": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015749389s
Dec 21 15:01:58.100: INFO: Pod "client-containers-0863d032-24cb-4329-925a-653e1dc5c332": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031751136s
Dec 21 15:02:00.109: INFO: Pod "client-containers-0863d032-24cb-4329-925a-653e1dc5c332": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040296822s
Dec 21 15:02:02.116: INFO: Pod "client-containers-0863d032-24cb-4329-925a-653e1dc5c332": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047201772s
Dec 21 15:02:04.126: INFO: Pod "client-containers-0863d032-24cb-4329-925a-653e1dc5c332": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.05771239s
STEP: Saw pod success
Dec 21 15:02:04.126: INFO: Pod "client-containers-0863d032-24cb-4329-925a-653e1dc5c332" satisfied condition "success or failure"
Dec 21 15:02:04.131: INFO: Trying to get logs from node iruya-node pod client-containers-0863d032-24cb-4329-925a-653e1dc5c332 container test-container: 
STEP: delete the pod
Dec 21 15:02:04.291: INFO: Waiting for pod client-containers-0863d032-24cb-4329-925a-653e1dc5c332 to disappear
Dec 21 15:02:04.379: INFO: Pod client-containers-0863d032-24cb-4329-925a-653e1dc5c332 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 15:02:04.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-497" for this suite.
Dec 21 15:02:10.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 15:02:10.575: INFO: namespace containers-497 deletion completed in 6.188599611s

• [SLOW TEST:16.669 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
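"Override all" above means setting both container fields: Command replaces the image's ENTRYPOINT and Args replaces its CMD. A sketch of the container spec (image and strings are hypothetical stand-ins for what the test pod runs):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // Setting Command and Args overrides ENTRYPOINT and CMD respectively.
    func overrideAll() corev1.Container {
        return corev1.Container{
            Name:    "test-container",
            Image:   "docker.io/library/busybox:1.29",
            Command: []string{"/bin/sh", "-c"},
            Args:    []string{"echo entrypoint and arguments overridden"},
        }
    }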
SSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 15:02:10.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Dec 21 15:02:10.758: INFO: Waiting up to 5m0s for pod "client-containers-5d879d25-f0d6-4c9d-9381-d5be9dc97d72" in namespace "containers-914" to be "success or failure"
Dec 21 15:02:10.769: INFO: Pod "client-containers-5d879d25-f0d6-4c9d-9381-d5be9dc97d72": Phase="Pending", Reason="", readiness=false. Elapsed: 10.663678ms
Dec 21 15:02:12.781: INFO: Pod "client-containers-5d879d25-f0d6-4c9d-9381-d5be9dc97d72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022937082s
Dec 21 15:02:14.791: INFO: Pod "client-containers-5d879d25-f0d6-4c9d-9381-d5be9dc97d72": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032555554s
Dec 21 15:02:16.801: INFO: Pod "client-containers-5d879d25-f0d6-4c9d-9381-d5be9dc97d72": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042636747s
Dec 21 15:02:18.806: INFO: Pod "client-containers-5d879d25-f0d6-4c9d-9381-d5be9dc97d72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048164855s
STEP: Saw pod success
Dec 21 15:02:18.806: INFO: Pod "client-containers-5d879d25-f0d6-4c9d-9381-d5be9dc97d72" satisfied condition "success or failure"
Dec 21 15:02:18.809: INFO: Trying to get logs from node iruya-node pod client-containers-5d879d25-f0d6-4c9d-9381-d5be9dc97d72 container test-container: 
STEP: delete the pod
Dec 21 15:02:18.957: INFO: Waiting for pod client-containers-5d879d25-f0d6-4c9d-9381-d5be9dc97d72 to disappear
Dec 21 15:02:18.968: INFO: Pod client-containers-5d879d25-f0d6-4c9d-9381-d5be9dc97d72 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 15:02:18.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-914" for this suite.
Dec 21 15:02:25.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 15:02:25.096: INFO: namespace containers-914 deletion completed in 6.121492786s

• [SLOW TEST:14.521 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
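The companion test just above overrides only the arguments: leaving Command unset keeps the image's ENTRYPOINT, and Args replaces just its CMD, the "docker cmd" in the test name. Sketch, under the same hypothetical image:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // Only Args is set: the image's ENTRYPOINT still runs, with these arguments.
    func overrideArgsOnly() corev1.Container {
        return corev1.Container{
            Name:  "test-container",
            Image: "docker.io/library/busybox:1.29",
            Args:  []string{"override", "arguments"},
        }
    }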
SSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 15:02:25.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-855
I1221 15:02:25.209776       9 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-855, replica count: 1
I1221 15:02:26.260317       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1221 15:02:27.260590       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1221 15:02:28.260864       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1221 15:02:29.261213       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1221 15:02:30.261560       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1221 15:02:31.261913       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1221 15:02:32.262151       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1221 15:02:33.262472       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 21 15:02:33.485: INFO: Created: latency-svc-gsxs9
Dec 21 15:02:33.532: INFO: Got endpoints: latency-svc-gsxs9 [169.583687ms]
Dec 21 15:02:33.672: INFO: Created: latency-svc-5smdw
Dec 21 15:02:33.686: INFO: Got endpoints: latency-svc-5smdw [153.642169ms]
Dec 21 15:02:33.745: INFO: Created: latency-svc-26g9m
Dec 21 15:02:33.757: INFO: Got endpoints: latency-svc-26g9m [224.787373ms]
Dec 21 15:02:33.841: INFO: Created: latency-svc-6gwvt
Dec 21 15:02:33.870: INFO: Got endpoints: latency-svc-6gwvt [337.109444ms]
Dec 21 15:02:33.918: INFO: Created: latency-svc-khx4t
Dec 21 15:02:33.937: INFO: Got endpoints: latency-svc-khx4t [404.145796ms]
Dec 21 15:02:34.105: INFO: Created: latency-svc-h8mgm
Dec 21 15:02:34.111: INFO: Got endpoints: latency-svc-h8mgm [578.053182ms]
Dec 21 15:02:34.172: INFO: Created: latency-svc-qf4cx
Dec 21 15:02:34.247: INFO: Got endpoints: latency-svc-qf4cx [714.674555ms]
Dec 21 15:02:34.253: INFO: Created: latency-svc-ms7gh
Dec 21 15:02:34.263: INFO: Got endpoints: latency-svc-ms7gh [730.00729ms]
Dec 21 15:02:34.334: INFO: Created: latency-svc-q4ltn
Dec 21 15:02:34.520: INFO: Got endpoints: latency-svc-q4ltn [986.641234ms]
Dec 21 15:02:34.536: INFO: Created: latency-svc-snztm
Dec 21 15:02:34.550: INFO: Got endpoints: latency-svc-snztm [1.017102904s]
Dec 21 15:02:34.612: INFO: Created: latency-svc-kwqqx
Dec 21 15:02:34.801: INFO: Got endpoints: latency-svc-kwqqx [1.268012102s]
Dec 21 15:02:34.806: INFO: Created: latency-svc-9zd84
Dec 21 15:02:34.973: INFO: Created: latency-svc-dks5x
Dec 21 15:02:34.978: INFO: Got endpoints: latency-svc-9zd84 [1.444400949s]
Dec 21 15:02:35.000: INFO: Got endpoints: latency-svc-dks5x [1.466385461s]
Dec 21 15:02:35.049: INFO: Created: latency-svc-qhlkh
Dec 21 15:02:35.052: INFO: Got endpoints: latency-svc-qhlkh [1.518394961s]
Dec 21 15:02:35.244: INFO: Created: latency-svc-gjb2p
Dec 21 15:02:35.253: INFO: Got endpoints: latency-svc-gjb2p [1.719594765s]
Dec 21 15:02:35.323: INFO: Created: latency-svc-7vxbl
Dec 21 15:02:35.331: INFO: Got endpoints: latency-svc-7vxbl [1.798084235s]
Dec 21 15:02:35.457: INFO: Created: latency-svc-k69sn
Dec 21 15:02:35.481: INFO: Got endpoints: latency-svc-k69sn [1.794040517s]
Dec 21 15:02:35.626: INFO: Created: latency-svc-9zg8q
Dec 21 15:02:35.637: INFO: Got endpoints: latency-svc-9zg8q [1.879848867s]
Dec 21 15:02:35.680: INFO: Created: latency-svc-hgxt7
Dec 21 15:02:35.684: INFO: Got endpoints: latency-svc-hgxt7 [1.814000484s]
Dec 21 15:02:35.834: INFO: Created: latency-svc-hqq7m
Dec 21 15:02:35.845: INFO: Got endpoints: latency-svc-hqq7m [1.90813247s]
Dec 21 15:02:35.899: INFO: Created: latency-svc-t529c
Dec 21 15:02:35.905: INFO: Got endpoints: latency-svc-t529c [1.793269768s]
Dec 21 15:02:36.010: INFO: Created: latency-svc-qqxmz
Dec 21 15:02:36.022: INFO: Got endpoints: latency-svc-qqxmz [1.774627688s]
Dec 21 15:02:36.065: INFO: Created: latency-svc-wp2jk
Dec 21 15:02:36.076: INFO: Got endpoints: latency-svc-wp2jk [1.813529239s]
Dec 21 15:02:36.154: INFO: Created: latency-svc-n8drm
Dec 21 15:02:36.160: INFO: Got endpoints: latency-svc-n8drm [137.382871ms]
Dec 21 15:02:36.220: INFO: Created: latency-svc-j5ngj
Dec 21 15:02:36.229: INFO: Got endpoints: latency-svc-j5ngj [1.708431885s]
Dec 21 15:02:36.353: INFO: Created: latency-svc-hl4np
Dec 21 15:02:36.354: INFO: Got endpoints: latency-svc-hl4np [1.804148002s]
Dec 21 15:02:36.395: INFO: Created: latency-svc-t66c2
Dec 21 15:02:36.399: INFO: Got endpoints: latency-svc-t66c2 [1.597886609s]
Dec 21 15:02:36.516: INFO: Created: latency-svc-wtl2t
Dec 21 15:02:36.516: INFO: Got endpoints: latency-svc-wtl2t [1.537832471s]
Dec 21 15:02:36.576: INFO: Created: latency-svc-csh5f
Dec 21 15:02:36.586: INFO: Got endpoints: latency-svc-csh5f [1.585877445s]
Dec 21 15:02:36.699: INFO: Created: latency-svc-68rvj
Dec 21 15:02:36.741: INFO: Got endpoints: latency-svc-68rvj [1.689481865s]
Dec 21 15:02:36.746: INFO: Created: latency-svc-478bz
Dec 21 15:02:36.770: INFO: Got endpoints: latency-svc-478bz [1.517285993s]
Dec 21 15:02:36.851: INFO: Created: latency-svc-dn5d2
Dec 21 15:02:36.868: INFO: Got endpoints: latency-svc-dn5d2 [1.536800666s]
Dec 21 15:02:36.931: INFO: Created: latency-svc-pkn6d
Dec 21 15:02:36.936: INFO: Got endpoints: latency-svc-pkn6d [1.455577781s]
Dec 21 15:02:37.052: INFO: Created: latency-svc-jmfzm
Dec 21 15:02:37.076: INFO: Got endpoints: latency-svc-jmfzm [1.439134385s]
Dec 21 15:02:37.130: INFO: Created: latency-svc-jlpt6
Dec 21 15:02:37.290: INFO: Got endpoints: latency-svc-jlpt6 [1.605635455s]
Dec 21 15:02:37.301: INFO: Created: latency-svc-5xnns
Dec 21 15:02:37.318: INFO: Got endpoints: latency-svc-5xnns [1.472160978s]
Dec 21 15:02:37.373: INFO: Created: latency-svc-wb55k
Dec 21 15:02:37.384: INFO: Got endpoints: latency-svc-wb55k [1.478276537s]
Dec 21 15:02:37.497: INFO: Created: latency-svc-7tt92
Dec 21 15:02:37.507: INFO: Got endpoints: latency-svc-7tt92 [1.430914568s]
Dec 21 15:02:37.703: INFO: Created: latency-svc-2sr5k
Dec 21 15:02:37.731: INFO: Got endpoints: latency-svc-2sr5k [1.570986231s]
Dec 21 15:02:37.759: INFO: Created: latency-svc-hsd4l
Dec 21 15:02:37.771: INFO: Got endpoints: latency-svc-hsd4l [1.542411714s]
Dec 21 15:02:37.870: INFO: Created: latency-svc-z86kb
Dec 21 15:02:37.880: INFO: Got endpoints: latency-svc-z86kb [1.525384898s]
Dec 21 15:02:38.037: INFO: Created: latency-svc-4xts8
Dec 21 15:02:38.065: INFO: Got endpoints: latency-svc-4xts8 [1.665365159s]
Dec 21 15:02:38.106: INFO: Created: latency-svc-c4jd5
Dec 21 15:02:38.112: INFO: Got endpoints: latency-svc-c4jd5 [1.595915561s]
Dec 21 15:02:38.211: INFO: Created: latency-svc-dzg9k
Dec 21 15:02:38.215: INFO: Got endpoints: latency-svc-dzg9k [1.628641165s]
Dec 21 15:02:38.260: INFO: Created: latency-svc-btflw
Dec 21 15:02:38.274: INFO: Got endpoints: latency-svc-btflw [1.532753557s]
Dec 21 15:02:38.379: INFO: Created: latency-svc-ntxll
Dec 21 15:02:38.457: INFO: Got endpoints: latency-svc-ntxll [1.686505838s]
Dec 21 15:02:38.467: INFO: Created: latency-svc-nwn56
Dec 21 15:02:38.904: INFO: Got endpoints: latency-svc-nwn56 [2.035057249s]
Dec 21 15:02:38.944: INFO: Created: latency-svc-9wkhp
Dec 21 15:02:38.962: INFO: Got endpoints: latency-svc-9wkhp [2.025919129s]
Dec 21 15:02:39.131: INFO: Created: latency-svc-zzvcm
Dec 21 15:02:39.147: INFO: Got endpoints: latency-svc-zzvcm [2.069841455s]
Dec 21 15:02:39.202: INFO: Created: latency-svc-kpj75
Dec 21 15:02:39.214: INFO: Got endpoints: latency-svc-kpj75 [1.924324064s]
Dec 21 15:02:39.359: INFO: Created: latency-svc-b428v
Dec 21 15:02:39.371: INFO: Got endpoints: latency-svc-b428v [2.053273466s]
Dec 21 15:02:39.425: INFO: Created: latency-svc-ljpzs
Dec 21 15:02:39.584: INFO: Got endpoints: latency-svc-ljpzs [2.199964104s]
Dec 21 15:02:39.629: INFO: Created: latency-svc-9c6bz
Dec 21 15:02:39.762: INFO: Got endpoints: latency-svc-9c6bz [2.254675341s]
Dec 21 15:02:39.782: INFO: Created: latency-svc-26npv
Dec 21 15:02:39.804: INFO: Got endpoints: latency-svc-26npv [2.072243737s]
Dec 21 15:02:39.921: INFO: Created: latency-svc-qx6zl
Dec 21 15:02:39.940: INFO: Got endpoints: latency-svc-qx6zl [2.168762977s]
Dec 21 15:02:39.966: INFO: Created: latency-svc-w8czc
Dec 21 15:02:39.976: INFO: Got endpoints: latency-svc-w8czc [2.095643677s]
Dec 21 15:02:40.092: INFO: Created: latency-svc-j7wpf
Dec 21 15:02:40.104: INFO: Got endpoints: latency-svc-j7wpf [2.039441183s]
Dec 21 15:02:40.146: INFO: Created: latency-svc-ktl8n
Dec 21 15:02:40.150: INFO: Got endpoints: latency-svc-ktl8n [2.037826422s]
Dec 21 15:02:40.272: INFO: Created: latency-svc-bqqvp
Dec 21 15:02:40.281: INFO: Got endpoints: latency-svc-bqqvp [2.066621234s]
Dec 21 15:02:40.333: INFO: Created: latency-svc-t9nhh
Dec 21 15:02:40.355: INFO: Got endpoints: latency-svc-t9nhh [2.080056507s]
Dec 21 15:02:40.356: INFO: Created: latency-svc-tjvz7
Dec 21 15:02:40.464: INFO: Got endpoints: latency-svc-tjvz7 [2.005861464s]
Dec 21 15:02:40.497: INFO: Created: latency-svc-qpb9v
Dec 21 15:02:40.506: INFO: Got endpoints: latency-svc-qpb9v [1.601552887s]
Dec 21 15:02:40.552: INFO: Created: latency-svc-mqqq8
Dec 21 15:02:40.561: INFO: Got endpoints: latency-svc-mqqq8 [1.59838667s]
Dec 21 15:02:40.646: INFO: Created: latency-svc-d5vpq
Dec 21 15:02:40.651: INFO: Got endpoints: latency-svc-d5vpq [1.503766955s]
Dec 21 15:02:40.697: INFO: Created: latency-svc-h24qq
Dec 21 15:02:40.805: INFO: Got endpoints: latency-svc-h24qq [1.590433672s]
Dec 21 15:02:40.815: INFO: Created: latency-svc-d4rxv
Dec 21 15:02:40.853: INFO: Got endpoints: latency-svc-d4rxv [1.481063594s]
Dec 21 15:02:40.868: INFO: Created: latency-svc-bp89s
Dec 21 15:02:40.871: INFO: Got endpoints: latency-svc-bp89s [1.28687435s]
Dec 21 15:02:41.069: INFO: Created: latency-svc-qn5c6
Dec 21 15:02:41.091: INFO: Got endpoints: latency-svc-qn5c6 [1.328491226s]
Dec 21 15:02:41.286: INFO: Created: latency-svc-chtv6
Dec 21 15:02:41.303: INFO: Got endpoints: latency-svc-chtv6 [1.499615368s]
Dec 21 15:02:41.468: INFO: Created: latency-svc-2n8dd
Dec 21 15:02:41.530: INFO: Got endpoints: latency-svc-2n8dd [1.59010462s]
Dec 21 15:02:41.534: INFO: Created: latency-svc-748lm
Dec 21 15:02:41.542: INFO: Got endpoints: latency-svc-748lm [1.565735694s]
Dec 21 15:02:41.755: INFO: Created: latency-svc-77244
Dec 21 15:02:41.772: INFO: Got endpoints: latency-svc-77244 [1.666958904s]
Dec 21 15:02:41.853: INFO: Created: latency-svc-8dp29
Dec 21 15:02:41.949: INFO: Got endpoints: latency-svc-8dp29 [1.799112345s]
Dec 21 15:02:41.955: INFO: Created: latency-svc-6pskq
Dec 21 15:02:41.977: INFO: Got endpoints: latency-svc-6pskq [1.695089569s]
Dec 21 15:02:42.017: INFO: Created: latency-svc-vlvr8
Dec 21 15:02:42.087: INFO: Got endpoints: latency-svc-vlvr8 [1.732039696s]
Dec 21 15:02:42.087: INFO: Created: latency-svc-ldm4p
Dec 21 15:02:42.096: INFO: Got endpoints: latency-svc-ldm4p [1.632578379s]
Dec 21 15:02:42.139: INFO: Created: latency-svc-d9k6x
Dec 21 15:02:42.148: INFO: Got endpoints: latency-svc-d9k6x [1.641353724s]
Dec 21 15:02:42.203: INFO: Created: latency-svc-r6dfq
Dec 21 15:02:42.283: INFO: Got endpoints: latency-svc-r6dfq [1.721596082s]
Dec 21 15:02:42.342: INFO: Created: latency-svc-8m4hm
Dec 21 15:02:42.343: INFO: Got endpoints: latency-svc-8m4hm [1.691711416s]
Dec 21 15:02:42.434: INFO: Created: latency-svc-56c75
Dec 21 15:02:42.442: INFO: Got endpoints: latency-svc-56c75 [1.637220984s]
Dec 21 15:02:42.494: INFO: Created: latency-svc-ftpk8
Dec 21 15:02:42.511: INFO: Got endpoints: latency-svc-ftpk8 [1.657116605s]
Dec 21 15:02:42.616: INFO: Created: latency-svc-gzbr8
Dec 21 15:02:42.633: INFO: Got endpoints: latency-svc-gzbr8 [1.762300671s]
Dec 21 15:02:42.695: INFO: Created: latency-svc-zzwqq
Dec 21 15:02:42.802: INFO: Got endpoints: latency-svc-zzwqq [1.711177461s]
Dec 21 15:02:42.848: INFO: Created: latency-svc-cjzvd
Dec 21 15:02:42.848: INFO: Got endpoints: latency-svc-cjzvd [1.544381913s]
Dec 21 15:02:42.970: INFO: Created: latency-svc-994g8
Dec 21 15:02:42.973: INFO: Got endpoints: latency-svc-994g8 [1.441520436s]
Dec 21 15:02:43.015: INFO: Created: latency-svc-zqdjn
Dec 21 15:02:43.020: INFO: Got endpoints: latency-svc-zqdjn [1.478019469s]
Dec 21 15:02:43.199: INFO: Created: latency-svc-glcnc
Dec 21 15:02:43.211: INFO: Got endpoints: latency-svc-glcnc [1.438873376s]
Dec 21 15:02:43.256: INFO: Created: latency-svc-6vxmb
Dec 21 15:02:43.270: INFO: Got endpoints: latency-svc-6vxmb [1.320568681s]
Dec 21 15:02:43.440: INFO: Created: latency-svc-8zt7z
Dec 21 15:02:43.452: INFO: Got endpoints: latency-svc-8zt7z [1.474791053s]
Dec 21 15:02:43.512: INFO: Created: latency-svc-lfz6s
Dec 21 15:02:43.528: INFO: Got endpoints: latency-svc-lfz6s [1.440419019s]
Dec 21 15:02:43.683: INFO: Created: latency-svc-szfqh
Dec 21 15:02:43.693: INFO: Got endpoints: latency-svc-szfqh [1.596776156s]
Dec 21 15:02:43.742: INFO: Created: latency-svc-nqssw
Dec 21 15:02:43.864: INFO: Created: latency-svc-k4cw7
Dec 21 15:02:43.869: INFO: Got endpoints: latency-svc-nqssw [1.721496771s]
Dec 21 15:02:43.888: INFO: Got endpoints: latency-svc-k4cw7 [1.604341667s]
Dec 21 15:02:44.170: INFO: Created: latency-svc-5v7k7
Dec 21 15:02:44.183: INFO: Got endpoints: latency-svc-5v7k7 [1.840777007s]
Dec 21 15:02:44.400: INFO: Created: latency-svc-ntfh5
Dec 21 15:02:44.447: INFO: Got endpoints: latency-svc-ntfh5 [2.004008529s]
Dec 21 15:02:44.462: INFO: Created: latency-svc-s7stf
Dec 21 15:02:44.470: INFO: Got endpoints: latency-svc-s7stf [1.95841046s]
Dec 21 15:02:44.588: INFO: Created: latency-svc-xpfth
Dec 21 15:02:44.668: INFO: Created: latency-svc-cnrst
Dec 21 15:02:44.674: INFO: Got endpoints: latency-svc-xpfth [2.039999966s]
Dec 21 15:02:44.801: INFO: Got endpoints: latency-svc-cnrst [1.998631656s]
Dec 21 15:02:44.817: INFO: Created: latency-svc-kr2gf
Dec 21 15:02:44.869: INFO: Got endpoints: latency-svc-kr2gf [2.020835903s]
Dec 21 15:02:45.063: INFO: Created: latency-svc-zvqj7
Dec 21 15:02:45.082: INFO: Got endpoints: latency-svc-zvqj7 [2.108990647s]
Dec 21 15:02:45.143: INFO: Created: latency-svc-cmk4b
Dec 21 15:02:45.343: INFO: Got endpoints: latency-svc-cmk4b [2.323281081s]
Dec 21 15:02:45.353: INFO: Created: latency-svc-5txkr
Dec 21 15:02:45.364: INFO: Got endpoints: latency-svc-5txkr [2.152449579s]
Dec 21 15:02:45.414: INFO: Created: latency-svc-lnjg6
Dec 21 15:02:45.580: INFO: Got endpoints: latency-svc-lnjg6 [2.30944965s]
Dec 21 15:02:45.594: INFO: Created: latency-svc-lqjwb
Dec 21 15:02:45.603: INFO: Got endpoints: latency-svc-lqjwb [2.15126232s]
Dec 21 15:02:45.655: INFO: Created: latency-svc-pkg26
Dec 21 15:02:45.661: INFO: Got endpoints: latency-svc-pkg26 [2.133658234s]
Dec 21 15:02:45.792: INFO: Created: latency-svc-852kp
Dec 21 15:02:45.862: INFO: Got endpoints: latency-svc-852kp [2.167350319s]
Dec 21 15:02:45.863: INFO: Created: latency-svc-4rg8j
Dec 21 15:02:45.982: INFO: Got endpoints: latency-svc-4rg8j [2.11285638s]
Dec 21 15:02:45.991: INFO: Created: latency-svc-gmx8w
Dec 21 15:02:46.020: INFO: Got endpoints: latency-svc-gmx8w [2.131654621s]
Dec 21 15:02:46.049: INFO: Created: latency-svc-f65tq
Dec 21 15:02:46.059: INFO: Got endpoints: latency-svc-f65tq [1.874955817s]
Dec 21 15:02:46.148: INFO: Created: latency-svc-vgcq6
Dec 21 15:02:46.148: INFO: Got endpoints: latency-svc-vgcq6 [1.701564697s]
Dec 21 15:02:46.194: INFO: Created: latency-svc-n2gh2
Dec 21 15:02:46.199: INFO: Got endpoints: latency-svc-n2gh2 [1.729318486s]
Dec 21 15:02:46.344: INFO: Created: latency-svc-zlm7v
Dec 21 15:02:46.351: INFO: Got endpoints: latency-svc-zlm7v [1.676563434s]
Dec 21 15:02:46.392: INFO: Created: latency-svc-qtd2k
Dec 21 15:02:46.400: INFO: Got endpoints: latency-svc-qtd2k [1.597998564s]
Dec 21 15:02:46.526: INFO: Created: latency-svc-vm6bc
Dec 21 15:02:46.559: INFO: Got endpoints: latency-svc-vm6bc [1.689244429s]
Dec 21 15:02:46.581: INFO: Created: latency-svc-w75dn
Dec 21 15:02:46.581: INFO: Got endpoints: latency-svc-w75dn [1.49938192s]
Dec 21 15:02:46.778: INFO: Created: latency-svc-rt4sk
Dec 21 15:02:46.787: INFO: Got endpoints: latency-svc-rt4sk [1.443087751s]
Dec 21 15:02:46.832: INFO: Created: latency-svc-jtjp9
Dec 21 15:02:46.836: INFO: Got endpoints: latency-svc-jtjp9 [1.471901408s]
Dec 21 15:02:46.875: INFO: Created: latency-svc-qbbx2
Dec 21 15:02:46.990: INFO: Got endpoints: latency-svc-qbbx2 [1.410024211s]
Dec 21 15:02:47.015: INFO: Created: latency-svc-88hsb
Dec 21 15:02:47.024: INFO: Got endpoints: latency-svc-88hsb [1.420732469s]
Dec 21 15:02:47.069: INFO: Created: latency-svc-wtg5z
Dec 21 15:02:47.076: INFO: Got endpoints: latency-svc-wtg5z [1.414517448s]
Dec 21 15:02:47.182: INFO: Created: latency-svc-bj958
Dec 21 15:02:47.192: INFO: Got endpoints: latency-svc-bj958 [1.329833985s]
Dec 21 15:02:47.239: INFO: Created: latency-svc-6n9p8
Dec 21 15:02:47.249: INFO: Got endpoints: latency-svc-6n9p8 [1.266696016s]
Dec 21 15:02:47.358: INFO: Created: latency-svc-6rx5n
Dec 21 15:02:47.363: INFO: Got endpoints: latency-svc-6rx5n [1.342907344s]
Dec 21 15:02:47.416: INFO: Created: latency-svc-x8xcc
Dec 21 15:02:47.420: INFO: Got endpoints: latency-svc-x8xcc [1.3614541s]
Dec 21 15:02:47.570: INFO: Created: latency-svc-lpwhd
Dec 21 15:02:47.581: INFO: Got endpoints: latency-svc-lpwhd [1.432601354s]
Dec 21 15:02:47.652: INFO: Created: latency-svc-rpmqp
Dec 21 15:02:47.653: INFO: Got endpoints: latency-svc-rpmqp [1.453461577s]
Dec 21 15:02:47.887: INFO: Created: latency-svc-h62c5
Dec 21 15:02:47.902: INFO: Got endpoints: latency-svc-h62c5 [1.55091481s]
Dec 21 15:02:47.964: INFO: Created: latency-svc-kzgxp
Dec 21 15:02:48.073: INFO: Got endpoints: latency-svc-kzgxp [1.673095598s]
Dec 21 15:02:48.083: INFO: Created: latency-svc-7zklw
Dec 21 15:02:48.100: INFO: Got endpoints: latency-svc-7zklw [1.540879844s]
Dec 21 15:02:48.153: INFO: Created: latency-svc-s94hx
Dec 21 15:02:48.229: INFO: Got endpoints: latency-svc-s94hx [1.647199747s]
Dec 21 15:02:48.241: INFO: Created: latency-svc-l22td
Dec 21 15:02:48.285: INFO: Got endpoints: latency-svc-l22td [1.498635847s]
Dec 21 15:02:48.393: INFO: Created: latency-svc-nrk79
Dec 21 15:02:48.408: INFO: Got endpoints: latency-svc-nrk79 [1.571932493s]
Dec 21 15:02:48.443: INFO: Created: latency-svc-5dqzp
Dec 21 15:02:48.481: INFO: Got endpoints: latency-svc-5dqzp [1.490486609s]
Dec 21 15:02:48.579: INFO: Created: latency-svc-6xfq6
Dec 21 15:02:48.599: INFO: Got endpoints: latency-svc-6xfq6 [1.574861849s]
Dec 21 15:02:48.641: INFO: Created: latency-svc-fvwz9
Dec 21 15:02:48.727: INFO: Got endpoints: latency-svc-fvwz9 [1.650524485s]
Dec 21 15:02:48.736: INFO: Created: latency-svc-ftd7d
Dec 21 15:02:48.740: INFO: Got endpoints: latency-svc-ftd7d [1.548073799s]
Dec 21 15:02:48.816: INFO: Created: latency-svc-7rscp
Dec 21 15:02:48.909: INFO: Got endpoints: latency-svc-7rscp [1.658635406s]
Dec 21 15:02:48.929: INFO: Created: latency-svc-sbrzp
Dec 21 15:02:48.945: INFO: Got endpoints: latency-svc-sbrzp [1.582448892s]
Dec 21 15:02:48.980: INFO: Created: latency-svc-pchcd
Dec 21 15:02:48.997: INFO: Got endpoints: latency-svc-pchcd [1.576176789s]
Dec 21 15:02:49.165: INFO: Created: latency-svc-d25ld
Dec 21 15:02:49.176: INFO: Got endpoints: latency-svc-d25ld [1.594636591s]
Dec 21 15:02:49.391: INFO: Created: latency-svc-xq8ds
Dec 21 15:02:49.407: INFO: Got endpoints: latency-svc-xq8ds [1.75425112s]
Dec 21 15:02:49.688: INFO: Created: latency-svc-c9twl
Dec 21 15:02:49.699: INFO: Got endpoints: latency-svc-c9twl [1.796666831s]
Dec 21 15:02:49.783: INFO: Created: latency-svc-cq2lv
Dec 21 15:02:49.932: INFO: Got endpoints: latency-svc-cq2lv [1.858753175s]
Dec 21 15:02:50.029: INFO: Created: latency-svc-bf5q6
Dec 21 15:02:50.115: INFO: Got endpoints: latency-svc-bf5q6 [2.014320029s]
Dec 21 15:02:50.444: INFO: Created: latency-svc-t655k
Dec 21 15:02:50.469: INFO: Got endpoints: latency-svc-t655k [2.239759253s]
Dec 21 15:02:50.521: INFO: Created: latency-svc-tpqhw
Dec 21 15:02:50.669: INFO: Got endpoints: latency-svc-tpqhw [2.383325258s]
Dec 21 15:02:50.680: INFO: Created: latency-svc-vs6vl
Dec 21 15:02:50.687: INFO: Got endpoints: latency-svc-vs6vl [2.278648263s]
Dec 21 15:02:50.755: INFO: Created: latency-svc-vbfms
Dec 21 15:02:50.759: INFO: Got endpoints: latency-svc-vbfms [2.278161164s]
Dec 21 15:02:50.945: INFO: Created: latency-svc-kcjpb
Dec 21 15:02:50.957: INFO: Got endpoints: latency-svc-kcjpb [2.357705025s]
Dec 21 15:02:51.010: INFO: Created: latency-svc-cjvnt
Dec 21 15:02:51.011: INFO: Got endpoints: latency-svc-cjvnt [2.284680847s]
Dec 21 15:02:51.095: INFO: Created: latency-svc-rbpqf
Dec 21 15:02:51.124: INFO: Got endpoints: latency-svc-rbpqf [2.384221979s]
Dec 21 15:02:51.129: INFO: Created: latency-svc-jsn85
Dec 21 15:02:51.132: INFO: Got endpoints: latency-svc-jsn85 [2.22325094s]
Dec 21 15:02:51.177: INFO: Created: latency-svc-jcfpx
Dec 21 15:02:51.270: INFO: Got endpoints: latency-svc-jcfpx [2.323984219s]
Dec 21 15:02:51.318: INFO: Created: latency-svc-w5lp7
Dec 21 15:02:51.337: INFO: Got endpoints: latency-svc-w5lp7 [2.339851751s]
Dec 21 15:02:51.430: INFO: Created: latency-svc-d62c8
Dec 21 15:02:51.430: INFO: Got endpoints: latency-svc-d62c8 [2.253839063s]
Dec 21 15:02:51.469: INFO: Created: latency-svc-p2hws
Dec 21 15:02:51.512: INFO: Got endpoints: latency-svc-p2hws [2.104363637s]
Dec 21 15:02:51.535: INFO: Created: latency-svc-47cft
Dec 21 15:02:51.626: INFO: Got endpoints: latency-svc-47cft [1.927069735s]
Dec 21 15:02:51.632: INFO: Created: latency-svc-54wwf
Dec 21 15:02:51.647: INFO: Got endpoints: latency-svc-54wwf [1.715153588s]
Dec 21 15:02:51.731: INFO: Created: latency-svc-gsgxq
Dec 21 15:02:51.893: INFO: Got endpoints: latency-svc-gsgxq [1.778461815s]
Dec 21 15:02:51.902: INFO: Created: latency-svc-zdvhc
Dec 21 15:02:51.935: INFO: Got endpoints: latency-svc-zdvhc [1.46620108s]
Dec 21 15:02:52.027: INFO: Created: latency-svc-knmcj
Dec 21 15:02:52.037: INFO: Got endpoints: latency-svc-knmcj [1.367960431s]
Dec 21 15:02:52.078: INFO: Created: latency-svc-2fbdv
Dec 21 15:02:52.084: INFO: Got endpoints: latency-svc-2fbdv [1.397814713s]
Dec 21 15:02:52.211: INFO: Created: latency-svc-7tl28
Dec 21 15:02:52.211: INFO: Got endpoints: latency-svc-7tl28 [1.451871746s]
Dec 21 15:02:52.258: INFO: Created: latency-svc-vhqkl
Dec 21 15:02:52.268: INFO: Got endpoints: latency-svc-vhqkl [1.310448853s]
Dec 21 15:02:52.370: INFO: Created: latency-svc-79m5m
Dec 21 15:02:52.375: INFO: Got endpoints: latency-svc-79m5m [1.363482056s]
Dec 21 15:02:52.444: INFO: Created: latency-svc-qmwmw
Dec 21 15:02:52.454: INFO: Got endpoints: latency-svc-qmwmw [1.329534417s]
Dec 21 15:02:52.558: INFO: Created: latency-svc-84gvs
Dec 21 15:02:52.594: INFO: Got endpoints: latency-svc-84gvs [1.462301312s]
Dec 21 15:02:52.638: INFO: Created: latency-svc-wwwdz
Dec 21 15:02:52.718: INFO: Got endpoints: latency-svc-wwwdz [1.448580152s]
Dec 21 15:02:52.769: INFO: Created: latency-svc-lxtvx
Dec 21 15:02:52.777: INFO: Got endpoints: latency-svc-lxtvx [1.440360539s]
Dec 21 15:02:52.840: INFO: Created: latency-svc-lljmr
Dec 21 15:02:52.886: INFO: Got endpoints: latency-svc-lljmr [1.456255358s]
Dec 21 15:02:52.908: INFO: Created: latency-svc-6xmvx
Dec 21 15:02:52.928: INFO: Got endpoints: latency-svc-6xmvx [1.415887911s]
Dec 21 15:02:52.993: INFO: Created: latency-svc-gqjhr
Dec 21 15:02:53.043: INFO: Got endpoints: latency-svc-gqjhr [1.41722652s]
Dec 21 15:02:53.082: INFO: Created: latency-svc-xcsk4
Dec 21 15:02:53.101: INFO: Got endpoints: latency-svc-xcsk4 [1.452916332s]
Dec 21 15:02:53.140: INFO: Created: latency-svc-v7r8k
Dec 21 15:02:53.220: INFO: Got endpoints: latency-svc-v7r8k [1.325911912s]
Dec 21 15:02:53.251: INFO: Created: latency-svc-7msvp
Dec 21 15:02:53.282: INFO: Got endpoints: latency-svc-7msvp [1.346482303s]
Dec 21 15:02:53.432: INFO: Created: latency-svc-xwzgw
Dec 21 15:02:53.459: INFO: Got endpoints: latency-svc-xwzgw [1.421491685s]
Dec 21 15:02:53.495: INFO: Created: latency-svc-6zj27
Dec 21 15:02:53.495: INFO: Got endpoints: latency-svc-6zj27 [1.410199202s]
Dec 21 15:02:53.620: INFO: Created: latency-svc-k2gtx
Dec 21 15:02:53.646: INFO: Got endpoints: latency-svc-k2gtx [1.434508368s]
Dec 21 15:02:53.800: INFO: Created: latency-svc-mc5zg
Dec 21 15:02:53.819: INFO: Got endpoints: latency-svc-mc5zg [1.55174237s]
Dec 21 15:02:53.922: INFO: Created: latency-svc-vnrtb
Dec 21 15:02:53.998: INFO: Got endpoints: latency-svc-vnrtb [1.623254769s]
Dec 21 15:02:54.033: INFO: Created: latency-svc-kj2jr
Dec 21 15:02:54.067: INFO: Got endpoints: latency-svc-kj2jr [1.61324538s]
Dec 21 15:02:54.077: INFO: Created: latency-svc-gljzx
Dec 21 15:02:54.084: INFO: Got endpoints: latency-svc-gljzx [1.489775579s]
Dec 21 15:02:54.167: INFO: Created: latency-svc-7fvjz
Dec 21 15:02:54.179: INFO: Got endpoints: latency-svc-7fvjz [1.460856173s]
Dec 21 15:02:54.221: INFO: Created: latency-svc-qpfv2
Dec 21 15:02:54.221: INFO: Got endpoints: latency-svc-qpfv2 [1.443700832s]
Dec 21 15:02:54.296: INFO: Created: latency-svc-69zx6
Dec 21 15:02:54.310: INFO: Got endpoints: latency-svc-69zx6 [1.423978026s]
Dec 21 15:02:54.362: INFO: Created: latency-svc-2hs47
Dec 21 15:02:54.378: INFO: Got endpoints: latency-svc-2hs47 [1.450590984s]
Dec 21 15:02:54.512: INFO: Created: latency-svc-vjs5b
Dec 21 15:02:54.528: INFO: Got endpoints: latency-svc-vjs5b [1.484222839s]
Dec 21 15:02:54.622: INFO: Created: latency-svc-84bkt
Dec 21 15:02:54.708: INFO: Got endpoints: latency-svc-84bkt [1.60724058s]
Dec 21 15:02:54.775: INFO: Created: latency-svc-sf6f7
Dec 21 15:02:54.788: INFO: Got endpoints: latency-svc-sf6f7 [1.568306487s]
Dec 21 15:02:54.943: INFO: Created: latency-svc-8rhhf
Dec 21 15:02:55.068: INFO: Got endpoints: latency-svc-8rhhf [1.786140136s]
Dec 21 15:02:55.116: INFO: Created: latency-svc-hqlv7
Dec 21 15:02:55.116: INFO: Got endpoints: latency-svc-hqlv7 [1.657413435s]
Dec 21 15:02:55.225: INFO: Created: latency-svc-9npt7
Dec 21 15:02:55.252: INFO: Got endpoints: latency-svc-9npt7 [1.756733696s]
Dec 21 15:02:55.304: INFO: Created: latency-svc-9bsvm
Dec 21 15:02:55.429: INFO: Got endpoints: latency-svc-9bsvm [1.782312479s]
Dec 21 15:02:55.476: INFO: Created: latency-svc-sv2pr
Dec 21 15:02:55.508: INFO: Got endpoints: latency-svc-sv2pr [1.688213978s]
Dec 21 15:02:55.613: INFO: Created: latency-svc-sstct
Dec 21 15:02:55.623: INFO: Got endpoints: latency-svc-sstct [1.624675025s]
Dec 21 15:02:55.796: INFO: Created: latency-svc-5dwsz
Dec 21 15:02:55.817: INFO: Got endpoints: latency-svc-5dwsz [1.749852624s]
Dec 21 15:02:55.987: INFO: Created: latency-svc-sw5ng
Dec 21 15:02:55.999: INFO: Got endpoints: latency-svc-sw5ng [1.91440318s]
Dec 21 15:02:56.140: INFO: Created: latency-svc-jjgpf
Dec 21 15:02:56.239: INFO: Got endpoints: latency-svc-jjgpf [2.058909524s]
Dec 21 15:02:56.282: INFO: Created: latency-svc-gq8gg
Dec 21 15:02:56.292: INFO: Got endpoints: latency-svc-gq8gg [2.070530245s]
Dec 21 15:02:56.333: INFO: Created: latency-svc-hrlwr
Dec 21 15:02:56.377: INFO: Got endpoints: latency-svc-hrlwr [2.066219604s]
Dec 21 15:02:56.441: INFO: Created: latency-svc-8z7mp
Dec 21 15:02:56.441: INFO: Got endpoints: latency-svc-8z7mp [2.06215966s]
Dec 21 15:02:56.441: INFO: Latencies: [137.382871ms 153.642169ms 224.787373ms 337.109444ms 404.145796ms 578.053182ms 714.674555ms 730.00729ms 986.641234ms 1.017102904s 1.266696016s 1.268012102s 1.28687435s 1.310448853s 1.320568681s 1.325911912s 1.328491226s 1.329534417s 1.329833985s 1.342907344s 1.346482303s 1.3614541s 1.363482056s 1.367960431s 1.397814713s 1.410024211s 1.410199202s 1.414517448s 1.415887911s 1.41722652s 1.420732469s 1.421491685s 1.423978026s 1.430914568s 1.432601354s 1.434508368s 1.438873376s 1.439134385s 1.440360539s 1.440419019s 1.441520436s 1.443087751s 1.443700832s 1.444400949s 1.448580152s 1.450590984s 1.451871746s 1.452916332s 1.453461577s 1.455577781s 1.456255358s 1.460856173s 1.462301312s 1.46620108s 1.466385461s 1.471901408s 1.472160978s 1.474791053s 1.478019469s 1.478276537s 1.481063594s 1.484222839s 1.489775579s 1.490486609s 1.498635847s 1.49938192s 1.499615368s 1.503766955s 1.517285993s 1.518394961s 1.525384898s 1.532753557s 1.536800666s 1.537832471s 1.540879844s 1.542411714s 1.544381913s 1.548073799s 1.55091481s 1.55174237s 1.565735694s 1.568306487s 1.570986231s 1.571932493s 1.574861849s 1.576176789s 1.582448892s 1.585877445s 1.59010462s 1.590433672s 1.594636591s 1.595915561s 1.596776156s 1.597886609s 1.597998564s 1.59838667s 1.601552887s 1.604341667s 1.605635455s 1.60724058s 1.61324538s 1.623254769s 1.624675025s 1.628641165s 1.632578379s 1.637220984s 1.641353724s 1.647199747s 1.650524485s 1.657116605s 1.657413435s 1.658635406s 1.665365159s 1.666958904s 1.673095598s 1.676563434s 1.686505838s 1.688213978s 1.689244429s 1.689481865s 1.691711416s 1.695089569s 1.701564697s 1.708431885s 1.711177461s 1.715153588s 1.719594765s 1.721496771s 1.721596082s 1.729318486s 1.732039696s 1.749852624s 1.75425112s 1.756733696s 1.762300671s 1.774627688s 1.778461815s 1.782312479s 1.786140136s 1.793269768s 1.794040517s 1.796666831s 1.798084235s 1.799112345s 1.804148002s 1.813529239s 1.814000484s 1.840777007s 1.858753175s 1.874955817s 1.879848867s 1.90813247s 1.91440318s 1.924324064s 1.927069735s 1.95841046s 1.998631656s 2.004008529s 2.005861464s 2.014320029s 2.020835903s 2.025919129s 2.035057249s 2.037826422s 2.039441183s 2.039999966s 2.053273466s 2.058909524s 2.06215966s 2.066219604s 2.066621234s 2.069841455s 2.070530245s 2.072243737s 2.080056507s 2.095643677s 2.104363637s 2.108990647s 2.11285638s 2.131654621s 2.133658234s 2.15126232s 2.152449579s 2.167350319s 2.168762977s 2.199964104s 2.22325094s 2.239759253s 2.253839063s 2.254675341s 2.278161164s 2.278648263s 2.284680847s 2.30944965s 2.323281081s 2.323984219s 2.339851751s 2.357705025s 2.383325258s 2.384221979s]
Dec 21 15:02:56.441: INFO: 50 %ile: 1.61324538s
Dec 21 15:02:56.441: INFO: 90 %ile: 2.133658234s
Dec 21 15:02:56.441: INFO: 99 %ile: 2.383325258s
Dec 21 15:02:56.441: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 15:02:56.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-855" for this suite.
Dec 21 15:03:34.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 15:03:34.632: INFO: namespace svc-latency-855 deletion completed in 38.180726156s

• [SLOW TEST:69.535 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
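
The 50/90/99 %ile figures above are read off the sorted 200-sample list: the suite creates 200 short-lived services against a pool of backend pods and times how long each takes to get a ready Endpoints object. A rough by-hand version of the same measurement, assuming a working kubectl context (names and image are illustrative, not taken from the log):

kubectl run latency-check --image=nginx --restart=Never --port=80
kubectl expose pod latency-check --port=80 --name=latency-svc-demo
# time how long it takes for the new Service to get at least one endpoint address
time sh -c 'until kubectl get endpoints latency-svc-demo -o jsonpath="{.subsets[*].addresses[*].ip}" | grep -q .; do sleep 0.2; done'
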
SSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 15:03:34.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Dec 21 15:03:42.848: INFO: Pod pod-hostip-282abfbc-b1bf-4f5e-b4ab-2da2bd323ea5 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 15:03:42.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4623" for this suite.
Dec 21 15:04:04.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 15:04:04.992: INFO: namespace pods-4623 deletion completed in 22.138089551s

• [SLOW TEST:30.359 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
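
Once a pod is scheduled, status.hostIP carries the IP of the node it landed on, which is what this test asserts (10.96.3.65 above). A minimal sketch of the same check, with an illustrative pod name:

kubectl run hostip-demo --image=nginx --restart=Never
kubectl wait --for=condition=Ready pod/hostip-demo --timeout=120s
kubectl get pod hostip-demo -o jsonpath='{.status.hostIP}{"\n"}'
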
SSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 15:04:04.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 21 15:04:13.277: INFO: Waiting up to 5m0s for pod "client-envvars-e59caae5-aeff-459d-9ee8-c26765296a49" in namespace "pods-5606" to be "success or failure"
Dec 21 15:04:13.373: INFO: Pod "client-envvars-e59caae5-aeff-459d-9ee8-c26765296a49": Phase="Pending", Reason="", readiness=false. Elapsed: 95.49812ms
Dec 21 15:04:15.378: INFO: Pod "client-envvars-e59caae5-aeff-459d-9ee8-c26765296a49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101390065s
Dec 21 15:04:17.388: INFO: Pod "client-envvars-e59caae5-aeff-459d-9ee8-c26765296a49": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110738009s
Dec 21 15:04:19.395: INFO: Pod "client-envvars-e59caae5-aeff-459d-9ee8-c26765296a49": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117969713s
Dec 21 15:04:21.402: INFO: Pod "client-envvars-e59caae5-aeff-459d-9ee8-c26765296a49": Phase="Pending", Reason="", readiness=false. Elapsed: 8.124778856s
Dec 21 15:04:23.410: INFO: Pod "client-envvars-e59caae5-aeff-459d-9ee8-c26765296a49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.132539236s
STEP: Saw pod success
Dec 21 15:04:23.410: INFO: Pod "client-envvars-e59caae5-aeff-459d-9ee8-c26765296a49" satisfied condition "success or failure"
Dec 21 15:04:23.414: INFO: Trying to get logs from node iruya-node pod client-envvars-e59caae5-aeff-459d-9ee8-c26765296a49 container env3cont: 
STEP: delete the pod
Dec 21 15:04:23.638: INFO: Waiting for pod client-envvars-e59caae5-aeff-459d-9ee8-c26765296a49 to disappear
Dec 21 15:04:23.687: INFO: Pod client-envvars-e59caae5-aeff-459d-9ee8-c26765296a49 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 15:04:23.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5606" for this suite.
Dec 21 15:05:09.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 15:05:09.929: INFO: namespace pods-5606 deletion completed in 46.229839671s

• [SLOW TEST:64.937 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
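
The kubelet injects Docker-links-style environment variables for every Service that exists in the pod's namespace at container start, which is what the client-envvars pod verifies. A sketch of the behavior with illustrative names (a service called fooservice yields FOOSERVICE_SERVICE_HOST and FOOSERVICE_SERVICE_PORT):

kubectl create deployment backend --image=nginx
kubectl expose deployment backend --name=fooservice --port=8765 --target-port=80
# a pod created *after* the service sees it in its environment
kubectl run envcheck --image=busybox --restart=Never -- env
kubectl logs envcheck | grep FOOSERVICE    # once the pod has run
# FOOSERVICE_SERVICE_HOST=10.x.x.x
# FOOSERVICE_SERVICE_PORT=8765
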
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 15:05:09.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 21 15:05:10.006: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fbd862b4-dcf1-4391-9f57-3c20ed554532" in namespace "downward-api-3566" to be "success or failure"
Dec 21 15:05:10.073: INFO: Pod "downwardapi-volume-fbd862b4-dcf1-4391-9f57-3c20ed554532": Phase="Pending", Reason="", readiness=false. Elapsed: 66.96976ms
Dec 21 15:05:12.088: INFO: Pod "downwardapi-volume-fbd862b4-dcf1-4391-9f57-3c20ed554532": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082021996s
Dec 21 15:05:14.118: INFO: Pod "downwardapi-volume-fbd862b4-dcf1-4391-9f57-3c20ed554532": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111774934s
Dec 21 15:05:16.134: INFO: Pod "downwardapi-volume-fbd862b4-dcf1-4391-9f57-3c20ed554532": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12808904s
Dec 21 15:05:18.142: INFO: Pod "downwardapi-volume-fbd862b4-dcf1-4391-9f57-3c20ed554532": Phase="Pending", Reason="", readiness=false. Elapsed: 8.136484945s
Dec 21 15:05:20.188: INFO: Pod "downwardapi-volume-fbd862b4-dcf1-4391-9f57-3c20ed554532": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.182155727s
STEP: Saw pod success
Dec 21 15:05:20.188: INFO: Pod "downwardapi-volume-fbd862b4-dcf1-4391-9f57-3c20ed554532" satisfied condition "success or failure"
Dec 21 15:05:20.194: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-fbd862b4-dcf1-4391-9f57-3c20ed554532 container client-container: 
STEP: delete the pod
Dec 21 15:05:20.919: INFO: Waiting for pod downwardapi-volume-fbd862b4-dcf1-4391-9f57-3c20ed554532 to disappear
Dec 21 15:05:20.929: INFO: Pod downwardapi-volume-fbd862b4-dcf1-4391-9f57-3c20ed554532 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 15:05:20.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3566" for this suite.
Dec 21 15:05:26.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 15:05:27.075: INFO: namespace downward-api-3566 deletion completed in 6.139388826s

• [SLOW TEST:17.146 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
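
A downwardAPI volume maps pod metadata onto files; "podname only" means a single item backed by metadata.name. A minimal manifest of the kind this test builds (names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
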
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 15:05:27.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Dec 21 15:05:39.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-d617330c-c72c-44ff-9ce1-82a6315ee4ca -c busybox-main-container --namespace=emptydir-7351 -- cat /usr/share/volumeshare/shareddata.txt'
Dec 21 15:05:40.002: INFO: stderr: ""
Dec 21 15:05:40.002: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 15:05:40.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7351" for this suite.
Dec 21 15:05:46.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 15:05:46.151: INFO: namespace emptydir-7351 deletion completed in 6.132123904s

• [SLOW TEST:19.075 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
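
An emptyDir volume mounted into two containers of one pod acts as shared storage: whatever one container writes, the other can read, which is where the "Hello from the busy-box sub-container" stdout above comes from. A minimal sketch (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo
spec:
  volumes:
  - name: share
    emptyDir: {}
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: share
      mountPath: /usr/share/volumeshare
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: share
      mountPath: /usr/share/volumeshare
EOF
kubectl exec shared-volume-demo -c reader -- cat /usr/share/volumeshare/shareddata.txt
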
SSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 15:05:46.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 21 15:05:58.795: INFO: Successfully updated pod "pod-update-40749445-02ba-4d22-80d4-64e0ada6255d"
STEP: verifying the updated pod is in kubernetes
Dec 21 15:05:58.839: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 15:05:58.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3423" for this suite.
Dec 21 15:06:20.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 15:06:20.973: INFO: namespace pods-3423 deletion completed in 22.12787305s

• [SLOW TEST:34.821 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
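
Only a few fields of a running pod are mutable in place (labels, annotations, container images, activeDeadlineSeconds); this test flips a label and re-reads the pod. The same update by hand, with an illustrative pod name:

kubectl label pod pod-update-demo time=updated --overwrite
# or, equivalently, as a JSON merge patch:
kubectl patch pod pod-update-demo --type=merge -p '{"metadata":{"labels":{"time":"updated"}}}'
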
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 15:06:20.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-b9cd0ff5-f235-45b4-9bd6-c408f68dd0ee
Dec 21 15:06:21.115: INFO: Pod name my-hostname-basic-b9cd0ff5-f235-45b4-9bd6-c408f68dd0ee: Found 0 pods out of 1
Dec 21 15:06:26.128: INFO: Pod name my-hostname-basic-b9cd0ff5-f235-45b4-9bd6-c408f68dd0ee: Found 1 pod out of 1
Dec 21 15:06:26.128: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-b9cd0ff5-f235-45b4-9bd6-c408f68dd0ee" are running
Dec 21 15:06:30.175: INFO: Pod "my-hostname-basic-b9cd0ff5-f235-45b4-9bd6-c408f68dd0ee-bpmgz" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-21 15:06:21 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-21 15:06:21 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-b9cd0ff5-f235-45b4-9bd6-c408f68dd0ee]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-21 15:06:21 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-b9cd0ff5-f235-45b4-9bd6-c408f68dd0ee]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-21 15:06:21 +0000 UTC Reason: Message:}])
Dec 21 15:06:30.176: INFO: Trying to dial the pod
Dec 21 15:06:35.230: INFO: Controller my-hostname-basic-b9cd0ff5-f235-45b4-9bd6-c408f68dd0ee: Got expected result from replica 1 [my-hostname-basic-b9cd0ff5-f235-45b4-9bd6-c408f68dd0ee-bpmgz]: "my-hostname-basic-b9cd0ff5-f235-45b4-9bd6-c408f68dd0ee-bpmgz", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 15:06:35.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1681" for this suite.
Dec 21 15:06:41.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 15:06:41.427: INFO: namespace replication-controller-1681 deletion completed in 6.189498383s

• [SLOW TEST:20.454 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
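
A ReplicationController of the shape this test creates, sketched with the serve-hostname test image of this era (the image name is an assumption; any HTTP server that replies with its own hostname works). The test then proxies a GET to each replica and expects the pod's name back, as in the "Got expected result from replica 1" line:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
        ports:
        - containerPort: 9376
EOF
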
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 15:06:41.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-75e017ec-ad47-48e5-9f43-b0cf59cca0fc in namespace container-probe-2495
Dec 21 15:06:51.542: INFO: Started pod busybox-75e017ec-ad47-48e5-9f43-b0cf59cca0fc in namespace container-probe-2495
STEP: checking the pod's current state and verifying that restartCount is present
Dec 21 15:06:51.545: INFO: Initial restart count of pod busybox-75e017ec-ad47-48e5-9f43-b0cf59cca0fc is 0
Dec 21 15:07:43.838: INFO: Restart count of pod container-probe-2495/busybox-75e017ec-ad47-48e5-9f43-b0cf59cca0fc is now 1 (52.29275748s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 15:07:43.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2495" for this suite.
Dec 21 15:07:49.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 15:07:50.051: INFO: namespace container-probe-2495 deletion completed in 6.108554779s

• [SLOW TEST:68.624 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
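
The ~52s to the first restart is consistent with a container that deletes its own health file after 30 seconds and a probe that must then fail a few consecutive periods. A minimal sketch of such a pod, with illustrative timings:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: liveness
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 3
EOF
kubectl get pod liveness-exec-demo -w    # watch RESTARTS go from 0 to 1
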
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 15:07:50.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-6b12cade-dd11-4f11-843c-9305ddc2812d
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 15:07:50.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-608" for this suite.
Dec 21 15:07:56.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 15:07:56.288: INFO: namespace secrets-608 deletion completed in 6.181933405s

• [SLOW TEST:6.237 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
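
Secret data keys must be valid file-name-like identifiers, so an empty key is rejected by API-server validation before anything is stored, which is all this test asserts. A sketch that reproduces the failure (name and value illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-demo
data:
  "": dGVzdA==
EOF
# expected: the apply fails with a validation error saying the data key ""
# is invalid (keys must be non-empty names of alphanumerics, '-', '_' or '.')
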
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 15:07:56.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 15:08:04.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-5641" for this suite.
Dec 21 15:08:10.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 15:08:10.808: INFO: namespace emptydir-wrapper-5641 deletion completed in 6.172986538s

• [SLOW TEST:14.519 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
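
Secret and configMap volumes are materialized by the kubelet inside internally generated emptyDir "wrappers"; this test mounts both kinds in one pod and checks that the wrappers do not collide. A minimal sketch (names illustrative):

kubectl create secret generic wrapper-secret --from-literal=data-1=value-1
kubectl create configmap wrapper-configmap --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: wrapper-demo
spec:
  restartPolicy: Never
  containers:
  - name: check
    image: busybox
    command: ["sh", "-c", "ls /etc/secret-volume /etc/configmap-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: wrapper-secret
  - name: configmap-volume
    configMap:
      name: wrapper-configmap
EOF
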
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 15:08:10.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-2b1c40c3-26ad-4689-a68b-0326d12a49b3
STEP: Creating a pod to test consume configMaps
Dec 21 15:08:11.006: INFO: Waiting up to 5m0s for pod "pod-configmaps-43f4e7eb-dfd1-445a-ab64-1d687f3e6aff" in namespace "configmap-619" to be "success or failure"
Dec 21 15:08:11.137: INFO: Pod "pod-configmaps-43f4e7eb-dfd1-445a-ab64-1d687f3e6aff": Phase="Pending", Reason="", readiness=false. Elapsed: 130.744475ms
Dec 21 15:08:13.143: INFO: Pod "pod-configmaps-43f4e7eb-dfd1-445a-ab64-1d687f3e6aff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137041962s
Dec 21 15:08:15.149: INFO: Pod "pod-configmaps-43f4e7eb-dfd1-445a-ab64-1d687f3e6aff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143371713s
Dec 21 15:08:17.160: INFO: Pod "pod-configmaps-43f4e7eb-dfd1-445a-ab64-1d687f3e6aff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.154228451s
Dec 21 15:08:19.176: INFO: Pod "pod-configmaps-43f4e7eb-dfd1-445a-ab64-1d687f3e6aff": Phase="Running", Reason="", readiness=true. Elapsed: 8.169776183s
Dec 21 15:08:21.189: INFO: Pod "pod-configmaps-43f4e7eb-dfd1-445a-ab64-1d687f3e6aff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.1826234s
STEP: Saw pod success
Dec 21 15:08:21.189: INFO: Pod "pod-configmaps-43f4e7eb-dfd1-445a-ab64-1d687f3e6aff" satisfied condition "success or failure"
Dec 21 15:08:21.192: INFO: Trying to get logs from node iruya-node pod pod-configmaps-43f4e7eb-dfd1-445a-ab64-1d687f3e6aff container configmap-volume-test: 
STEP: delete the pod
Dec 21 15:08:21.520: INFO: Waiting for pod pod-configmaps-43f4e7eb-dfd1-445a-ab64-1d687f3e6aff to disappear
Dec 21 15:08:21.544: INFO: Pod pod-configmaps-43f4e7eb-dfd1-445a-ab64-1d687f3e6aff no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 15:08:21.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-619" for this suite.
Dec 21 15:08:27.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 15:08:27.735: INFO: namespace configmap-619 deletion completed in 6.184703751s

• [SLOW TEST:16.927 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
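
"Mappings and Item mode" means the configMap volume uses items: to remap a key onto a custom path and a per-item mode for file permissions. A minimal sketch (names illustrative):

kubectl create configmap configmap-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/configmap-volume
  volumes:
  - name: cfg
    configMap:
      name: configmap-demo
      items:
      - key: data-1
        path: path/to/data-1
        mode: 0400
EOF
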
S
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 15:08:27.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 21 15:08:27.909: INFO: Waiting up to 5m0s for pod "downward-api-81d02285-5c2b-4209-abb3-b33d76b567e2" in namespace "downward-api-2753" to be "success or failure"
Dec 21 15:08:27.914: INFO: Pod "downward-api-81d02285-5c2b-4209-abb3-b33d76b567e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.599516ms
Dec 21 15:08:29.921: INFO: Pod "downward-api-81d02285-5c2b-4209-abb3-b33d76b567e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012190671s
Dec 21 15:08:31.927: INFO: Pod "downward-api-81d02285-5c2b-4209-abb3-b33d76b567e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01771472s
Dec 21 15:08:33.934: INFO: Pod "downward-api-81d02285-5c2b-4209-abb3-b33d76b567e2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024517975s
Dec 21 15:08:35.941: INFO: Pod "downward-api-81d02285-5c2b-4209-abb3-b33d76b567e2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.031371223s
Dec 21 15:08:37.948: INFO: Pod "downward-api-81d02285-5c2b-4209-abb3-b33d76b567e2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.038726516s
Dec 21 15:08:39.954: INFO: Pod "downward-api-81d02285-5c2b-4209-abb3-b33d76b567e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.044934572s
STEP: Saw pod success
Dec 21 15:08:39.954: INFO: Pod "downward-api-81d02285-5c2b-4209-abb3-b33d76b567e2" satisfied condition "success or failure"
Dec 21 15:08:39.957: INFO: Trying to get logs from node iruya-node pod downward-api-81d02285-5c2b-4209-abb3-b33d76b567e2 container dapi-container: 
STEP: delete the pod
Dec 21 15:08:40.020: INFO: Waiting for pod downward-api-81d02285-5c2b-4209-abb3-b33d76b567e2 to disappear
Dec 21 15:08:40.042: INFO: Pod downward-api-81d02285-5c2b-4209-abb3-b33d76b567e2 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 15:08:40.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2753" for this suite.
Dec 21 15:08:46.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 15:08:46.231: INFO: namespace downward-api-2753 deletion completed in 6.18425498s

• [SLOW TEST:18.495 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
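
resourceFieldRef exposes a container's own requests/limits as environment variables; values are divided by an optional divisor (default "1"), so fractional CPUs round up to whole cores unless you set e.g. divisor: 1m. A minimal sketch (names and amounts illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep -E '(CPU|MEMORY)_(REQUEST|LIMIT)'"]
    resources:
      requests: { cpu: 250m, memory: 32Mi }
      limits:   { cpu: 500m, memory: 64Mi }
    env:
    - name: CPU_REQUEST
      valueFrom: { resourceFieldRef: { resource: requests.cpu } }
    - name: CPU_LIMIT
      valueFrom: { resourceFieldRef: { resource: limits.cpu } }
    - name: MEMORY_REQUEST
      valueFrom: { resourceFieldRef: { resource: requests.memory } }
    - name: MEMORY_LIMIT
      valueFrom: { resourceFieldRef: { resource: limits.memory } }
EOF
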
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 15:08:46.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 21 15:08:46.425: INFO: Number of nodes with available pods: 0
Dec 21 15:08:46.425: INFO: Node iruya-node is running more than one daemon pod
Dec 21 15:08:47.471: INFO: Number of nodes with available pods: 0
Dec 21 15:08:47.471: INFO: Node iruya-node is running more than one daemon pod
Dec 21 15:08:48.451: INFO: Number of nodes with available pods: 0
Dec 21 15:08:48.451: INFO: Node iruya-node is running more than one daemon pod
Dec 21 15:08:49.444: INFO: Number of nodes with available pods: 0
Dec 21 15:08:49.444: INFO: Node iruya-node is running more than one daemon pod
Dec 21 15:08:50.479: INFO: Number of nodes with available pods: 0
Dec 21 15:08:50.479: INFO: Node iruya-node is running more than one daemon pod
Dec 21 15:08:51.445: INFO: Number of nodes with available pods: 0
Dec 21 15:08:51.445: INFO: Node iruya-node is running more than one daemon pod
Dec 21 15:08:53.444: INFO: Number of nodes with available pods: 0
Dec 21 15:08:53.444: INFO: Node iruya-node is running more than one daemon pod
Dec 21 15:08:54.442: INFO: Number of nodes with available pods: 0
Dec 21 15:08:54.442: INFO: Node iruya-node is running more than one daemon pod
Dec 21 15:08:55.438: INFO: Number of nodes with available pods: 0
Dec 21 15:08:55.438: INFO: Node iruya-node is running more than one daemon pod
Dec 21 15:08:56.522: INFO: Number of nodes with available pods: 0
Dec 21 15:08:56.522: INFO: Node iruya-node is running more than one daemon pod
Dec 21 15:08:57.439: INFO: Number of nodes with available pods: 1
Dec 21 15:08:57.439: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 21 15:08:58.449: INFO: Number of nodes with available pods: 2
Dec 21 15:08:58.449: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Dec 21 15:08:58.507: INFO: Number of nodes with available pods: 1
Dec 21 15:08:58.508: INFO: Node iruya-node is running more than one daemon pod
Dec 21 15:08:59.521: INFO: Number of nodes with available pods: 1
Dec 21 15:08:59.521: INFO: Node iruya-node is running more than one daemon pod
Dec 21 15:09:00.528: INFO: Number of nodes with available pods: 1
Dec 21 15:09:00.528: INFO: Node iruya-node is running more than one daemon pod
Dec 21 15:09:01.569: INFO: Number of nodes with available pods: 1
Dec 21 15:09:01.569: INFO: Node iruya-node is running more than one daemon pod
Dec 21 15:09:02.536: INFO: Number of nodes with available pods: 1
Dec 21 15:09:02.536: INFO: Node iruya-node is running more than one daemon pod
Dec 21 15:09:03.522: INFO: Number of nodes with available pods: 1
Dec 21 15:09:03.522: INFO: Node iruya-node is running more than one daemon pod
Dec 21 15:09:04.531: INFO: Number of nodes with available pods: 1
Dec 21 15:09:04.531: INFO: Node iruya-node is running more than one daemon pod
Dec 21 15:09:05.523: INFO: Number of nodes with available pods: 1
Dec 21 15:09:05.523: INFO: Node iruya-node is running more than one daemon pod
Dec 21 15:09:06.529: INFO: Number of nodes with available pods: 1
Dec 21 15:09:06.529: INFO: Node iruya-node is running more than one daemon pod
Dec 21 15:09:07.527: INFO: Number of nodes with available pods: 1
Dec 21 15:09:07.527: INFO: Node iruya-node is running more than one daemon pod
Dec 21 15:09:08.528: INFO: Number of nodes with available pods: 1
Dec 21 15:09:08.528: INFO: Node iruya-node is running more than one daemon pod
Dec 21 15:09:09.530: INFO: Number of nodes with available pods: 1
Dec 21 15:09:09.530: INFO: Node iruya-node is running more than one daemon pod
Dec 21 15:09:10.550: INFO: Number of nodes with available pods: 1
Dec 21 15:09:10.550: INFO: Node iruya-node is running more than one daemon pod
Dec 21 15:09:11.529: INFO: Number of nodes with available pods: 1
Dec 21 15:09:11.529: INFO: Node iruya-node is running more than one daemon pod
Dec 21 15:09:12.536: INFO: Number of nodes with available pods: 2
Dec 21 15:09:12.536: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6671, will wait for the garbage collector to delete the pods
Dec 21 15:09:12.614: INFO: Deleting DaemonSet.extensions daemon-set took: 18.680783ms
Dec 21 15:09:13.015: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.262461ms
Dec 21 15:09:27.922: INFO: Number of nodes with available pods: 0
Dec 21 15:09:27.922: INFO: Number of running nodes: 0, number of available pods: 0
Dec 21 15:09:27.925: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6671/daemonsets","resourceVersion":"17531449"},"items":null}

Dec 21 15:09:27.928: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6671/pods","resourceVersion":"17531449"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 15:09:27.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6671" for this suite.
Dec 21 15:09:35.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 15:09:36.090: INFO: namespace daemonsets-6671 deletion completed in 8.147435619s

• [SLOW TEST:49.857 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
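
A DaemonSet of the shape this test drives: one pod per schedulable node, and deleting a pod just makes the controller revive it, which is the "Stop a daemon pod, check that the daemon pod is revived" phase above. Minimal sketch (names and image illustrative):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set-demo
spec:
  selector:
    matchLabels:
      app: daemon-set-demo
  template:
    metadata:
      labels:
        app: daemon-set-demo
    spec:
      containers:
      - name: app
        image: nginx
EOF
kubectl get pods -l app=daemon-set-demo -o wide    # expect one pod per node
kubectl delete pod <one-of-the-daemon-pods>        # the controller recreates it
kubectl get pods -l app=daemon-set-demo -w
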
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 15:09:36.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-fdb050f2-95ba-45c1-b06f-ff1dc6849c50
STEP: Creating a pod to test consume secrets
Dec 21 15:09:36.249: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d6180269-df8e-485d-9d4c-231c1fcbd43d" in namespace "projected-5983" to be "success or failure"
Dec 21 15:09:36.266: INFO: Pod "pod-projected-secrets-d6180269-df8e-485d-9d4c-231c1fcbd43d": Phase="Pending", Reason="", readiness=false. Elapsed: 17.210104ms
Dec 21 15:09:38.278: INFO: Pod "pod-projected-secrets-d6180269-df8e-485d-9d4c-231c1fcbd43d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028336646s
Dec 21 15:09:40.298: INFO: Pod "pod-projected-secrets-d6180269-df8e-485d-9d4c-231c1fcbd43d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048346938s
Dec 21 15:09:42.306: INFO: Pod "pod-projected-secrets-d6180269-df8e-485d-9d4c-231c1fcbd43d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056907782s
Dec 21 15:09:44.347: INFO: Pod "pod-projected-secrets-d6180269-df8e-485d-9d4c-231c1fcbd43d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.098068149s
STEP: Saw pod success
Dec 21 15:09:44.347: INFO: Pod "pod-projected-secrets-d6180269-df8e-485d-9d4c-231c1fcbd43d" satisfied condition "success or failure"
Dec 21 15:09:44.352: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-d6180269-df8e-485d-9d4c-231c1fcbd43d container projected-secret-volume-test: 
STEP: delete the pod
Dec 21 15:09:44.473: INFO: Waiting for pod pod-projected-secrets-d6180269-df8e-485d-9d4c-231c1fcbd43d to disappear
Dec 21 15:09:44.482: INFO: Pod pod-projected-secrets-d6180269-df8e-485d-9d4c-231c1fcbd43d no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 15:09:44.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5983" for this suite.
Dec 21 15:09:50.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 15:09:50.690: INFO: namespace projected-5983 deletion completed in 6.202907249s

• [SLOW TEST:14.599 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
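The pod this spec creates mounts a secret through a projected volume, remapping a key to a custom path with an explicit file mode (the "Item Mode" in the spec name). A minimal sketch of an equivalent pod, using client-go types, is below; the key name, target path, mode value, command, and image are illustrative assumptions, while the secret name is the one reported in the log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedSecretPod sketches a pod that consumes a projected secret volume
// with an item mapping and an explicit mode, then exits (Phase=Succeeded).
func projectedSecretPod(secretName string) *corev1.Pod {
	mode := int32(0400) // explicit per-item file mode; value assumed
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-projected-secrets-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // lets the pod reach Succeeded
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
								// Map key "data-1" (assumed) to a custom path with the mode above.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "projected-secret-volume-test",
				Image: "busybox", // assumed; the e2e suite uses its own mount-test image
				Args:  []string{"sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
}

func main() {
	pod := projectedSecretPod("projected-secret-test-map-fdb050f2-95ba-45c1-b06f-ff1dc6849c50")
	fmt.Println(pod.Spec.Volumes[0].Name)
}

The test then reads the container's logs (the "Trying to get logs" line above) to verify the file contents and mode at the remapped path.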
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 15:09:50.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 21 15:10:08.985: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 21 15:10:08.995: INFO: Pod pod-with-poststart-http-hook still exists
Dec 21 15:10:10.995: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 21 15:10:11.002: INFO: Pod pod-with-poststart-http-hook still exists
Dec 21 15:10:12.995: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 21 15:10:13.003: INFO: Pod pod-with-poststart-http-hook still exists
Dec 21 15:10:14.995: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 21 15:10:15.001: INFO: Pod pod-with-poststart-http-hook still exists
Dec 21 15:10:16.995: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 21 15:10:17.006: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 15:10:17.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6302" for this suite.
Dec 21 15:10:39.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 15:10:39.231: INFO: namespace container-lifecycle-hook-6302 deletion completed in 22.216168714s

• [SLOW TEST:48.540 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
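The spec above registers a postStart HTTP hook on the pod's container and checks that the handler pod (created in BeforeEach) received the request before the container is considered started. A minimal sketch of such a pod follows; the handler path, port, target address, and image are illustrative assumptions. Note that on the v1.15-era API used here the handler type is corev1.Handler; in client-go v0.23+ it was renamed LifecycleHandler.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// podWithPostStartHTTPHook sketches a pod whose container fires an HTTP GET
// at targetIP immediately after the container starts.
func podWithPostStartHTTPHook(targetIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "k8s.gcr.io/pause:3.1", // assumed
				Lifecycle: &corev1.Lifecycle{
					PostStart: &corev1.Handler{ // corev1.LifecycleHandler in newer client-go
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=poststart", // assumed; the test polls the handler pod for this
							Host: targetIP,              // the handler pod's IP, assumed known
							Port: intstr.FromInt(8080),  // assumed port
						},
					},
				},
			}},
		},
	}
}

func main() { fmt.Println(podWithPostStartHTTPHook("10.32.0.1").Name) }

Deleting the pod then polls until it disappears, which is the "Waiting for pod ... to disappear" loop in the log; the longer namespace teardown (22s here vs ~6s elsewhere) reflects the extra handler pod being cleaned up as well.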
SSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 21 15:10:39.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 21 15:10:39.378: INFO: Waiting up to 5m0s for pod "downward-api-75e80611-8817-459f-a03d-c9e5ab4c26bb" in namespace "downward-api-6762" to be "success or failure"
Dec 21 15:10:39.392: INFO: Pod "downward-api-75e80611-8817-459f-a03d-c9e5ab4c26bb": Phase="Pending", Reason="", readiness=false. Elapsed: 14.402716ms
Dec 21 15:10:41.406: INFO: Pod "downward-api-75e80611-8817-459f-a03d-c9e5ab4c26bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028301575s
Dec 21 15:10:43.414: INFO: Pod "downward-api-75e80611-8817-459f-a03d-c9e5ab4c26bb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036675479s
Dec 21 15:10:45.423: INFO: Pod "downward-api-75e80611-8817-459f-a03d-c9e5ab4c26bb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045552736s
Dec 21 15:10:47.431: INFO: Pod "downward-api-75e80611-8817-459f-a03d-c9e5ab4c26bb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052854312s
Dec 21 15:10:49.440: INFO: Pod "downward-api-75e80611-8817-459f-a03d-c9e5ab4c26bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.062518403s
STEP: Saw pod success
Dec 21 15:10:49.440: INFO: Pod "downward-api-75e80611-8817-459f-a03d-c9e5ab4c26bb" satisfied condition "success or failure"
Dec 21 15:10:49.445: INFO: Trying to get logs from node iruya-node pod downward-api-75e80611-8817-459f-a03d-c9e5ab4c26bb container dapi-container: 
STEP: delete the pod
Dec 21 15:10:49.532: INFO: Waiting for pod downward-api-75e80611-8817-459f-a03d-c9e5ab4c26bb to disappear
Dec 21 15:10:49.544: INFO: Pod downward-api-75e80611-8817-459f-a03d-c9e5ab4c26bb no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 21 15:10:49.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6762" for this suite.
Dec 21 15:10:55.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 15:10:55.802: INFO: namespace downward-api-6762 deletion completed in 6.249775443s

• [SLOW TEST:16.571 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
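This spec injects the pod's own UID into a container environment variable via the downward API, then asserts on the container's output. A minimal sketch of an equivalent pod follows; the env var name, command, and image are illustrative assumptions, while fieldPath "metadata.uid" is the documented downward API selector for the pod UID.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIPod sketches a pod whose container sees its own UID in $POD_UID.
func downwardAPIPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "downward-api-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // assumed
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "POD_UID", // assumed name
					ValueFrom: &corev1.EnvVarSource{
						// Resolved by the kubelet to the pod's UID at container start.
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
					},
				}},
			}},
		},
	}
}

func main() { fmt.Println(downwardAPIPod().Spec.Containers[0].Env[0].Name) }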
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Dec 21 15:10:55.803: INFO: Running AfterSuite actions on all nodes
Dec 21 15:10:55.803: INFO: Running AfterSuite actions on node 1
Dec 21 15:10:55.803: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 8087.031 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS