I1222 12:56:12.000775 8 e2e.go:243] Starting e2e run "7a2bb7a1-b7f7-44e5-a2e3-2b4959765b28" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1577019370 - Will randomize all specs
Will run 215 of 4412 specs

Dec 22 12:56:12.317: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 12:56:12.319: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 22 12:56:12.343: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 22 12:56:12.368: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 22 12:56:12.368: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 22 12:56:12.368: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 22 12:56:12.375: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 22 12:56:12.376: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Dec 22 12:56:12.376: INFO: e2e test version: v1.15.7
Dec 22 12:56:12.377: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 12:56:12.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
Dec 22 12:56:12.577: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 22 12:56:12.649: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 61.608534ms)
Dec 22 12:56:12.666: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.324544ms)
Dec 22 12:56:12.709: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 42.918683ms)
Dec 22 12:56:12.719: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.635917ms)
Dec 22 12:56:12.726: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.301823ms)
Dec 22 12:56:12.732: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.008626ms)
Dec 22 12:56:12.739: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.412837ms)
Dec 22 12:56:12.745: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.95014ms)
Dec 22 12:56:12.749: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.079051ms)
Dec 22 12:56:12.757: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.040498ms)
Dec 22 12:56:12.762: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.714542ms)
Dec 22 12:56:12.767: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.239169ms)
Dec 22 12:56:12.773: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.794552ms)
Dec 22 12:56:12.777: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.888427ms)
Dec 22 12:56:12.781: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.102088ms)
Dec 22 12:56:12.785: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.477348ms)
Dec 22 12:56:12.790: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.833098ms)
Dec 22 12:56:12.793: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.879021ms)
Dec 22 12:56:12.797: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.563607ms)
Dec 22 12:56:12.801: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.824856ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 12:56:12.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-9804" for this suite.
Dec 22 12:56:18.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:56:18.960: INFO: namespace proxy-9804 deletion completed in 6.156097911s

• [SLOW TEST:6.584 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
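
Each numbered line above is one GET against the node proxy subresource, /api/v1/nodes/<node>:<port>/proxy/logs/. A minimal client-go sketch of the same request, assuming a client-go release contemporary with this v1.15 suite (newer releases add a context argument to DoRaw); the node name and kubeconfig path are taken from the log:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig the suite reports using.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /api/v1/nodes/iruya-node:10250/proxy/logs/ ; the "<name>:<port>"
	// form of the node name selects the explicit kubelet port being proxied to.
	body, err := cs.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name("iruya-node:10250").
		SubResource("proxy").
		Suffix("logs/").
		DoRaw()
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
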
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 12:56:18.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-2798b649-f35d-4c61-b4a5-6b9a84b7f9eb
STEP: Creating secret with name s-test-opt-upd-5c029714-8cee-4437-8f2e-e94b59f791a9
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-2798b649-f35d-4c61-b4a5-6b9a84b7f9eb
STEP: Updating secret s-test-opt-upd-5c029714-8cee-4437-8f2e-e94b59f791a9
STEP: Creating secret with name s-test-opt-create-1b2e8725-b0c8-4723-b255-663b8411c778
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 12:56:35.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-496" for this suite.
Dec 22 12:56:59.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:56:59.832: INFO: namespace projected-496 deletion completed in 24.205342038s

• [SLOW TEST:40.871 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
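
This spec relies on the Optional flag of a projected secret source, which is what lets the volume tolerate the delete/update/create sequence above. A minimal sketch of that pod shape; the image and command are illustrative, only the secret name comes from the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "secret-volume-test",
				Image: "busybox",
				// Keep re-reading the volume so updates can be observed.
				Command: []string{"sh", "-c", "while true; do cat /etc/projected-secret-volume/* 2>/dev/null; sleep 1; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-del-2798b649-f35d-4c61-b4a5-6b9a84b7f9eb"},
								// Optional: the pod keeps running even if this
								// secret is deleted or created after startup.
								Optional: boolPtr(true),
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
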
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 12:56:59.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-bjxn
STEP: Creating a pod to test atomic-volume-subpath
Dec 22 12:57:00.009: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-bjxn" in namespace "subpath-7955" to be "success or failure"
Dec 22 12:57:00.028: INFO: Pod "pod-subpath-test-configmap-bjxn": Phase="Pending", Reason="", readiness=false. Elapsed: 19.599858ms
Dec 22 12:57:02.040: INFO: Pod "pod-subpath-test-configmap-bjxn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030854334s
Dec 22 12:57:04.061: INFO: Pod "pod-subpath-test-configmap-bjxn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052092774s
Dec 22 12:57:06.073: INFO: Pod "pod-subpath-test-configmap-bjxn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064078951s
Dec 22 12:57:08.080: INFO: Pod "pod-subpath-test-configmap-bjxn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.07062104s
Dec 22 12:57:10.085: INFO: Pod "pod-subpath-test-configmap-bjxn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.075806485s
Dec 22 12:57:12.094: INFO: Pod "pod-subpath-test-configmap-bjxn": Phase="Running", Reason="", readiness=true. Elapsed: 12.084875064s
Dec 22 12:57:14.102: INFO: Pod "pod-subpath-test-configmap-bjxn": Phase="Running", Reason="", readiness=true. Elapsed: 14.092730831s
Dec 22 12:57:16.109: INFO: Pod "pod-subpath-test-configmap-bjxn": Phase="Running", Reason="", readiness=true. Elapsed: 16.100411063s
Dec 22 12:57:18.118: INFO: Pod "pod-subpath-test-configmap-bjxn": Phase="Running", Reason="", readiness=true. Elapsed: 18.109228651s
Dec 22 12:57:20.131: INFO: Pod "pod-subpath-test-configmap-bjxn": Phase="Running", Reason="", readiness=true. Elapsed: 20.121893984s
Dec 22 12:57:25.423: INFO: Pod "pod-subpath-test-configmap-bjxn": Phase="Running", Reason="", readiness=true. Elapsed: 25.414495028s
Dec 22 12:57:27.431: INFO: Pod "pod-subpath-test-configmap-bjxn": Phase="Running", Reason="", readiness=true. Elapsed: 27.422407063s
Dec 22 12:57:29.440: INFO: Pod "pod-subpath-test-configmap-bjxn": Phase="Running", Reason="", readiness=true. Elapsed: 29.43091811s
Dec 22 12:57:31.472: INFO: Pod "pod-subpath-test-configmap-bjxn": Phase="Running", Reason="", readiness=true. Elapsed: 31.463599561s
Dec 22 12:57:33.646: INFO: Pod "pod-subpath-test-configmap-bjxn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.637555239s
STEP: Saw pod success
Dec 22 12:57:33.647: INFO: Pod "pod-subpath-test-configmap-bjxn" satisfied condition "success or failure"
Dec 22 12:57:33.651: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-bjxn container test-container-subpath-configmap-bjxn: 
STEP: delete the pod
Dec 22 12:57:33.773: INFO: Waiting for pod pod-subpath-test-configmap-bjxn to disappear
Dec 22 12:57:33.780: INFO: Pod pod-subpath-test-configmap-bjxn no longer exists
STEP: Deleting pod pod-subpath-test-configmap-bjxn
Dec 22 12:57:33.780: INFO: Deleting pod "pod-subpath-test-configmap-bjxn" in namespace "subpath-7955"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 12:57:33.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7955" for this suite.
Dec 22 12:57:39.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:57:39.973: INFO: namespace subpath-7955 deletion completed in 6.181633578s

• [SLOW TEST:40.141 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
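
A rough sketch of the subPath shape this spec exercises, assuming illustrative names, image, and target path: a single configMap key is mounted over a path that already exists as a file in the image, shadowing it for that container only:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/hostname && sleep 30"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/etc/hostname", // an existing file in the image
					SubPath:   "configmap-key", // one key out of the configMap volume
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
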
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 12:57:39.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 22 12:57:40.168: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bc12ce02-8fac-4d6c-91d4-6dc4a7d2aa53" in namespace "downward-api-444" to be "success or failure"
Dec 22 12:57:40.220: INFO: Pod "downwardapi-volume-bc12ce02-8fac-4d6c-91d4-6dc4a7d2aa53": Phase="Pending", Reason="", readiness=false. Elapsed: 51.693487ms
Dec 22 12:57:42.229: INFO: Pod "downwardapi-volume-bc12ce02-8fac-4d6c-91d4-6dc4a7d2aa53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060125749s
Dec 22 12:57:44.247: INFO: Pod "downwardapi-volume-bc12ce02-8fac-4d6c-91d4-6dc4a7d2aa53": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078112755s
Dec 22 12:57:46.255: INFO: Pod "downwardapi-volume-bc12ce02-8fac-4d6c-91d4-6dc4a7d2aa53": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08619507s
Dec 22 12:57:48.264: INFO: Pod "downwardapi-volume-bc12ce02-8fac-4d6c-91d4-6dc4a7d2aa53": Phase="Pending", Reason="", readiness=false. Elapsed: 8.095763235s
Dec 22 12:57:50.273: INFO: Pod "downwardapi-volume-bc12ce02-8fac-4d6c-91d4-6dc4a7d2aa53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.104820429s
STEP: Saw pod success
Dec 22 12:57:50.273: INFO: Pod "downwardapi-volume-bc12ce02-8fac-4d6c-91d4-6dc4a7d2aa53" satisfied condition "success or failure"
Dec 22 12:57:50.278: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-bc12ce02-8fac-4d6c-91d4-6dc4a7d2aa53 container client-container: 
STEP: delete the pod
Dec 22 12:57:50.381: INFO: Waiting for pod downwardapi-volume-bc12ce02-8fac-4d6c-91d4-6dc4a7d2aa53 to disappear
Dec 22 12:57:50.392: INFO: Pod downwardapi-volume-bc12ce02-8fac-4d6c-91d4-6dc4a7d2aa53 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 12:57:50.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-444" for this suite.
Dec 22 12:57:56.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:57:56.517: INFO: namespace downward-api-444 deletion completed in 6.116340589s

• [SLOW TEST:16.544 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
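
A minimal sketch of the downward-API shape under test, with illustrative names and request value: the container's own memory request is projected as a file inside the volume, which the container then reads back:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-test"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_request", // file contents: the request, in bytes
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
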
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 12:57:56.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 22 12:57:56.660: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f8e06bc7-eaff-4ddb-adb3-a1f26a1d2cdb" in namespace "projected-4522" to be "success or failure"
Dec 22 12:57:56.675: INFO: Pod "downwardapi-volume-f8e06bc7-eaff-4ddb-adb3-a1f26a1d2cdb": Phase="Pending", Reason="", readiness=false. Elapsed: 15.022015ms
Dec 22 12:57:58.682: INFO: Pod "downwardapi-volume-f8e06bc7-eaff-4ddb-adb3-a1f26a1d2cdb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02213503s
Dec 22 12:58:00.694: INFO: Pod "downwardapi-volume-f8e06bc7-eaff-4ddb-adb3-a1f26a1d2cdb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033655161s
Dec 22 12:58:02.700: INFO: Pod "downwardapi-volume-f8e06bc7-eaff-4ddb-adb3-a1f26a1d2cdb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040427426s
Dec 22 12:58:05.654: INFO: Pod "downwardapi-volume-f8e06bc7-eaff-4ddb-adb3-a1f26a1d2cdb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.994101265s
Dec 22 12:58:07.673: INFO: Pod "downwardapi-volume-f8e06bc7-eaff-4ddb-adb3-a1f26a1d2cdb": Phase="Pending", Reason="", readiness=false. Elapsed: 11.012899845s
Dec 22 12:58:09.686: INFO: Pod "downwardapi-volume-f8e06bc7-eaff-4ddb-adb3-a1f26a1d2cdb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.025938363s
STEP: Saw pod success
Dec 22 12:58:09.686: INFO: Pod "downwardapi-volume-f8e06bc7-eaff-4ddb-adb3-a1f26a1d2cdb" satisfied condition "success or failure"
Dec 22 12:58:09.696: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f8e06bc7-eaff-4ddb-adb3-a1f26a1d2cdb container client-container: 
STEP: delete the pod
Dec 22 12:58:09.922: INFO: Waiting for pod downwardapi-volume-f8e06bc7-eaff-4ddb-adb3-a1f26a1d2cdb to disappear
Dec 22 12:58:10.056: INFO: Pod downwardapi-volume-f8e06bc7-eaff-4ddb-adb3-a1f26a1d2cdb no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 12:58:10.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4522" for this suite.
Dec 22 12:58:16.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:58:16.253: INFO: namespace projected-4522 deletion completed in 6.160428696s

• [SLOW TEST:19.735 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
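
Structurally this spec differs from the previous one only in wrapping the same downward-API items in a projected volume source. A minimal sketch of that volume, names illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Same downward-API file as the previous sketch, but nested under
	// Projected.Sources rather than used as a volume source directly.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
							},
						}},
					},
				}},
			},
		},
	}
	fmt.Println(vol.Name)
}
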
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 12:58:16.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 22 12:58:16.389: INFO: Waiting up to 5m0s for pod "pod-9c677681-08ba-4864-9da3-dc4fbe5ae160" in namespace "emptydir-2338" to be "success or failure"
Dec 22 12:58:16.399: INFO: Pod "pod-9c677681-08ba-4864-9da3-dc4fbe5ae160": Phase="Pending", Reason="", readiness=false. Elapsed: 10.152654ms
Dec 22 12:58:18.407: INFO: Pod "pod-9c677681-08ba-4864-9da3-dc4fbe5ae160": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018155077s
Dec 22 12:58:20.494: INFO: Pod "pod-9c677681-08ba-4864-9da3-dc4fbe5ae160": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104649349s
Dec 22 12:58:22.504: INFO: Pod "pod-9c677681-08ba-4864-9da3-dc4fbe5ae160": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114259897s
Dec 22 12:58:24.513: INFO: Pod "pod-9c677681-08ba-4864-9da3-dc4fbe5ae160": Phase="Pending", Reason="", readiness=false. Elapsed: 8.124165788s
Dec 22 12:58:26.523: INFO: Pod "pod-9c677681-08ba-4864-9da3-dc4fbe5ae160": Phase="Pending", Reason="", readiness=false. Elapsed: 10.133310243s
Dec 22 12:58:28.538: INFO: Pod "pod-9c677681-08ba-4864-9da3-dc4fbe5ae160": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.148391657s
STEP: Saw pod success
Dec 22 12:58:28.538: INFO: Pod "pod-9c677681-08ba-4864-9da3-dc4fbe5ae160" satisfied condition "success or failure"
Dec 22 12:58:28.545: INFO: Trying to get logs from node iruya-node pod pod-9c677681-08ba-4864-9da3-dc4fbe5ae160 container test-container: 
STEP: delete the pod
Dec 22 12:58:28.849: INFO: Waiting for pod pod-9c677681-08ba-4864-9da3-dc4fbe5ae160 to disappear
Dec 22 12:58:28.935: INFO: Pod pod-9c677681-08ba-4864-9da3-dc4fbe5ae160 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 12:58:28.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2338" for this suite.
Dec 22 12:58:34.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:58:35.062: INFO: namespace emptydir-2338 deletion completed in 6.119262614s

• [SLOW TEST:18.809 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
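
A minimal sketch of the emptyDir shape under test (root user, file mode 0666, default medium, i.e. node-backed storage rather than tmpfs); names, image, and command are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0666"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Create a file mode 0666 and print its mode for verification.
				Command: []string{"sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "" (StorageMediumDefault) means node storage.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
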
SSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 12:58:35.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Dec 22 12:58:57.353: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4024 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 12:58:57.353: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 12:58:57.705: INFO: Exec stderr: ""
Dec 22 12:58:57.705: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4024 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 12:58:57.706: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 12:58:58.148: INFO: Exec stderr: ""
Dec 22 12:58:58.148: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4024 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 12:58:58.148: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 12:58:58.551: INFO: Exec stderr: ""
Dec 22 12:58:58.551: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4024 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 12:58:58.551: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 12:58:58.803: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Dec 22 12:58:58.803: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4024 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 12:58:58.803: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 12:58:59.039: INFO: Exec stderr: ""
Dec 22 12:58:59.039: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4024 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 12:58:59.039: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 12:58:59.294: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Dec 22 12:58:59.294: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4024 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 12:58:59.294: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 12:58:59.554: INFO: Exec stderr: ""
Dec 22 12:58:59.554: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4024 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 12:58:59.554: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 12:58:59.893: INFO: Exec stderr: ""
Dec 22 12:58:59.893: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4024 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 12:58:59.893: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 12:59:00.192: INFO: Exec stderr: ""
Dec 22 12:59:00.192: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4024 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 12:59:00.192: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 12:59:00.485: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 12:59:00.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-4024" for this suite.
Dec 22 12:59:52.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:59:52.666: INFO: namespace e2e-kubelet-etc-hosts-4024 deletion completed in 52.166974054s

• [SLOW TEST:77.604 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
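
The three verifications above follow from pod-spec shape rather than container contents: with hostNetwork unset the kubelet manages /etc/hosts in each container unless that container mounts its own file at that path, and with hostNetwork=true it leaves /etc/hosts alone. A sketch of the two shapes, with illustrative image, commands, and volume source:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// hostNetwork=true: the kubelet does NOT manage /etc/hosts in this pod.
	hostNetPod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-host-network-pod"},
		Spec: corev1.PodSpec{
			HostNetwork: true,
			Containers: []corev1.Container{{
				Name:    "busybox-1",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	// Regular pod: kubelet manages /etc/hosts, except in a container that
	// mounts its own volume there (the busybox-3 case above).
	regular := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "busybox-1", Image: "busybox", Command: []string{"sleep", "3600"}},
				{
					Name: "busybox-3", Image: "busybox", Command: []string{"sleep", "3600"},
					VolumeMounts: []corev1.VolumeMount{{Name: "hosts", MountPath: "/etc/hosts"}},
				},
			},
			Volumes: []corev1.Volume{{
				Name:         "hosts",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	fmt.Println(hostNetPod.Name, regular.Name)
}
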
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 12:59:52.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Dec 22 12:59:52.811: INFO: Waiting up to 5m0s for pod "client-containers-f87dfc08-d09a-473e-b90a-25896981af33" in namespace "containers-7837" to be "success or failure"
Dec 22 12:59:52.864: INFO: Pod "client-containers-f87dfc08-d09a-473e-b90a-25896981af33": Phase="Pending", Reason="", readiness=false. Elapsed: 52.401959ms
Dec 22 12:59:54.880: INFO: Pod "client-containers-f87dfc08-d09a-473e-b90a-25896981af33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068937937s
Dec 22 12:59:56.891: INFO: Pod "client-containers-f87dfc08-d09a-473e-b90a-25896981af33": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079904642s
Dec 22 12:59:58.906: INFO: Pod "client-containers-f87dfc08-d09a-473e-b90a-25896981af33": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094451318s
Dec 22 13:00:00.919: INFO: Pod "client-containers-f87dfc08-d09a-473e-b90a-25896981af33": Phase="Pending", Reason="", readiness=false. Elapsed: 8.10727008s
Dec 22 13:00:02.978: INFO: Pod "client-containers-f87dfc08-d09a-473e-b90a-25896981af33": Phase="Pending", Reason="", readiness=false. Elapsed: 10.166159824s
Dec 22 13:00:04.997: INFO: Pod "client-containers-f87dfc08-d09a-473e-b90a-25896981af33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.185231934s
STEP: Saw pod success
Dec 22 13:00:04.997: INFO: Pod "client-containers-f87dfc08-d09a-473e-b90a-25896981af33" satisfied condition "success or failure"
Dec 22 13:00:05.005: INFO: Trying to get logs from node iruya-node pod client-containers-f87dfc08-d09a-473e-b90a-25896981af33 container test-container: 
STEP: delete the pod
Dec 22 13:00:05.112: INFO: Waiting for pod client-containers-f87dfc08-d09a-473e-b90a-25896981af33 to disappear
Dec 22 13:00:05.184: INFO: Pod client-containers-f87dfc08-d09a-473e-b90a-25896981af33 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:00:05.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7837" for this suite.
Dec 22 13:00:11.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:00:11.441: INFO: namespace containers-7837 deletion completed in 6.24798599s

• [SLOW TEST:18.774 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
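
A minimal sketch of the override under test: setting .spec.containers[].command replaces the image's ENTRYPOINT (args would likewise replace its CMD). Names and image are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-override"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Runs instead of whatever ENTRYPOINT the image declares.
				Command: []string{"/bin/echo", "override", "command"},
			}},
		},
	}
	fmt.Println(pod.Name)
}
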
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:00:11.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 22 13:00:21.692: INFO: Waiting up to 5m0s for pod "client-envvars-d89f505e-d6ae-46b1-9fa0-7e667ff9a7ac" in namespace "pods-2392" to be "success or failure"
Dec 22 13:00:21.714: INFO: Pod "client-envvars-d89f505e-d6ae-46b1-9fa0-7e667ff9a7ac": Phase="Pending", Reason="", readiness=false. Elapsed: 22.321946ms
Dec 22 13:00:23.721: INFO: Pod "client-envvars-d89f505e-d6ae-46b1-9fa0-7e667ff9a7ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029412276s
Dec 22 13:00:25.750: INFO: Pod "client-envvars-d89f505e-d6ae-46b1-9fa0-7e667ff9a7ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05786698s
Dec 22 13:00:27.757: INFO: Pod "client-envvars-d89f505e-d6ae-46b1-9fa0-7e667ff9a7ac": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065193407s
Dec 22 13:00:29.765: INFO: Pod "client-envvars-d89f505e-d6ae-46b1-9fa0-7e667ff9a7ac": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073325961s
Dec 22 13:00:31.780: INFO: Pod "client-envvars-d89f505e-d6ae-46b1-9fa0-7e667ff9a7ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.088483915s
STEP: Saw pod success
Dec 22 13:00:31.780: INFO: Pod "client-envvars-d89f505e-d6ae-46b1-9fa0-7e667ff9a7ac" satisfied condition "success or failure"
Dec 22 13:00:31.794: INFO: Trying to get logs from node iruya-node pod client-envvars-d89f505e-d6ae-46b1-9fa0-7e667ff9a7ac container env3cont: 
STEP: delete the pod
Dec 22 13:00:31.988: INFO: Waiting for pod client-envvars-d89f505e-d6ae-46b1-9fa0-7e667ff9a7ac to disappear
Dec 22 13:00:31.994: INFO: Pod client-envvars-d89f505e-d6ae-46b1-9fa0-7e667ff9a7ac no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:00:31.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2392" for this suite.
Dec 22 13:01:14.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:01:14.286: INFO: namespace pods-2392 deletion completed in 42.279315834s

• [SLOW TEST:62.845 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
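
The kubelet injects service environment variables only at container start, so the service here has to exist before the client pod is created. A sketch of the fixture shape, with illustrative names and ports; only the container name env3cont appears in the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Service created first, backed by some server pod (selector illustrative).
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "fooservice"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "server"},
			Ports: []corev1.ServicePort{{
				Port:       8765,
				TargetPort: intstr.FromInt(8080),
			}},
		},
	}
	// A pod created afterwards sees, e.g.:
	//   FOOSERVICE_SERVICE_HOST=<cluster IP>
	//   FOOSERVICE_SERVICE_PORT=8765
	client := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-envvars"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env3cont",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
			}},
		},
	}
	fmt.Println(svc.Name, client.Name)
}
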
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:01:14.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:01:26.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-308" for this suite.
Dec 22 13:02:18.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:02:18.706: INFO: namespace kubelet-test-308 deletion completed in 52.167505744s

• [SLOW TEST:64.420 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
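
The assertion in this spec is on the pod's stdout as surfaced through the kubelet. A minimal client-go sketch of fetching those logs, under the same version caveat as the earlier proxy sketch (DoRaw gains a context argument in newer client-go); the pod name is illustrative, the namespace comes from the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Read the busybox pod's stdout the way the assertion above would.
	body, err := cs.CoreV1().Pods("kubelet-test-308").
		GetLogs("busybox-scheduling", &corev1.PodLogOptions{}).
		DoRaw()
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
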
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:02:18.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-9fc9bd93-529e-44cc-84c0-cd5d84ad6a52
STEP: Creating a pod to test consume configMaps
Dec 22 13:02:18.872: INFO: Waiting up to 5m0s for pod "pod-configmaps-235a2b64-7423-4f70-afbb-07ec4fe37362" in namespace "configmap-1975" to be "success or failure"
Dec 22 13:02:19.007: INFO: Pod "pod-configmaps-235a2b64-7423-4f70-afbb-07ec4fe37362": Phase="Pending", Reason="", readiness=false. Elapsed: 135.287324ms
Dec 22 13:02:21.013: INFO: Pod "pod-configmaps-235a2b64-7423-4f70-afbb-07ec4fe37362": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140993735s
Dec 22 13:02:23.021: INFO: Pod "pod-configmaps-235a2b64-7423-4f70-afbb-07ec4fe37362": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149107897s
Dec 22 13:02:25.027: INFO: Pod "pod-configmaps-235a2b64-7423-4f70-afbb-07ec4fe37362": Phase="Pending", Reason="", readiness=false. Elapsed: 6.155310991s
Dec 22 13:02:27.071: INFO: Pod "pod-configmaps-235a2b64-7423-4f70-afbb-07ec4fe37362": Phase="Pending", Reason="", readiness=false. Elapsed: 8.198432335s
Dec 22 13:02:29.080: INFO: Pod "pod-configmaps-235a2b64-7423-4f70-afbb-07ec4fe37362": Phase="Pending", Reason="", readiness=false. Elapsed: 10.207581015s
Dec 22 13:02:31.085: INFO: Pod "pod-configmaps-235a2b64-7423-4f70-afbb-07ec4fe37362": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.213098937s
STEP: Saw pod success
Dec 22 13:02:31.085: INFO: Pod "pod-configmaps-235a2b64-7423-4f70-afbb-07ec4fe37362" satisfied condition "success or failure"
Dec 22 13:02:31.088: INFO: Trying to get logs from node iruya-node pod pod-configmaps-235a2b64-7423-4f70-afbb-07ec4fe37362 container configmap-volume-test: 
STEP: delete the pod
Dec 22 13:02:31.552: INFO: Waiting for pod pod-configmaps-235a2b64-7423-4f70-afbb-07ec4fe37362 to disappear
Dec 22 13:02:31.571: INFO: Pod pod-configmaps-235a2b64-7423-4f70-afbb-07ec4fe37362 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:02:31.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1975" for this suite.
Dec 22 13:02:37.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:02:38.055: INFO: namespace configmap-1975 deletion completed in 6.47559887s

• [SLOW TEST:19.349 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
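
"Mappings and Item mode" refers to the Items list of a configMap volume source: each entry remaps a key to a chosen path inside the volume and may pin that one file's permissions. A minimal sketch of the volume, names illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	vol := corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
				Items: []corev1.KeyToPath{{
					Key:  "data-1",          // key in the configMap
					Path: "path/to/data-2",  // remapped path inside the volume
					Mode: int32Ptr(0400),    // per-item file mode
				}},
			},
		},
	}
	fmt.Println(vol.Name)
}
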
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:02:38.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 22 13:02:38.122: INFO: Waiting up to 5m0s for pod "pod-fb1112a7-d1b9-48ad-94e9-9b4c455816fc" in namespace "emptydir-6157" to be "success or failure"
Dec 22 13:02:38.275: INFO: Pod "pod-fb1112a7-d1b9-48ad-94e9-9b4c455816fc": Phase="Pending", Reason="", readiness=false. Elapsed: 152.879103ms
Dec 22 13:02:40.282: INFO: Pod "pod-fb1112a7-d1b9-48ad-94e9-9b4c455816fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160523748s
Dec 22 13:02:42.290: INFO: Pod "pod-fb1112a7-d1b9-48ad-94e9-9b4c455816fc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.168197113s
Dec 22 13:02:44.296: INFO: Pod "pod-fb1112a7-d1b9-48ad-94e9-9b4c455816fc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.174554731s
Dec 22 13:02:46.304: INFO: Pod "pod-fb1112a7-d1b9-48ad-94e9-9b4c455816fc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.181899675s
Dec 22 13:02:48.563: INFO: Pod "pod-fb1112a7-d1b9-48ad-94e9-9b4c455816fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.441050342s
STEP: Saw pod success
Dec 22 13:02:48.563: INFO: Pod "pod-fb1112a7-d1b9-48ad-94e9-9b4c455816fc" satisfied condition "success or failure"
Dec 22 13:02:48.569: INFO: Trying to get logs from node iruya-node pod pod-fb1112a7-d1b9-48ad-94e9-9b4c455816fc container test-container: 
STEP: delete the pod
Dec 22 13:02:48.782: INFO: Waiting for pod pod-fb1112a7-d1b9-48ad-94e9-9b4c455816fc to disappear
Dec 22 13:02:48.802: INFO: Pod pod-fb1112a7-d1b9-48ad-94e9-9b4c455816fc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:02:48.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6157" for this suite.
Dec 22 13:02:54.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:02:54.990: INFO: namespace emptydir-6157 deletion completed in 6.177949687s

• [SLOW TEST:16.934 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:02:54.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-ee32ee52-5296-4b4b-abd3-f4534ce6d50b
STEP: Creating a pod to test consume secrets
Dec 22 13:02:55.259: INFO: Waiting up to 5m0s for pod "pod-secrets-f03a2f9a-1e0d-42c9-84d6-cb162f558602" in namespace "secrets-9228" to be "success or failure"
Dec 22 13:02:55.290: INFO: Pod "pod-secrets-f03a2f9a-1e0d-42c9-84d6-cb162f558602": Phase="Pending", Reason="", readiness=false. Elapsed: 30.747789ms
Dec 22 13:02:57.304: INFO: Pod "pod-secrets-f03a2f9a-1e0d-42c9-84d6-cb162f558602": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044712648s
Dec 22 13:02:59.312: INFO: Pod "pod-secrets-f03a2f9a-1e0d-42c9-84d6-cb162f558602": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052584758s
Dec 22 13:03:01.318: INFO: Pod "pod-secrets-f03a2f9a-1e0d-42c9-84d6-cb162f558602": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058193036s
Dec 22 13:03:03.327: INFO: Pod "pod-secrets-f03a2f9a-1e0d-42c9-84d6-cb162f558602": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068026425s
Dec 22 13:03:05.335: INFO: Pod "pod-secrets-f03a2f9a-1e0d-42c9-84d6-cb162f558602": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.07558767s
STEP: Saw pod success
Dec 22 13:03:05.335: INFO: Pod "pod-secrets-f03a2f9a-1e0d-42c9-84d6-cb162f558602" satisfied condition "success or failure"
Dec 22 13:03:05.340: INFO: Trying to get logs from node iruya-node pod pod-secrets-f03a2f9a-1e0d-42c9-84d6-cb162f558602 container secret-volume-test: 
STEP: delete the pod
Dec 22 13:03:05.441: INFO: Waiting for pod pod-secrets-f03a2f9a-1e0d-42c9-84d6-cb162f558602 to disappear
Dec 22 13:03:05.458: INFO: Pod pod-secrets-f03a2f9a-1e0d-42c9-84d6-cb162f558602 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:03:05.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9228" for this suite.
Dec 22 13:03:11.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:03:11.718: INFO: namespace secrets-9228 deletion completed in 6.253881381s
STEP: Destroying namespace "secret-namespace-7949" for this suite.
Dec 22 13:03:17.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:03:17.952: INFO: namespace secret-namespace-7949 deletion completed in 6.234104239s

• [SLOW TEST:22.962 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
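
The point of this spec is namespace isolation of same-named secrets: the pod in secrets-9228 mounts its secret by name and must resolve the copy in its own namespace, untouched by the identically named secret in secret-namespace-7949. A sketch of the two fixtures; the secret name and contents are illustrative, the namespaces come from the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretIn builds the same-named secret for a given namespace.
func secretIn(ns, value string) *corev1.Secret {
	return &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Namespace: ns, Name: "secret-test"},
		StringData: map[string]string{"data-1": value},
	}
}

func main() {
	a := secretIn("secrets-9228", "value-1")           // the one the pod must see
	b := secretIn("secret-namespace-7949", "value-2")  // same name, different namespace
	fmt.Println(a.Namespace, b.Namespace)
}
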
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:03:17.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 22 13:03:18.082: INFO: Waiting up to 5m0s for pod "downwardapi-volume-31b6be78-aba8-4ec2-8e13-3a77b5bfacfb" in namespace "downward-api-3849" to be "success or failure"
Dec 22 13:03:18.125: INFO: Pod "downwardapi-volume-31b6be78-aba8-4ec2-8e13-3a77b5bfacfb": Phase="Pending", Reason="", readiness=false. Elapsed: 43.054033ms
Dec 22 13:03:20.620: INFO: Pod "downwardapi-volume-31b6be78-aba8-4ec2-8e13-3a77b5bfacfb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.53856126s
Dec 22 13:03:22.635: INFO: Pod "downwardapi-volume-31b6be78-aba8-4ec2-8e13-3a77b5bfacfb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.553460108s
Dec 22 13:03:24.654: INFO: Pod "downwardapi-volume-31b6be78-aba8-4ec2-8e13-3a77b5bfacfb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.572149531s
Dec 22 13:03:26.667: INFO: Pod "downwardapi-volume-31b6be78-aba8-4ec2-8e13-3a77b5bfacfb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.585113965s
Dec 22 13:03:28.672: INFO: Pod "downwardapi-volume-31b6be78-aba8-4ec2-8e13-3a77b5bfacfb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.590055442s
Dec 22 13:03:30.677: INFO: Pod "downwardapi-volume-31b6be78-aba8-4ec2-8e13-3a77b5bfacfb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.595405799s
STEP: Saw pod success
Dec 22 13:03:30.677: INFO: Pod "downwardapi-volume-31b6be78-aba8-4ec2-8e13-3a77b5bfacfb" satisfied condition "success or failure"
Dec 22 13:03:30.680: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-31b6be78-aba8-4ec2-8e13-3a77b5bfacfb container client-container: 
STEP: delete the pod
Dec 22 13:03:31.025: INFO: Waiting for pod downwardapi-volume-31b6be78-aba8-4ec2-8e13-3a77b5bfacfb to disappear
Dec 22 13:03:31.033: INFO: Pod downwardapi-volume-31b6be78-aba8-4ec2-8e13-3a77b5bfacfb no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:03:31.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3849" for this suite.
Dec 22 13:03:37.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:03:37.300: INFO: namespace downward-api-3849 deletion completed in 6.25565923s

• [SLOW TEST:19.348 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:03:37.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-7dc2b900-d97f-45d4-8be0-37e09d73b551
STEP: Creating a pod to test consume configMaps
Dec 22 13:03:37.470: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c5c3e01b-68f0-4407-b4a2-04fecc383a9f" in namespace "projected-4534" to be "success or failure"
Dec 22 13:03:37.570: INFO: Pod "pod-projected-configmaps-c5c3e01b-68f0-4407-b4a2-04fecc383a9f": Phase="Pending", Reason="", readiness=false. Elapsed: 99.697211ms
Dec 22 13:03:39.578: INFO: Pod "pod-projected-configmaps-c5c3e01b-68f0-4407-b4a2-04fecc383a9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10729245s
Dec 22 13:03:41.600: INFO: Pod "pod-projected-configmaps-c5c3e01b-68f0-4407-b4a2-04fecc383a9f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129263212s
Dec 22 13:03:43.606: INFO: Pod "pod-projected-configmaps-c5c3e01b-68f0-4407-b4a2-04fecc383a9f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.135394893s
Dec 22 13:03:45.613: INFO: Pod "pod-projected-configmaps-c5c3e01b-68f0-4407-b4a2-04fecc383a9f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.142970987s
Dec 22 13:03:47.619: INFO: Pod "pod-projected-configmaps-c5c3e01b-68f0-4407-b4a2-04fecc383a9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.148333122s
STEP: Saw pod success
Dec 22 13:03:47.619: INFO: Pod "pod-projected-configmaps-c5c3e01b-68f0-4407-b4a2-04fecc383a9f" satisfied condition "success or failure"
Dec 22 13:03:47.623: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-c5c3e01b-68f0-4407-b4a2-04fecc383a9f container projected-configmap-volume-test: 
STEP: delete the pod
Dec 22 13:03:48.380: INFO: Waiting for pod pod-projected-configmaps-c5c3e01b-68f0-4407-b4a2-04fecc383a9f to disappear
Dec 22 13:03:48.385: INFO: Pod pod-projected-configmaps-c5c3e01b-68f0-4407-b4a2-04fecc383a9f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:03:48.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4534" for this suite.
Dec 22 13:03:54.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:03:54.592: INFO: namespace projected-4534 deletion completed in 6.201917176s

• [SLOW TEST:17.291 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:03:54.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 22 13:03:54.827: INFO: Waiting up to 5m0s for pod "pod-b8d9ae0e-0dc5-49e7-96d7-6523dc368a1b" in namespace "emptydir-5078" to be "success or failure"
Dec 22 13:03:54.836: INFO: Pod "pod-b8d9ae0e-0dc5-49e7-96d7-6523dc368a1b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.05209ms
Dec 22 13:03:56.841: INFO: Pod "pod-b8d9ae0e-0dc5-49e7-96d7-6523dc368a1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014047627s
Dec 22 13:03:58.851: INFO: Pod "pod-b8d9ae0e-0dc5-49e7-96d7-6523dc368a1b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024337853s
Dec 22 13:04:00.864: INFO: Pod "pod-b8d9ae0e-0dc5-49e7-96d7-6523dc368a1b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037602487s
Dec 22 13:04:02.877: INFO: Pod "pod-b8d9ae0e-0dc5-49e7-96d7-6523dc368a1b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050732889s
Dec 22 13:04:04.885: INFO: Pod "pod-b8d9ae0e-0dc5-49e7-96d7-6523dc368a1b": Phase="Running", Reason="", readiness=true. Elapsed: 10.0579374s
Dec 22 13:04:06.895: INFO: Pod "pod-b8d9ae0e-0dc5-49e7-96d7-6523dc368a1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.068441751s
STEP: Saw pod success
Dec 22 13:04:06.895: INFO: Pod "pod-b8d9ae0e-0dc5-49e7-96d7-6523dc368a1b" satisfied condition "success or failure"
Dec 22 13:04:06.903: INFO: Trying to get logs from node iruya-node pod pod-b8d9ae0e-0dc5-49e7-96d7-6523dc368a1b container test-container: 
STEP: delete the pod
Dec 22 13:04:06.965: INFO: Waiting for pod pod-b8d9ae0e-0dc5-49e7-96d7-6523dc368a1b to disappear
Dec 22 13:04:06.968: INFO: Pod pod-b8d9ae0e-0dc5-49e7-96d7-6523dc368a1b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:04:06.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5078" for this suite.
Dec 22 13:04:15.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:04:15.177: INFO: namespace emptydir-5078 deletion completed in 8.201910328s

• [SLOW TEST:20.585 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
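
The (non-root,0666,default) triple in the test name above maps onto three pod-spec fields: a non-root RunAsUser, a 0666 file mode asserted inside the volume, and an emptyDir backed by the node's default storage medium. A hedged sketch of such a pod; the UID, image, and command are illustrative stand-ins for the test's actual container.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildEmptyDirModePod runs a non-root container against an emptyDir volume
// on the node's default medium and prints the mode of a 0666 file it creates
// (hypothetical helper; UID, image, and paths are illustrative).
func buildEmptyDirModePod(ns string) *corev1.Pod {
	nonRootUID := int64(1001)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-emptydir-", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRootUID},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// StorageMediumDefault ("") leaves the backing storage to
					// the node, as opposed to StorageMediumMemory (tmpfs).
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "touch /mnt/f && chmod 0666 /mnt/f && stat -c %a /mnt/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt"}},
			}},
		},
	}
}
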
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:04:15.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 22 13:04:15.336: INFO: Creating deployment "test-recreate-deployment"
Dec 22 13:04:15.368: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Dec 22 13:04:15.376: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Dec 22 13:04:17.515: INFO: Waiting for deployment "test-recreate-deployment" to complete
Dec 22 13:04:17.519: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 13:04:19.529: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 13:04:21.533: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 13:04:23.532: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 13:04:25.527: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 13:04:27.528: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Dec 22 13:04:27.576: INFO: Updating deployment test-recreate-deployment
Dec 22 13:04:27.576: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 22 13:04:28.294: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-4908,SelfLink:/apis/apps/v1/namespaces/deployment-4908/deployments/test-recreate-deployment,UID:3c2c135c-91ed-42f9-a013-c484fff08dcc,ResourceVersion:17636558,Generation:2,CreationTimestamp:2019-12-22 13:04:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-12-22 13:04:28 +0000 UTC 2019-12-22 13:04:28 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-22 13:04:28 +0000 UTC 2019-12-22 13:04:15 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Dec 22 13:04:28.301: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-4908,SelfLink:/apis/apps/v1/namespaces/deployment-4908/replicasets/test-recreate-deployment-5c8c9cc69d,UID:8dc6fca4-554b-4fb4-a0bd-f59a0eb31c44,ResourceVersion:17636556,Generation:1,CreationTimestamp:2019-12-22 13:04:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 3c2c135c-91ed-42f9-a013-c484fff08dcc 0xc001c76017 0xc001c76018}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 22 13:04:28.301: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Dec 22 13:04:28.301: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-4908,SelfLink:/apis/apps/v1/namespaces/deployment-4908/replicasets/test-recreate-deployment-6df85df6b9,UID:b968b077-2eb1-4e0c-91e4-d3c0c3ebee01,ResourceVersion:17636545,Generation:2,CreationTimestamp:2019-12-22 13:04:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 3c2c135c-91ed-42f9-a013-c484fff08dcc 0xc001c761a7 0xc001c761a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 22 13:04:28.551: INFO: Pod "test-recreate-deployment-5c8c9cc69d-lx9px" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-lx9px,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/test-recreate-deployment-5c8c9cc69d-lx9px,UID:54378101-80b5-4033-a29a-2daa5f524c25,ResourceVersion:17636557,Generation:0,CreationTimestamp:2019-12-22 13:04:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 8dc6fca4-554b-4fb4-a0bd-f59a0eb31c44 0xc001c76d07 0xc001c76d08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5qbnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5qbnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5qbnt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c76d80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c76da0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:04:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:04:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:04:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:04:27 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-22 13:04:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:04:28.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4908" for this suite.
Dec 22 13:04:36.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:04:36.732: INFO: namespace deployment-4908 deletion completed in 8.159579844s

• [SLOW TEST:21.554 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
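
The spec dumps above show what makes this test tick: Strategy.Type is Recreate, so the controller scales the old replica set (redis) down to zero before the new one (nginx) creates any pods, which is why the deployment reports UnavailableReplicas:1 mid-switch. A minimal sketch of that deployment shape, reusing the labels and image from the dump; everything else is an illustrative assumption.

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildRecreateDeployment mirrors the dumped spec above: one replica, label
// name=sample-pod-3, and the Recreate strategy, under which no new-template
// pod starts until every old pod is gone (hypothetical helper).
func buildRecreateDeployment(ns string) *appsv1.Deployment {
	replicas := int32(1)
	labels := map[string]string{"name": "sample-pod-3"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment", Namespace: ns},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// Recreate (vs. RollingUpdate) is the property under test.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
}
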
SSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:04:36.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 22 13:04:36.950: INFO: Pod name rollover-pod: Found 0 pods out of 1
Dec 22 13:04:41.960: INFO: Pod name rollover-pod: Found 1 pod out of 1
STEP: ensuring each pod is running
Dec 22 13:04:47.972: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Dec 22 13:04:50.069: INFO: Creating deployment "test-rollover-deployment"
Dec 22 13:04:50.163: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Dec 22 13:04:52.244: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Dec 22 13:04:52.256: INFO: Ensure that both replica sets have 1 created replica
Dec 22 13:04:52.265: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Dec 22 13:04:52.283: INFO: Updating deployment test-rollover-deployment
Dec 22 13:04:52.283: INFO: Wait for deployment "test-rollover-deployment" to be observed by the deployment controller
Dec 22 13:04:54.301: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Dec 22 13:04:54.308: INFO: Make sure deployment "test-rollover-deployment" is complete
Dec 22 13:04:54.315: INFO: all replica sets need to contain the pod-template-hash label
Dec 22 13:04:54.315: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616692, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 13:04:56.332: INFO: all replica sets need to contain the pod-template-hash label
Dec 22 13:04:56.332: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616692, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 13:04:58.337: INFO: all replica sets need to contain the pod-template-hash label
Dec 22 13:04:58.337: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616692, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 13:05:00.331: INFO: all replica sets need to contain the pod-template-hash label
Dec 22 13:05:00.331: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616692, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 13:05:02.325: INFO: all replica sets need to contain the pod-template-hash label
Dec 22 13:05:02.325: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616692, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 13:05:04.344: INFO: all replica sets need to contain the pod-template-hash label
Dec 22 13:05:04.344: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616692, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 13:05:06.398: INFO: all replica sets need to contain the pod-template-hash label
Dec 22 13:05:06.398: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616706, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 13:05:08.325: INFO: all replica sets need to contain the pod-template-hash label
Dec 22 13:05:08.325: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616706, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 13:05:10.330: INFO: all replica sets need to contain the pod-template-hash label
Dec 22 13:05:10.330: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616706, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 13:05:12.325: INFO: all replica sets need to contain the pod-template-hash label
Dec 22 13:05:12.326: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616706, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 13:05:14.341: INFO: all replica sets need to contain the pod-template-hash label
Dec 22 13:05:14.341: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616706, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 13:05:16.410: INFO: 
Dec 22 13:05:16.410: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616716, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 13:05:18.353: INFO: 
Dec 22 13:05:18.353: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 22 13:05:18.372: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-5284,SelfLink:/apis/apps/v1/namespaces/deployment-5284/deployments/test-rollover-deployment,UID:21839bb1-c01b-4758-8d6a-797f4471a679,ResourceVersion:17636718,Generation:2,CreationTimestamp:2019-12-22 13:04:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-22 13:04:50 +0000 UTC 2019-12-22 13:04:50 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-22 13:05:16 +0000 UTC 2019-12-22 13:04:50 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 22 13:05:18.377: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-5284,SelfLink:/apis/apps/v1/namespaces/deployment-5284/replicasets/test-rollover-deployment-854595fc44,UID:559a876b-8d03-4657-a28e-fcba58823508,ResourceVersion:17636706,Generation:2,CreationTimestamp:2019-12-22 13:04:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 21839bb1-c01b-4758-8d6a-797f4471a679 0xc002964547 0xc002964548}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 22 13:05:18.377: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Dec 22 13:05:18.377: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-5284,SelfLink:/apis/apps/v1/namespaces/deployment-5284/replicasets/test-rollover-controller,UID:d9f85c99-753e-49c1-bf80-8c14647e58b5,ResourceVersion:17636717,Generation:2,CreationTimestamp:2019-12-22 13:04:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 21839bb1-c01b-4758-8d6a-797f4471a679 0xc00296445f 0xc002964470}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 22 13:05:18.377: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-5284,SelfLink:/apis/apps/v1/namespaces/deployment-5284/replicasets/test-rollover-deployment-9b8b997cf,UID:8119d1e5-05c0-4ff9-8dc9-57a2d9a9aafe,ResourceVersion:17636665,Generation:2,CreationTimestamp:2019-12-22 13:04:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 21839bb1-c01b-4758-8d6a-797f4471a679 0xc002964610 0xc002964611}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 22 13:05:18.382: INFO: Pod "test-rollover-deployment-854595fc44-mlsqb" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-mlsqb,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-5284,SelfLink:/api/v1/namespaces/deployment-5284/pods/test-rollover-deployment-854595fc44-mlsqb,UID:acbf4f92-745e-4e9a-b86d-46bb9f05e314,ResourceVersion:17636691,Generation:0,CreationTimestamp:2019-12-22 13:04:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 559a876b-8d03-4657-a28e-fcba58823508 0xc002965247 0xc002965248}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pgbjq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pgbjq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-pgbjq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0029652c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0029652e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:04:53 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:05:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:05:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:04:52 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-22 13:04:53 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-22 13:05:04 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://e1493aa9b3f20b7420a446cce6289a60a1c14caf4c33715b5e2a997d28756114}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:05:18.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5284" for this suite.
Dec 22 13:05:26.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:05:26.671: INFO: namespace deployment-5284 deletion completed in 8.28439563s

• [SLOW TEST:49.938 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
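
The rollover behaviour above hinges on three fields visible in the dumped spec: MinReadySeconds:10, MaxUnavailable:0, and MaxSurge:1. With at most one surge pod and zero unavailability allowed, updating the template mid-rollout makes the controller abandon the stalled 9b8b997cf replica set and roll over to 854595fc44 without ever dropping the one ready pod. A sketch of just that strategy block; the surrounding deployment object is assumed.

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// applyRolloverStrategy sets the knobs the rollover test depends on
// (hypothetical helper; values copied from the spec dump above).
func applyRolloverStrategy(d *appsv1.Deployment) {
	maxUnavailable := intstr.FromInt(0) // never drop below the desired replica count
	maxSurge := intstr.FromInt(1)       // allow one extra pod during the rollout
	d.Spec.MinReadySeconds = 10         // a pod counts as available only after 10s of readiness
	d.Spec.Strategy = appsv1.DeploymentStrategy{
		Type: appsv1.RollingUpdateDeploymentStrategyType,
		RollingUpdate: &appsv1.RollingUpdateDeployment{
			MaxUnavailable: &maxUnavailable,
			MaxSurge:       &maxSurge,
		},
	}
}
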
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:05:26.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:05:26.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4130" for this suite.
Dec 22 13:05:32.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:05:33.083: INFO: namespace kubelet-test-4130 deletion completed in 6.151382028s

• [SLOW TEST:6.412 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
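
The point of the test above is that a pod whose container never comes up healthy must still be deletable. A hedged sketch of such a pod; image and command are illustrative assumptions, chosen so the container exits non-zero on every restart.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildAlwaysFailingPod returns a pod that crash-loops by construction, so a
// delete request exercises cleanup of a never-ready container (hypothetical
// helper; the default RestartPolicy Always keeps it failing forever).
func buildAlwaysFailingPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "bin-false-", Namespace: ns},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "busybox",
				Command: []string{"/bin/false"}, // always exits 1
			}},
		},
	}
}
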
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:05:33.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:05:45.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-4096" for this suite.
Dec 22 13:05:51.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:05:51.518: INFO: namespace emptydir-wrapper-4096 deletion completed in 6.171275492s

• [SLOW TEST:18.434 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
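
Secret and ConfigMap volumes are both materialised on the node inside emptyDir "wrapper" volumes, and this test checks that mounting one of each in the same pod does not conflict (the STEP lines above show a secret, a configmap, and the pod being cleaned up). A minimal sketch of such a pod, assuming illustrative names, image, and mount paths.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildWrapperVolumesPod mounts a Secret volume and a ConfigMap volume side
// by side so their emptyDir-backed wrappers must coexist (hypothetical helper).
func buildWrapperVolumesPod(ns, secretName, cmName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-wrapper-volumes-", Namespace: ns},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "wrapper-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls /etc/secret-volume /etc/configmap-volume && sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true},
					{Name: "configmap-volume", MountPath: "/etc/configmap-volume", ReadOnly: true},
				},
			}},
			Volumes: []corev1.Volume{
				{Name: "secret-volume", VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: secretName},
				}},
				{Name: "configmap-volume", VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
					},
				}},
			},
		},
	}
}
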
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:05:51.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3827.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3827.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 22 13:06:09.660: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3827/dns-test-d569ad27-e719-4e03-a183-0a1deb9a8b03: the server could not find the requested resource (get pods dns-test-d569ad27-e719-4e03-a183-0a1deb9a8b03)
Dec 22 13:06:09.666: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-3827/dns-test-d569ad27-e719-4e03-a183-0a1deb9a8b03: the server could not find the requested resource (get pods dns-test-d569ad27-e719-4e03-a183-0a1deb9a8b03)
Dec 22 13:06:09.670: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-3827/dns-test-d569ad27-e719-4e03-a183-0a1deb9a8b03: the server could not find the requested resource (get pods dns-test-d569ad27-e719-4e03-a183-0a1deb9a8b03)
Dec 22 13:06:09.675: INFO: Unable to read jessie_udp@PodARecord from pod dns-3827/dns-test-d569ad27-e719-4e03-a183-0a1deb9a8b03: the server could not find the requested resource (get pods dns-test-d569ad27-e719-4e03-a183-0a1deb9a8b03)
Dec 22 13:06:09.679: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3827/dns-test-d569ad27-e719-4e03-a183-0a1deb9a8b03: the server could not find the requested resource (get pods dns-test-d569ad27-e719-4e03-a183-0a1deb9a8b03)
Dec 22 13:06:09.679: INFO: Lookups using dns-3827/dns-test-d569ad27-e719-4e03-a183-0a1deb9a8b03 failed for: [wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 22 13:06:14.770: INFO: DNS probes using dns-3827/dns-test-d569ad27-e719-4e03-a183-0a1deb9a8b03 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:06:14.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3827" for this suite.
Dec 22 13:06:21.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:06:21.175: INFO: namespace dns-3827 deletion completed in 6.221331924s

• [SLOW TEST:29.656 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
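The probe loops above boil down to repeated dig lookups. A hand-run version of the two cluster-service checks, from any pod whose image ships dig (the pod name is a placeholder):

kubectl exec <dig-capable-pod> -- dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A
kubectl exec <dig-capable-pod> -- dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A

A non-empty answer section is what makes the probe write OK to /results; the transient "Unable to read" lines above are just the test polling before the results land, and the run converges to "DNS probes ... succeeded".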
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:06:21.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 22 13:06:21.412: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a568373-14d0-42af-a5c0-702f4af47e98" in namespace "projected-2753" to be "success or failure"
Dec 22 13:06:21.446: INFO: Pod "downwardapi-volume-8a568373-14d0-42af-a5c0-702f4af47e98": Phase="Pending", Reason="", readiness=false. Elapsed: 33.920173ms
Dec 22 13:06:23.454: INFO: Pod "downwardapi-volume-8a568373-14d0-42af-a5c0-702f4af47e98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04212416s
Dec 22 13:06:25.467: INFO: Pod "downwardapi-volume-8a568373-14d0-42af-a5c0-702f4af47e98": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055391637s
Dec 22 13:06:27.475: INFO: Pod "downwardapi-volume-8a568373-14d0-42af-a5c0-702f4af47e98": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063395145s
Dec 22 13:06:29.484: INFO: Pod "downwardapi-volume-8a568373-14d0-42af-a5c0-702f4af47e98": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072457748s
Dec 22 13:06:31.490: INFO: Pod "downwardapi-volume-8a568373-14d0-42af-a5c0-702f4af47e98": Phase="Pending", Reason="", readiness=false. Elapsed: 10.07826614s
Dec 22 13:06:33.500: INFO: Pod "downwardapi-volume-8a568373-14d0-42af-a5c0-702f4af47e98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.088823648s
STEP: Saw pod success
Dec 22 13:06:33.501: INFO: Pod "downwardapi-volume-8a568373-14d0-42af-a5c0-702f4af47e98" satisfied condition "success or failure"
Dec 22 13:06:33.505: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8a568373-14d0-42af-a5c0-702f4af47e98 container client-container:
<nil>
STEP: delete the pod
Dec 22 13:06:33.640: INFO: Waiting for pod downwardapi-volume-8a568373-14d0-42af-a5c0-702f4af47e98 to disappear
Dec 22 13:06:33.645: INFO: Pod downwardapi-volume-8a568373-14d0-42af-a5c0-702f4af47e98 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:06:33.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2753" for this suite.
Dec 22 13:06:39.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:06:39.853: INFO: namespace projected-2753 deletion completed in 6.191020239s

• [SLOW TEST:18.678 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
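A sketch of the spec shape behind "Creating a pod to test downward API volume plugin": the container's own memory limit projected into a file it then cats. Names and the 64Mi limit are illustrative; the container name mirrors the log's client-container.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF

The container exits 0 and the pod lands in Phase=Succeeded, which is the "success or failure" condition the framework polls above.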
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:06:39.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-e695e822-9cf9-4d2a-923e-c2e065566b2b in namespace container-probe-271
Dec 22 13:06:47.997: INFO: Started pod busybox-e695e822-9cf9-4d2a-923e-c2e065566b2b in namespace container-probe-271
STEP: checking the pod's current state and verifying that restartCount is present
Dec 22 13:06:48.000: INFO: Initial restart count of pod busybox-e695e822-9cf9-4d2a-923e-c2e065566b2b is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:10:49.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-271" for this suite.
Dec 22 13:10:55.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:10:56.032: INFO: namespace container-probe-271 deletion completed in 6.284171866s

• [SLOW TEST:256.178 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
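The pattern this test relies on, as a standalone sketch with illustrative names and timings: the probed file is created once and never removed, so the exec probe keeps succeeding and restartCount stays 0 for the whole observation window.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-stays-up-demo
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF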
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:10:56.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-26c32882-3a57-4521-a139-4235959664e8 in namespace container-probe-5751
Dec 22 13:11:08.116: INFO: Started pod busybox-26c32882-3a57-4521-a139-4235959664e8 in namespace container-probe-5751
STEP: checking the pod's current state and verifying that restartCount is present
Dec 22 13:11:08.121: INFO: Initial restart count of pod busybox-26c32882-3a57-4521-a139-4235959664e8 is 0
Dec 22 13:11:54.344: INFO: Restart count of pod container-probe-5751/busybox-26c32882-3a57-4521-a139-4235959664e8 is now 1 (46.223173408s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:11:54.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5751" for this suite.
Dec 22 13:12:00.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:12:00.603: INFO: namespace container-probe-5751 deletion completed in 6.224255204s

• [SLOW TEST:64.569 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
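The counterpart to the previous sketch: here the container removes /tmp/health after a few seconds, the exec probe starts failing, and the kubelet restarts the container, matching the restartCount 0 to 1 transition logged above. Timings are illustrative.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-restarts-demo
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF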
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:12:00.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Dec 22 13:12:00.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3884 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Dec 22 13:12:12.023: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Dec 22 13:12:12.023: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:12:14.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3884" for this suite.
Dec 22 13:12:20.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:12:20.163: INFO: namespace kubectl-3884 deletion completed in 6.118592871s

• [SLOW TEST:19.559 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:12:20.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Dec 22 13:12:20.330: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-6445,SelfLink:/api/v1/namespaces/watch-6445/configmaps/e2e-watch-test-resource-version,UID:4aa876a6-8cae-4d09-aeff-7d3e7f7df99c,ResourceVersion:17637515,Generation:0,CreationTimestamp:2019-12-22 13:12:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 22 13:12:20.330: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-6445,SelfLink:/api/v1/namespaces/watch-6445/configmaps/e2e-watch-test-resource-version,UID:4aa876a6-8cae-4d09-aeff-7d3e7f7df99c,ResourceVersion:17637516,Generation:0,CreationTimestamp:2019-12-22 13:12:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:12:20.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6445" for this suite.
Dec 22 13:12:26.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:12:26.504: INFO: namespace watch-6445 deletion completed in 6.167190008s

• [SLOW TEST:6.342 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
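What the watch step does, approximated by hand through the raw API: request configmap events starting at the resourceVersion returned by the first update, and expect only the later MODIFIED (mutation: 2) and DELETED notifications, exactly the two logged above. The resourceVersion below is illustrative; use one returned by a previous write.

kubectl get --raw "/api/v1/namespaces/watch-6445/configmaps?watch=true&resourceVersion=17637514"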
SSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:12:26.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-784ccefb-143b-4b98-8d3a-60f64f12778e
STEP: Creating a pod to test consume secrets
Dec 22 13:12:26.643: INFO: Waiting up to 5m0s for pod "pod-secrets-10e26b29-072f-45cf-a7cf-e8da63b3fd18" in namespace "secrets-6786" to be "success or failure"
Dec 22 13:12:26.648: INFO: Pod "pod-secrets-10e26b29-072f-45cf-a7cf-e8da63b3fd18": Phase="Pending", Reason="", readiness=false. Elapsed: 4.713564ms
Dec 22 13:12:28.662: INFO: Pod "pod-secrets-10e26b29-072f-45cf-a7cf-e8da63b3fd18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01879535s
Dec 22 13:12:30.674: INFO: Pod "pod-secrets-10e26b29-072f-45cf-a7cf-e8da63b3fd18": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03129842s
Dec 22 13:12:32.680: INFO: Pod "pod-secrets-10e26b29-072f-45cf-a7cf-e8da63b3fd18": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037118764s
Dec 22 13:12:34.690: INFO: Pod "pod-secrets-10e26b29-072f-45cf-a7cf-e8da63b3fd18": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047293144s
Dec 22 13:12:36.698: INFO: Pod "pod-secrets-10e26b29-072f-45cf-a7cf-e8da63b3fd18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.055258743s
STEP: Saw pod success
Dec 22 13:12:36.698: INFO: Pod "pod-secrets-10e26b29-072f-45cf-a7cf-e8da63b3fd18" satisfied condition "success or failure"
Dec 22 13:12:36.701: INFO: Trying to get logs from node iruya-node pod pod-secrets-10e26b29-072f-45cf-a7cf-e8da63b3fd18 container secret-volume-test:
<nil>
STEP: delete the pod
Dec 22 13:12:36.822: INFO: Waiting for pod pod-secrets-10e26b29-072f-45cf-a7cf-e8da63b3fd18 to disappear
Dec 22 13:12:36.830: INFO: Pod pod-secrets-10e26b29-072f-45cf-a7cf-e8da63b3fd18 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:12:36.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6786" for this suite.
Dec 22 13:12:42.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:12:43.036: INFO: namespace secrets-6786 deletion completed in 6.199910321s

• [SLOW TEST:16.531 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:12:43.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:12:43.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8439" for this suite.
Dec 22 13:12:49.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:12:49.280: INFO: namespace services-8439 deletion completed in 6.145911432s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.243 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:12:49.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Dec 22 13:12:49.954: INFO: created pod pod-service-account-defaultsa
Dec 22 13:12:49.954: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Dec 22 13:12:49.976: INFO: created pod pod-service-account-mountsa
Dec 22 13:12:49.976: INFO: pod pod-service-account-mountsa service account token volume mount: true
Dec 22 13:12:50.010: INFO: created pod pod-service-account-nomountsa
Dec 22 13:12:50.010: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Dec 22 13:12:50.040: INFO: created pod pod-service-account-defaultsa-mountspec
Dec 22 13:12:50.040: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Dec 22 13:12:50.151: INFO: created pod pod-service-account-mountsa-mountspec
Dec 22 13:12:50.151: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Dec 22 13:12:50.223: INFO: created pod pod-service-account-nomountsa-mountspec
Dec 22 13:12:50.223: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Dec 22 13:12:50.333: INFO: created pod pod-service-account-defaultsa-nomountspec
Dec 22 13:12:50.333: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Dec 22 13:12:50.374: INFO: created pod pod-service-account-mountsa-nomountspec
Dec 22 13:12:50.374: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Dec 22 13:12:50.408: INFO: created pod pod-service-account-nomountsa-nomountspec
Dec 22 13:12:50.408: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:12:50.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5114" for this suite.
Dec 22 13:13:18.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:13:18.797: INFO: namespace svcaccounts-5114 deletion completed in 28.226131324s

• [SLOW TEST:29.517 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:13:18.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 22 13:13:18.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1295'
Dec 22 13:13:18.981: INFO: stderr: ""
Dec 22 13:13:18.981: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Dec 22 13:13:18.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-1295'
Dec 22 13:13:26.004: INFO: stderr: ""
Dec 22 13:13:26.005: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:13:26.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1295" for this suite.
Dec 22 13:13:32.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:13:32.216: INFO: namespace kubectl-1295 deletion completed in 6.201573048s

• [SLOW TEST:13.419 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:13:32.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 22 13:13:32.303: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:13:53.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4162" for this suite.
Dec 22 13:14:15.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:14:15.514: INFO: namespace init-container-4162 deletion completed in 22.129729549s

• [SLOW TEST:43.297 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:14:15.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-16542908-963b-4d08-95e1-80e3336b761c
STEP: Creating a pod to test consume configMaps
Dec 22 13:14:15.606: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d3e9cfe9-b8a7-4e8c-b764-bfeb45189cd3" in namespace "projected-7110" to be "success or failure"
Dec 22 13:14:15.671: INFO: Pod "pod-projected-configmaps-d3e9cfe9-b8a7-4e8c-b764-bfeb45189cd3": Phase="Pending", Reason="", readiness=false. Elapsed: 64.862635ms
Dec 22 13:14:17.677: INFO: Pod "pod-projected-configmaps-d3e9cfe9-b8a7-4e8c-b764-bfeb45189cd3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070840823s
Dec 22 13:14:19.714: INFO: Pod "pod-projected-configmaps-d3e9cfe9-b8a7-4e8c-b764-bfeb45189cd3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107841381s
Dec 22 13:14:21.828: INFO: Pod "pod-projected-configmaps-d3e9cfe9-b8a7-4e8c-b764-bfeb45189cd3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.221571794s
Dec 22 13:14:23.938: INFO: Pod "pod-projected-configmaps-d3e9cfe9-b8a7-4e8c-b764-bfeb45189cd3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.331296541s
Dec 22 13:14:25.947: INFO: Pod "pod-projected-configmaps-d3e9cfe9-b8a7-4e8c-b764-bfeb45189cd3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.340236757s
STEP: Saw pod success
Dec 22 13:14:25.947: INFO: Pod "pod-projected-configmaps-d3e9cfe9-b8a7-4e8c-b764-bfeb45189cd3" satisfied condition "success or failure"
Dec 22 13:14:25.950: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-d3e9cfe9-b8a7-4e8c-b764-bfeb45189cd3 container projected-configmap-volume-test:
<nil>
STEP: delete the pod
Dec 22 13:14:26.064: INFO: Waiting for pod pod-projected-configmaps-d3e9cfe9-b8a7-4e8c-b764-bfeb45189cd3 to disappear
Dec 22 13:14:26.098: INFO: Pod pod-projected-configmaps-d3e9cfe9-b8a7-4e8c-b764-bfeb45189cd3 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:14:26.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7110" for this suite.
Dec 22 13:14:32.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:14:32.237: INFO: namespace projected-7110 deletion completed in 6.135438166s

• [SLOW TEST:16.723 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:14:32.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-456
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-456
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-456
Dec 22 13:14:32.411: INFO: Found 0 stateful pods, waiting for 1
Dec 22 13:14:42.451: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Dec 22 13:14:42.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 22 13:14:43.340: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 22 13:14:43.340: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 22 13:14:43.340: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 22 13:14:43.349: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 22 13:14:53.355: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 22 13:14:53.356: INFO: Waiting for statefulset status.replicas updated to 0
Dec 22 13:14:53.377: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Dec 22 13:14:53.378: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC  }]
Dec 22 13:14:53.378: INFO: 
Dec 22 13:14:53.378: INFO: StatefulSet ss has not reached scale 3, at 1
Dec 22 13:14:54.967: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991404934s
Dec 22 13:14:56.693: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.402015312s
Dec 22 13:14:57.703: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.675790972s
Dec 22 13:14:58.728: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.664945956s
Dec 22 13:15:00.334: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.640685829s
Dec 22 13:15:01.841: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.034786249s
Dec 22 13:15:02.859: INFO: Verifying statefulset ss doesn't scale past 3 for another 528.214164ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-456
Dec 22 13:15:03.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 13:15:04.844: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 22 13:15:04.844: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 22 13:15:04.844: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 22 13:15:04.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 13:15:05.513: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Dec 22 13:15:05.513: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 22 13:15:05.513: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 22 13:15:05.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 13:15:05.899: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Dec 22 13:15:05.899: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 22 13:15:05.899: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 22 13:15:05.906: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 22 13:15:05.906: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=false
Dec 22 13:15:15.917: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 22 13:15:15.917: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 22 13:15:15.917: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Dec 22 13:15:15.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 22 13:15:16.449: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 22 13:15:16.449: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 22 13:15:16.449: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 22 13:15:16.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 22 13:15:16.812: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 22 13:15:16.812: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 22 13:15:16.812: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 22 13:15:16.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 22 13:15:17.431: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 22 13:15:17.431: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 22 13:15:17.431: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 22 13:15:17.431: INFO: Waiting for statefulset status.replicas updated to 0
Dec 22 13:15:17.447: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Dec 22 13:15:27.466: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 22 13:15:27.466: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 22 13:15:27.466: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 22 13:15:27.516: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 22 13:15:27.516: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC  }]
Dec 22 13:15:27.516: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  }]
Dec 22 13:15:27.516: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  }]
Dec 22 13:15:27.516: INFO: 
Dec 22 13:15:27.516: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 22 13:15:29.807: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 22 13:15:29.807: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC  }]
Dec 22 13:15:29.807: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  }]
Dec 22 13:15:29.807: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  }]
Dec 22 13:15:29.807: INFO: 
Dec 22 13:15:29.808: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 22 13:15:30.822: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 22 13:15:30.822: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC  }]
Dec 22 13:15:30.823: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  }]
Dec 22 13:15:30.823: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  }]
Dec 22 13:15:30.823: INFO: 
Dec 22 13:15:30.823: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 22 13:15:31.833: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 22 13:15:31.833: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC  }]
Dec 22 13:15:31.833: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  }]
Dec 22 13:15:31.833: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  }]
Dec 22 13:15:31.833: INFO: 
Dec 22 13:15:31.833: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 22 13:15:33.364: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 22 13:15:33.364: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC  }]
Dec 22 13:15:33.365: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  }]
Dec 22 13:15:33.365: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  }]
Dec 22 13:15:33.365: INFO: 
Dec 22 13:15:33.365: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 22 13:15:34.374: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 22 13:15:34.375: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC  }]
Dec 22 13:15:34.375: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  }]
Dec 22 13:15:34.375: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  }]
Dec 22 13:15:34.375: INFO: 
Dec 22 13:15:34.375: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 22 13:15:35.386: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 22 13:15:35.386: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC  }]
Dec 22 13:15:35.387: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  }]
Dec 22 13:15:35.387: INFO: ss-2  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  }]
Dec 22 13:15:35.387: INFO: 
Dec 22 13:15:35.387: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 22 13:15:36.433: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 22 13:15:36.434: INFO: ss-0  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC  }]
Dec 22 13:15:36.434: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  }]
Dec 22 13:15:36.434: INFO: ss-2  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  }]
Dec 22 13:15:36.434: INFO: 
Dec 22 13:15:36.434: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 22 13:15:37.443: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 22 13:15:37.443: INFO: ss-0  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC  }]
Dec 22 13:15:37.443: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  }]
Dec 22 13:15:37.443: INFO: ss-2  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC  }]
Dec 22 13:15:37.443: INFO: 
Dec 22 13:15:37.443: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of its pods are running in namespace statefulset-456
Dec 22 13:15:38.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 13:15:38.664: INFO: rc: 1
Dec 22 13:15:38.664: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001c34780 exit status 1   true [0xc000709c40 0xc000709ce0 0xc000709d48] [0xc000709c40 0xc000709ce0 0xc000709d48] [0xc000709cd8 0xc000709d18] [0xba6c50 0xba6c50] 0xc001f66540 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
Dec 22 13:15:48.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 13:15:48.804: INFO: rc: 1
Dec 22 13:15:48.804: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0024a4cc0 exit status 1   true [0xc00299c980 0xc00299c998 0xc00299c9b0] [0xc00299c980 0xc00299c998 0xc00299c9b0] [0xc00299c990 0xc00299c9a8] [0xba6c50 0xba6c50] 0xc001f35680 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
[28 further identical retry blocks omitted: the same RunHostCmd was re-run every 10s from 13:15:58 through 13:20:33, each attempt returning rc: 1 with stderr 'Error from server (NotFound): pods "ss-0" not found']
Dec 22 13:20:43.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 13:20:43.525: INFO: rc: 1
Dec 22 13:20:43.526: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Dec 22 13:20:43.526: INFO: Scaling statefulset ss to 0
Dec 22 13:20:43.550: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 22 13:20:43.554: INFO: Deleting all statefulset in ns statefulset-456
Dec 22 13:20:43.557: INFO: Scaling statefulset ss to 0
Dec 22 13:20:43.566: INFO: Waiting for statefulset status.replicas updated to 0
Dec 22 13:20:43.569: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:20:43.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-456" for this suite.
Dec 22 13:20:49.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:20:49.785: INFO: namespace statefulset-456 deletion completed in 6.166441309s

• [SLOW TEST:377.547 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
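The burst-scaling spec above scales a StatefulSet up and back down to 0 while its pods are deliberately unhealthy (the readiness probe fails once index.html is moved out of the nginx web root). A minimal Go sketch of the kind of object involved, with illustrative names; podManagementPolicy: Parallel is the setting that lets scaling proceed without waiting for each pod to become Ready:

// Hedged sketch, not the e2e framework's own fixture: a StatefulSet shaped like
// the "ss" object this test scales. Parallel pod management permits "burst"
// creation/deletion of replicas regardless of pod readiness.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(3)
	ss := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss", Namespace: "statefulset-456"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:            &replicas,
			ServiceName:         "test", // headless Service name; assumed here
			PodManagementPolicy: appsv1.ParallelPodManagement,
			Selector:            &metav1.LabelSelector{MatchLabels: map[string]string{"app": "ss"}},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "ss"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "nginx", Image: "nginx"}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ss, "", "  ")
	fmt.Println(string(out))
}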
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:20:49.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Dec 22 13:20:49.858: INFO: Waiting up to 5m0s for pod "client-containers-c8dd9aa5-31fe-475f-9657-3409fa7fe3ff" in namespace "containers-7634" to be "success or failure"
Dec 22 13:20:49.882: INFO: Pod "client-containers-c8dd9aa5-31fe-475f-9657-3409fa7fe3ff": Phase="Pending", Reason="", readiness=false. Elapsed: 23.788979ms
Dec 22 13:20:51.891: INFO: Pod "client-containers-c8dd9aa5-31fe-475f-9657-3409fa7fe3ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03274534s
Dec 22 13:20:53.901: INFO: Pod "client-containers-c8dd9aa5-31fe-475f-9657-3409fa7fe3ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042777135s
Dec 22 13:20:55.907: INFO: Pod "client-containers-c8dd9aa5-31fe-475f-9657-3409fa7fe3ff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049600131s
Dec 22 13:20:57.917: INFO: Pod "client-containers-c8dd9aa5-31fe-475f-9657-3409fa7fe3ff": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058768071s
Dec 22 13:20:59.926: INFO: Pod "client-containers-c8dd9aa5-31fe-475f-9657-3409fa7fe3ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068472988s
STEP: Saw pod success
Dec 22 13:20:59.926: INFO: Pod "client-containers-c8dd9aa5-31fe-475f-9657-3409fa7fe3ff" satisfied condition "success or failure"
Dec 22 13:20:59.931: INFO: Trying to get logs from node iruya-node pod client-containers-c8dd9aa5-31fe-475f-9657-3409fa7fe3ff container test-container: 
STEP: delete the pod
Dec 22 13:21:00.034: INFO: Waiting for pod client-containers-c8dd9aa5-31fe-475f-9657-3409fa7fe3ff to disappear
Dec 22 13:21:00.041: INFO: Pod client-containers-c8dd9aa5-31fe-475f-9657-3409fa7fe3ff no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:21:00.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7634" for this suite.
Dec 22 13:21:06.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:21:06.161: INFO: namespace containers-7634 deletion completed in 6.115935308s

• [SLOW TEST:16.376 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
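The assertion in this spec is the ENTRYPOINT/CMD override contract: in a container spec, command replaces the image's ENTRYPOINT and args replaces its CMD. A minimal sketch of a pod exercising the "override all" case, with illustrative names and image:

// Hedged sketch: overriding both ENTRYPOINT and CMD, the behaviour the
// conformance test above asserts.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers"}, // name assumed
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"/bin/echo"},       // overrides the image ENTRYPOINT
				Args:    []string{"override", "all"}, // overrides the image CMD
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}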
SSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:21:06.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-690d7fb7-623b-4042-adb6-ca60e181a8c5
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-690d7fb7-623b-4042-adb6-ca60e181a8c5
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:21:20.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-30" for this suite.
Dec 22 13:21:36.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:21:36.702: INFO: namespace projected-30 deletion completed in 16.161262005s

• [SLOW TEST:30.541 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
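What this spec checks is that the kubelet re-syncs a projected ConfigMap volume after the ConfigMap object is updated, so the mounted file eventually shows the new value. A minimal sketch of such a pod, under assumed names, data keys and mount paths:

// Hedged sketch of the volume shape this test exercises: a projected volume
// backed by a ConfigMap, read in a loop so an update can be observed.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-upd"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "projected-configmap-volume-test",
				Image: "busybox",
				// Re-read the projected file every 2s; after the ConfigMap is
				// updated, the kubelet rewrites the file on its sync interval.
				Command:      []string{"/bin/sh", "-c", "while true; do cat /etc/projected-configmap-volume/data-1; sleep 2; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected-configmap-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}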
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:21:36.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-c9a4a4bb-586f-43ac-aa3e-fd4669f4ba15
STEP: Creating a pod to test consume secrets
Dec 22 13:21:36.780: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e3ded85a-f884-4f71-85cc-43f05c7c2979" in namespace "projected-4160" to be "success or failure"
Dec 22 13:21:36.857: INFO: Pod "pod-projected-secrets-e3ded85a-f884-4f71-85cc-43f05c7c2979": Phase="Pending", Reason="", readiness=false. Elapsed: 77.189059ms
Dec 22 13:21:38.864: INFO: Pod "pod-projected-secrets-e3ded85a-f884-4f71-85cc-43f05c7c2979": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084405939s
Dec 22 13:21:40.881: INFO: Pod "pod-projected-secrets-e3ded85a-f884-4f71-85cc-43f05c7c2979": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101194867s
Dec 22 13:21:42.889: INFO: Pod "pod-projected-secrets-e3ded85a-f884-4f71-85cc-43f05c7c2979": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10952102s
Dec 22 13:21:44.902: INFO: Pod "pod-projected-secrets-e3ded85a-f884-4f71-85cc-43f05c7c2979": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.121949357s
STEP: Saw pod success
Dec 22 13:21:44.902: INFO: Pod "pod-projected-secrets-e3ded85a-f884-4f71-85cc-43f05c7c2979" satisfied condition "success or failure"
Dec 22 13:21:44.908: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-e3ded85a-f884-4f71-85cc-43f05c7c2979 container secret-volume-test: 
STEP: delete the pod
Dec 22 13:21:45.179: INFO: Waiting for pod pod-projected-secrets-e3ded85a-f884-4f71-85cc-43f05c7c2979 to disappear
Dec 22 13:21:45.188: INFO: Pod pod-projected-secrets-e3ded85a-f884-4f71-85cc-43f05c7c2979 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:21:45.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4160" for this suite.
Dec 22 13:21:51.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:21:51.437: INFO: namespace projected-4160 deletion completed in 6.228578498s

• [SLOW TEST:14.733 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
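Here a single Secret is consumed through more than one volume in the same pod. A sketch of just the two volume stanzas (names assumed); each projects the same Secret and would be given its own mountPath in the container:

// Hedged sketch: the same Secret consumed through two volumes in one pod,
// which is what "consumable in multiple volumes" asserts.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vols := []corev1.Volume{
		{
			Name: "projected-secret-volume",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						Secret: &corev1.SecretProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
						},
					}},
				},
			},
		},
		{
			// A second volume backed by the same Secret, mounted elsewhere.
			Name: "projected-secret-volume-2",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						Secret: &corev1.SecretProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
						},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(vols, "", "  ")
	fmt.Println(string(out))
}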
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:21:51.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-02f3fc93-70f7-44f9-be1d-93d35a37a31d
STEP: Creating a pod to test consume secrets
Dec 22 13:21:51.586: INFO: Waiting up to 5m0s for pod "pod-secrets-3234dcb6-6685-47fa-94e5-5f6880a9e658" in namespace "secrets-5835" to be "success or failure"
Dec 22 13:21:51.590: INFO: Pod "pod-secrets-3234dcb6-6685-47fa-94e5-5f6880a9e658": Phase="Pending", Reason="", readiness=false. Elapsed: 4.468067ms
Dec 22 13:21:53.606: INFO: Pod "pod-secrets-3234dcb6-6685-47fa-94e5-5f6880a9e658": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020575791s
Dec 22 13:21:55.613: INFO: Pod "pod-secrets-3234dcb6-6685-47fa-94e5-5f6880a9e658": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026764303s
Dec 22 13:21:57.619: INFO: Pod "pod-secrets-3234dcb6-6685-47fa-94e5-5f6880a9e658": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033524644s
Dec 22 13:21:59.628: INFO: Pod "pod-secrets-3234dcb6-6685-47fa-94e5-5f6880a9e658": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.042148906s
STEP: Saw pod success
Dec 22 13:21:59.628: INFO: Pod "pod-secrets-3234dcb6-6685-47fa-94e5-5f6880a9e658" satisfied condition "success or failure"
Dec 22 13:21:59.636: INFO: Trying to get logs from node iruya-node pod pod-secrets-3234dcb6-6685-47fa-94e5-5f6880a9e658 container secret-volume-test: 
STEP: delete the pod
Dec 22 13:21:59.731: INFO: Waiting for pod pod-secrets-3234dcb6-6685-47fa-94e5-5f6880a9e658 to disappear
Dec 22 13:21:59.741: INFO: Pod pod-secrets-3234dcb6-6685-47fa-94e5-5f6880a9e658 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:21:59.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5835" for this suite.
Dec 22 13:22:05.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:22:05.962: INFO: namespace secrets-5835 deletion completed in 6.204264721s

• [SLOW TEST:14.524 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
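"With mappings" refers to the items key-to-path list on the volume source: without it every Secret key becomes a file named after the key; with it only the listed keys are projected, at the paths given. A sketch under assumed names:

// Hedged sketch: a Secret volume with an explicit key-to-path mapping.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "secret-test-map", // illustrative name
				Items: []corev1.KeyToPath{{
					Key:  "data-1",
					Path: "new-path-data-1", // file appears at <mountPath>/new-path-data-1
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}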
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:22:05.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 22 13:22:06.087: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bcedcbc3-0d73-42bf-9433-89a5fe7a3720" in namespace "projected-8497" to be "success or failure"
Dec 22 13:22:06.094: INFO: Pod "downwardapi-volume-bcedcbc3-0d73-42bf-9433-89a5fe7a3720": Phase="Pending", Reason="", readiness=false. Elapsed: 6.873279ms
Dec 22 13:22:08.109: INFO: Pod "downwardapi-volume-bcedcbc3-0d73-42bf-9433-89a5fe7a3720": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021984328s
Dec 22 13:22:10.117: INFO: Pod "downwardapi-volume-bcedcbc3-0d73-42bf-9433-89a5fe7a3720": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030544098s
Dec 22 13:22:12.133: INFO: Pod "downwardapi-volume-bcedcbc3-0d73-42bf-9433-89a5fe7a3720": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046310949s
Dec 22 13:22:14.148: INFO: Pod "downwardapi-volume-bcedcbc3-0d73-42bf-9433-89a5fe7a3720": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061765999s
STEP: Saw pod success
Dec 22 13:22:14.149: INFO: Pod "downwardapi-volume-bcedcbc3-0d73-42bf-9433-89a5fe7a3720" satisfied condition "success or failure"
Dec 22 13:22:14.153: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-bcedcbc3-0d73-42bf-9433-89a5fe7a3720 container client-container: 
STEP: delete the pod
Dec 22 13:22:14.242: INFO: Waiting for pod downwardapi-volume-bcedcbc3-0d73-42bf-9433-89a5fe7a3720 to disappear
Dec 22 13:22:14.292: INFO: Pod downwardapi-volume-bcedcbc3-0d73-42bf-9433-89a5fe7a3720 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:22:14.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8497" for this suite.
Dec 22 13:22:20.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:22:20.463: INFO: namespace projected-8497 deletion completed in 6.16218438s

• [SLOW TEST:14.501 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
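This spec mounts a downwardAPI volume whose file points at limits.memory while the container declares no memory limit; the kubelet then substitutes the node's allocatable memory, which is what the test asserts (a cpu variant of the same idea appears later in this run). A sketch of such a volume item, with assumed names:

// Hedged sketch: a downwardAPI volume exposing a container resource limit.
// When the container sets no limit, node allocatable is reported instead.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	divisor := resource.MustParse("1Mi")
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "memory_limit",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container", // required for volume items
						Resource:      "limits.memory",
						Divisor:       divisor, // report the value in MiB
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}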
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:22:20.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W1222 13:22:25.273786       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 22 13:22:25.274: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:22:25.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7094" for this suite.
Dec 22 13:22:31.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:22:31.562: INFO: namespace gc-7094 deletion completed in 6.274024912s

• [SLOW TEST:11.099 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
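Deleting "when not orphaning" corresponds to a propagation policy other than Orphan, so the garbage collector also removes the ReplicaSet the Deployment created; the "expected 0 rs, got 1 rs" lines above are the test polling while that happens asynchronously. A client-go sketch of the call, with assumed kubeconfig path and object name:

// Hedged sketch: delete a Deployment with Background propagation so the GC
// collects its dependents. Paths and names here are assumptions.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Background propagation: the Deployment is removed immediately and the GC
	// then deletes the dependent ReplicaSet (and its Pods) asynchronously,
	// which is why the log above has to poll for 0 ReplicaSets.
	policy := metav1.DeletePropagationBackground
	err = clientset.AppsV1().Deployments("gc-7094").Delete(
		context.TODO(), "simpletest.deployment", // name is illustrative
		metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
	fmt.Println("deployment deleted; GC will collect its ReplicaSet")
}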
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:22:31.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 22 13:22:40.255: INFO: Successfully updated pod "pod-update-2097fc5a-a618-4a4f-950f-3b32d5aa1047"
STEP: verifying the updated pod is in kubernetes
Dec 22 13:22:40.276: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:22:40.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7039" for this suite.
Dec 22 13:23:02.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:23:02.413: INFO: namespace pods-7039 deletion completed in 22.128985788s

• [SLOW TEST:30.850 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
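Since most pod spec fields are immutable after creation, "should be updated" amounts to mutating mutable metadata such as labels on the live object. A client-go sketch using a strategic-merge patch; the kubeconfig path and label value are assumptions, while the namespace and pod name are taken from the log above:

// Hedged sketch: patch a running Pod's labels in place.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Strategic merge patch: only the listed label is added/changed.
	patch := []byte(`{"metadata":{"labels":{"time":"updated"}}}`)
	pod, err := clientset.CoreV1().Pods("pods-7039").Patch(
		context.TODO(), "pod-update-2097fc5a-a618-4a4f-950f-3b32d5aa1047",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("updated pod %s, labels now %v\n", pod.Name, pod.Labels)
}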
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:23:02.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 22 13:23:02.617: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3a5bb312-ddc2-48ce-9e23-b6f6baee388b" in namespace "projected-9887" to be "success or failure"
Dec 22 13:23:02.648: INFO: Pod "downwardapi-volume-3a5bb312-ddc2-48ce-9e23-b6f6baee388b": Phase="Pending", Reason="", readiness=false. Elapsed: 30.800961ms
Dec 22 13:23:04.663: INFO: Pod "downwardapi-volume-3a5bb312-ddc2-48ce-9e23-b6f6baee388b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046182263s
Dec 22 13:23:06.673: INFO: Pod "downwardapi-volume-3a5bb312-ddc2-48ce-9e23-b6f6baee388b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056339018s
Dec 22 13:23:08.685: INFO: Pod "downwardapi-volume-3a5bb312-ddc2-48ce-9e23-b6f6baee388b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068420835s
Dec 22 13:23:10.694: INFO: Pod "downwardapi-volume-3a5bb312-ddc2-48ce-9e23-b6f6baee388b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077425616s
STEP: Saw pod success
Dec 22 13:23:10.694: INFO: Pod "downwardapi-volume-3a5bb312-ddc2-48ce-9e23-b6f6baee388b" satisfied condition "success or failure"
Dec 22 13:23:10.699: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-3a5bb312-ddc2-48ce-9e23-b6f6baee388b container client-container: 
STEP: delete the pod
Dec 22 13:23:10.816: INFO: Waiting for pod downwardapi-volume-3a5bb312-ddc2-48ce-9e23-b6f6baee388b to disappear
Dec 22 13:23:10.828: INFO: Pod downwardapi-volume-3a5bb312-ddc2-48ce-9e23-b6f6baee388b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:23:10.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9887" for this suite.
Dec 22 13:23:16.879: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:23:17.094: INFO: namespace projected-9887 deletion completed in 6.261329368s

• [SLOW TEST:14.681 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:23:17.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-c9c0d8e7-ac64-4bca-b0a8-9f5e7631bd4c
Dec 22 13:23:17.255: INFO: Pod name my-hostname-basic-c9c0d8e7-ac64-4bca-b0a8-9f5e7631bd4c: Found 0 pods out of 1
Dec 22 13:23:22.261: INFO: Pod name my-hostname-basic-c9c0d8e7-ac64-4bca-b0a8-9f5e7631bd4c: Found 1 pods out of 1
Dec 22 13:23:22.261: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-c9c0d8e7-ac64-4bca-b0a8-9f5e7631bd4c" are running
Dec 22 13:23:26.274: INFO: Pod "my-hostname-basic-c9c0d8e7-ac64-4bca-b0a8-9f5e7631bd4c-4kw89" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 13:23:17 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 13:23:17 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-c9c0d8e7-ac64-4bca-b0a8-9f5e7631bd4c]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 13:23:17 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-c9c0d8e7-ac64-4bca-b0a8-9f5e7631bd4c]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 13:23:17 +0000 UTC Reason: Message:}])
Dec 22 13:23:26.275: INFO: Trying to dial the pod
Dec 22 13:23:31.330: INFO: Controller my-hostname-basic-c9c0d8e7-ac64-4bca-b0a8-9f5e7631bd4c: Got expected result from replica 1 [my-hostname-basic-c9c0d8e7-ac64-4bca-b0a8-9f5e7631bd4c-4kw89]: "my-hostname-basic-c9c0d8e7-ac64-4bca-b0a8-9f5e7631bd4c-4kw89", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:23:31.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1422" for this suite.
Dec 22 13:23:37.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:23:37.496: INFO: namespace replication-controller-1422 deletion completed in 6.156983169s

• [SLOW TEST:20.401 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
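The ReplicationController here runs one replica of a server that returns its own hostname, which the test then dials and compares against the pod name. A sketch of such an RC; the image and port are assumptions standing in for the e2e serve-hostname fixture:

// Hedged sketch: a one-replica RC whose pod serves its hostname over HTTP.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	name := "my-hostname-basic"
	replicas := int32(1)
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: map[string]string{"name": name},
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": name}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  name,
						Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1", // assumed image
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}},          // assumed port
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(rc, "", "  ")
	fmt.Println(string(out))
}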
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:23:37.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-5be31e07-ec7c-4d7a-9bc5-a3deacf3f5b5
STEP: Creating a pod to test consume configMaps
Dec 22 13:23:38.041: INFO: Waiting up to 5m0s for pod "pod-configmaps-307a390f-61a4-482d-8cb5-bdb1c40bcec6" in namespace "configmap-2742" to be "success or failure"
Dec 22 13:23:38.056: INFO: Pod "pod-configmaps-307a390f-61a4-482d-8cb5-bdb1c40bcec6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.936642ms
Dec 22 13:23:40.063: INFO: Pod "pod-configmaps-307a390f-61a4-482d-8cb5-bdb1c40bcec6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021645556s
Dec 22 13:23:42.071: INFO: Pod "pod-configmaps-307a390f-61a4-482d-8cb5-bdb1c40bcec6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029834291s
Dec 22 13:23:44.089: INFO: Pod "pod-configmaps-307a390f-61a4-482d-8cb5-bdb1c40bcec6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048442879s
Dec 22 13:23:46.097: INFO: Pod "pod-configmaps-307a390f-61a4-482d-8cb5-bdb1c40bcec6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056138302s
Dec 22 13:23:48.105: INFO: Pod "pod-configmaps-307a390f-61a4-482d-8cb5-bdb1c40bcec6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.063640305s
STEP: Saw pod success
Dec 22 13:23:48.105: INFO: Pod "pod-configmaps-307a390f-61a4-482d-8cb5-bdb1c40bcec6" satisfied condition "success or failure"
Dec 22 13:23:48.110: INFO: Trying to get logs from node iruya-node pod pod-configmaps-307a390f-61a4-482d-8cb5-bdb1c40bcec6 container configmap-volume-test: 
STEP: delete the pod
Dec 22 13:23:48.455: INFO: Waiting for pod pod-configmaps-307a390f-61a4-482d-8cb5-bdb1c40bcec6 to disappear
Dec 22 13:23:48.470: INFO: Pod pod-configmaps-307a390f-61a4-482d-8cb5-bdb1c40bcec6 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:23:48.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2742" for this suite.
Dec 22 13:23:54.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:23:54.711: INFO: namespace configmap-2742 deletion completed in 6.231488049s

• [SLOW TEST:17.216 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
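Reference note: the "mappings" this spec exercises are the items list on the configMap volume source, which remaps a key to an arbitrary file path inside the mount. A hand-run sketch, with illustrative names and values (the suite generates UUID-suffixed names instead):

kubectl create configmap cm-demo --from-literal=data-2=value-2
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: cm-map-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: cm-demo
      items:                     # the mapping: key data-2 -> path/to/data-2
      - key: data-2
        path: path/to/data-2
EOF
kubectl logs cm-map-demo         # prints "value-2" once the pod phase is Succeeded

The pod's terminal phase is what the log's "success or failure" wait polls for: Succeeded passes, Failed fails the spec.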
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:23:54.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-5320
I1222 13:23:54.792056       8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5320, replica count: 1
I1222 13:23:55.842829       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1222 13:23:56.843162       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1222 13:23:57.843596       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1222 13:23:58.843857       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1222 13:23:59.844163       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1222 13:24:00.844533       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1222 13:24:01.844876       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1222 13:24:02.845162       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 22 13:24:03.062: INFO: Created: latency-svc-k2zdk
Dec 22 13:24:03.076: INFO: Got endpoints: latency-svc-k2zdk [130.614588ms]
Dec 22 13:24:03.238: INFO: Created: latency-svc-vblvk
Dec 22 13:24:03.257: INFO: Got endpoints: latency-svc-vblvk [180.45112ms]
Dec 22 13:24:03.310: INFO: Created: latency-svc-fkq8n
Dec 22 13:24:03.310: INFO: Got endpoints: latency-svc-fkq8n [232.847084ms]
Dec 22 13:24:03.436: INFO: Created: latency-svc-5z28t
Dec 22 13:24:03.445: INFO: Got endpoints: latency-svc-5z28t [367.232495ms]
Dec 22 13:24:03.491: INFO: Created: latency-svc-6vpkc
Dec 22 13:24:03.504: INFO: Got endpoints: latency-svc-6vpkc [426.39917ms]
Dec 22 13:24:03.621: INFO: Created: latency-svc-q58kd
Dec 22 13:24:03.643: INFO: Got endpoints: latency-svc-q58kd [565.746448ms]
Dec 22 13:24:03.845: INFO: Created: latency-svc-4v4gx
Dec 22 13:24:03.862: INFO: Got endpoints: latency-svc-4v4gx [783.949542ms]
Dec 22 13:24:03.934: INFO: Created: latency-svc-cr2rx
Dec 22 13:24:04.031: INFO: Got endpoints: latency-svc-cr2rx [953.546087ms]
Dec 22 13:24:04.079: INFO: Created: latency-svc-ps6sw
Dec 22 13:24:04.112: INFO: Got endpoints: latency-svc-ps6sw [1.034387633s]
Dec 22 13:24:04.295: INFO: Created: latency-svc-bkj7g
Dec 22 13:24:04.295: INFO: Got endpoints: latency-svc-bkj7g [1.217527412s]
Dec 22 13:24:04.356: INFO: Created: latency-svc-npjhc
Dec 22 13:24:04.368: INFO: Got endpoints: latency-svc-npjhc [1.289727861s]
Dec 22 13:24:04.493: INFO: Created: latency-svc-2lzm2
Dec 22 13:24:04.501: INFO: Got endpoints: latency-svc-2lzm2 [1.42296604s]
Dec 22 13:24:04.638: INFO: Created: latency-svc-6gs46
Dec 22 13:24:04.650: INFO: Got endpoints: latency-svc-6gs46 [1.572315558s]
Dec 22 13:24:04.816: INFO: Created: latency-svc-94l9s
Dec 22 13:24:04.824: INFO: Got endpoints: latency-svc-94l9s [1.746095282s]
Dec 22 13:24:05.018: INFO: Created: latency-svc-wzgwr
Dec 22 13:24:05.025: INFO: Got endpoints: latency-svc-wzgwr [1.946713443s]
Dec 22 13:24:05.092: INFO: Created: latency-svc-mnl46
Dec 22 13:24:05.315: INFO: Got endpoints: latency-svc-mnl46 [2.237437141s]
Dec 22 13:24:05.320: INFO: Created: latency-svc-st2sx
Dec 22 13:24:05.332: INFO: Got endpoints: latency-svc-st2sx [2.074560345s]
Dec 22 13:24:05.412: INFO: Created: latency-svc-ppp9k
Dec 22 13:24:05.487: INFO: Got endpoints: latency-svc-ppp9k [2.177088343s]
Dec 22 13:24:05.530: INFO: Created: latency-svc-jpmbb
Dec 22 13:24:05.546: INFO: Got endpoints: latency-svc-jpmbb [2.10067266s]
Dec 22 13:24:05.682: INFO: Created: latency-svc-ghhfs
Dec 22 13:24:05.682: INFO: Got endpoints: latency-svc-ghhfs [2.177863286s]
Dec 22 13:24:05.720: INFO: Created: latency-svc-4jrzl
Dec 22 13:24:05.733: INFO: Got endpoints: latency-svc-4jrzl [2.089607978s]
Dec 22 13:24:05.852: INFO: Created: latency-svc-lq9dw
Dec 22 13:24:05.880: INFO: Got endpoints: latency-svc-lq9dw [2.018108332s]
Dec 22 13:24:05.923: INFO: Created: latency-svc-skgpp
Dec 22 13:24:05.929: INFO: Got endpoints: latency-svc-skgpp [1.89710653s]
Dec 22 13:24:06.056: INFO: Created: latency-svc-dlj6n
Dec 22 13:24:06.065: INFO: Got endpoints: latency-svc-dlj6n [1.952612979s]
Dec 22 13:24:06.115: INFO: Created: latency-svc-zs82g
Dec 22 13:24:06.199: INFO: Got endpoints: latency-svc-zs82g [1.903833448s]
Dec 22 13:24:06.255: INFO: Created: latency-svc-bzrxg
Dec 22 13:24:06.277: INFO: Got endpoints: latency-svc-bzrxg [1.90961781s]
Dec 22 13:24:06.370: INFO: Created: latency-svc-dcwjn
Dec 22 13:24:06.371: INFO: Got endpoints: latency-svc-dcwjn [1.870437152s]
Dec 22 13:24:06.446: INFO: Created: latency-svc-jwgrz
Dec 22 13:24:06.450: INFO: Got endpoints: latency-svc-jwgrz [1.799736242s]
Dec 22 13:24:06.687: INFO: Created: latency-svc-gslhd
Dec 22 13:24:06.703: INFO: Got endpoints: latency-svc-gslhd [1.878762036s]
Dec 22 13:24:06.788: INFO: Created: latency-svc-8vkh5
Dec 22 13:24:06.817: INFO: Got endpoints: latency-svc-8vkh5 [1.792480258s]
Dec 22 13:24:06.864: INFO: Created: latency-svc-wpqwc
Dec 22 13:24:06.880: INFO: Got endpoints: latency-svc-wpqwc [1.564823575s]
Dec 22 13:24:06.986: INFO: Created: latency-svc-cq2f2
Dec 22 13:24:07.008: INFO: Got endpoints: latency-svc-cq2f2 [1.675666137s]
Dec 22 13:24:07.071: INFO: Created: latency-svc-tt5fp
Dec 22 13:24:07.077: INFO: Got endpoints: latency-svc-tt5fp [1.589566225s]
Dec 22 13:24:07.258: INFO: Created: latency-svc-brbrs
Dec 22 13:24:07.300: INFO: Got endpoints: latency-svc-brbrs [1.754653284s]
Dec 22 13:24:07.417: INFO: Created: latency-svc-zkcrj
Dec 22 13:24:07.418: INFO: Got endpoints: latency-svc-zkcrj [1.735401457s]
Dec 22 13:24:07.552: INFO: Created: latency-svc-w6v4t
Dec 22 13:24:07.553: INFO: Got endpoints: latency-svc-w6v4t [1.819548303s]
Dec 22 13:24:07.639: INFO: Created: latency-svc-cgsg8
Dec 22 13:24:07.639: INFO: Got endpoints: latency-svc-cgsg8 [1.759309535s]
Dec 22 13:24:07.719: INFO: Created: latency-svc-qg744
Dec 22 13:24:07.731: INFO: Got endpoints: latency-svc-qg744 [1.801803738s]
Dec 22 13:24:07.784: INFO: Created: latency-svc-46g24
Dec 22 13:24:07.846: INFO: Got endpoints: latency-svc-46g24 [1.780544578s]
Dec 22 13:24:07.900: INFO: Created: latency-svc-vf2wf
Dec 22 13:24:07.905: INFO: Got endpoints: latency-svc-vf2wf [1.705322737s]
Dec 22 13:24:08.000: INFO: Created: latency-svc-hp52t
Dec 22 13:24:08.010: INFO: Got endpoints: latency-svc-hp52t [1.732426525s]
Dec 22 13:24:08.083: INFO: Created: latency-svc-6wc7d
Dec 22 13:24:08.168: INFO: Got endpoints: latency-svc-6wc7d [1.796873985s]
Dec 22 13:24:08.236: INFO: Created: latency-svc-qwffw
Dec 22 13:24:08.238: INFO: Got endpoints: latency-svc-qwffw [1.787458295s]
Dec 22 13:24:08.839: INFO: Created: latency-svc-z7gkx
Dec 22 13:24:08.843: INFO: Got endpoints: latency-svc-z7gkx [2.139947014s]
Dec 22 13:24:08.962: INFO: Created: latency-svc-2x4lf
Dec 22 13:24:08.965: INFO: Got endpoints: latency-svc-2x4lf [2.147476175s]
Dec 22 13:24:09.015: INFO: Created: latency-svc-4rdnt
Dec 22 13:24:09.028: INFO: Got endpoints: latency-svc-4rdnt [2.148113992s]
Dec 22 13:24:09.159: INFO: Created: latency-svc-jfnbd
Dec 22 13:24:09.188: INFO: Got endpoints: latency-svc-jfnbd [2.180291324s]
Dec 22 13:24:09.195: INFO: Created: latency-svc-jmg9r
Dec 22 13:24:09.214: INFO: Got endpoints: latency-svc-jmg9r [2.136810934s]
Dec 22 13:24:09.305: INFO: Created: latency-svc-p68bs
Dec 22 13:24:09.316: INFO: Got endpoints: latency-svc-p68bs [2.015928196s]
Dec 22 13:24:09.413: INFO: Created: latency-svc-bb885
Dec 22 13:24:09.666: INFO: Got endpoints: latency-svc-bb885 [2.248450172s]
Dec 22 13:24:09.703: INFO: Created: latency-svc-brh2d
Dec 22 13:24:09.711: INFO: Got endpoints: latency-svc-brh2d [2.15862688s]
Dec 22 13:24:09.860: INFO: Created: latency-svc-jc8b6
Dec 22 13:24:09.918: INFO: Got endpoints: latency-svc-jc8b6 [2.278913481s]
Dec 22 13:24:09.921: INFO: Created: latency-svc-xdcjq
Dec 22 13:24:09.934: INFO: Got endpoints: latency-svc-xdcjq [2.202945314s]
Dec 22 13:24:10.040: INFO: Created: latency-svc-swddp
Dec 22 13:24:10.058: INFO: Got endpoints: latency-svc-swddp [2.211718462s]
Dec 22 13:24:10.107: INFO: Created: latency-svc-vm6w6
Dec 22 13:24:10.107: INFO: Got endpoints: latency-svc-vm6w6 [2.20258625s]
Dec 22 13:24:10.231: INFO: Created: latency-svc-ll5f5
Dec 22 13:24:10.239: INFO: Got endpoints: latency-svc-ll5f5 [2.229262438s]
Dec 22 13:24:10.410: INFO: Created: latency-svc-smfgt
Dec 22 13:24:10.410: INFO: Got endpoints: latency-svc-smfgt [2.241103367s]
Dec 22 13:24:10.479: INFO: Created: latency-svc-brxdt
Dec 22 13:24:10.663: INFO: Got endpoints: latency-svc-brxdt [2.425530257s]
Dec 22 13:24:10.691: INFO: Created: latency-svc-drt9c
Dec 22 13:24:10.694: INFO: Got endpoints: latency-svc-drt9c [1.850756912s]
Dec 22 13:24:10.748: INFO: Created: latency-svc-jkgkg
Dec 22 13:24:10.751: INFO: Got endpoints: latency-svc-jkgkg [1.78626041s]
Dec 22 13:24:10.862: INFO: Created: latency-svc-wbl4l
Dec 22 13:24:10.877: INFO: Got endpoints: latency-svc-wbl4l [1.84837421s]
Dec 22 13:24:10.911: INFO: Created: latency-svc-cjnvz
Dec 22 13:24:10.929: INFO: Got endpoints: latency-svc-cjnvz [1.741052751s]
Dec 22 13:24:11.019: INFO: Created: latency-svc-ppx7x
Dec 22 13:24:11.024: INFO: Got endpoints: latency-svc-ppx7x [1.810231293s]
Dec 22 13:24:11.084: INFO: Created: latency-svc-z5xh6
Dec 22 13:24:11.197: INFO: Got endpoints: latency-svc-z5xh6 [1.880674688s]
Dec 22 13:24:11.220: INFO: Created: latency-svc-65547
Dec 22 13:24:11.238: INFO: Got endpoints: latency-svc-65547 [1.571290876s]
Dec 22 13:24:11.327: INFO: Created: latency-svc-s7blp
Dec 22 13:24:11.327: INFO: Got endpoints: latency-svc-s7blp [1.615746571s]
Dec 22 13:24:11.438: INFO: Created: latency-svc-j8nbg
Dec 22 13:24:11.458: INFO: Got endpoints: latency-svc-j8nbg [1.539741881s]
Dec 22 13:24:11.503: INFO: Created: latency-svc-29s4x
Dec 22 13:24:11.574: INFO: Got endpoints: latency-svc-29s4x [1.639964083s]
Dec 22 13:24:11.604: INFO: Created: latency-svc-5x8qp
Dec 22 13:24:11.604: INFO: Got endpoints: latency-svc-5x8qp [1.546372598s]
Dec 22 13:24:11.670: INFO: Created: latency-svc-d6959
Dec 22 13:24:11.676: INFO: Got endpoints: latency-svc-d6959 [1.568794784s]
Dec 22 13:24:11.827: INFO: Created: latency-svc-7vm6v
Dec 22 13:24:11.827: INFO: Got endpoints: latency-svc-7vm6v [1.588029367s]
Dec 22 13:24:11.870: INFO: Created: latency-svc-5p6qp
Dec 22 13:24:11.930: INFO: Got endpoints: latency-svc-5p6qp [1.519989027s]
Dec 22 13:24:12.036: INFO: Created: latency-svc-vdcv4
Dec 22 13:24:12.090: INFO: Got endpoints: latency-svc-vdcv4 [1.426089626s]
Dec 22 13:24:12.128: INFO: Created: latency-svc-nkjpj
Dec 22 13:24:12.170: INFO: Got endpoints: latency-svc-nkjpj [1.475557524s]
Dec 22 13:24:12.187: INFO: Created: latency-svc-2gpft
Dec 22 13:24:12.188: INFO: Got endpoints: latency-svc-2gpft [1.436547692s]
Dec 22 13:24:12.337: INFO: Created: latency-svc-4q4lc
Dec 22 13:24:12.355: INFO: Got endpoints: latency-svc-4q4lc [1.478050178s]
Dec 22 13:24:12.388: INFO: Created: latency-svc-fm9b5
Dec 22 13:24:12.396: INFO: Got endpoints: latency-svc-fm9b5 [1.466011948s]
Dec 22 13:24:12.560: INFO: Created: latency-svc-czgvc
Dec 22 13:24:12.560: INFO: Got endpoints: latency-svc-czgvc [1.535266761s]
Dec 22 13:24:12.630: INFO: Created: latency-svc-5656t
Dec 22 13:24:12.686: INFO: Got endpoints: latency-svc-5656t [1.489203392s]
Dec 22 13:24:12.748: INFO: Created: latency-svc-2gxcc
Dec 22 13:24:12.748: INFO: Got endpoints: latency-svc-2gxcc [1.510605152s]
Dec 22 13:24:12.872: INFO: Created: latency-svc-szgx4
Dec 22 13:24:12.884: INFO: Got endpoints: latency-svc-szgx4 [197.87474ms]
Dec 22 13:24:12.960: INFO: Created: latency-svc-wzfr5
Dec 22 13:24:13.031: INFO: Got endpoints: latency-svc-wzfr5 [1.703360361s]
Dec 22 13:24:13.201: INFO: Created: latency-svc-zd7m4
Dec 22 13:24:13.206: INFO: Got endpoints: latency-svc-zd7m4 [1.747595499s]
Dec 22 13:24:13.290: INFO: Created: latency-svc-pprzg
Dec 22 13:24:13.346: INFO: Got endpoints: latency-svc-pprzg [1.772229973s]
Dec 22 13:24:13.385: INFO: Created: latency-svc-b48r6
Dec 22 13:24:13.397: INFO: Got endpoints: latency-svc-b48r6 [1.792951409s]
Dec 22 13:24:13.510: INFO: Created: latency-svc-w76zk
Dec 22 13:24:13.520: INFO: Got endpoints: latency-svc-w76zk [1.844010955s]
Dec 22 13:24:13.620: INFO: Created: latency-svc-7swh9
Dec 22 13:24:13.715: INFO: Got endpoints: latency-svc-7swh9 [1.887616369s]
Dec 22 13:24:13.724: INFO: Created: latency-svc-wtbg2
Dec 22 13:24:13.747: INFO: Got endpoints: latency-svc-wtbg2 [1.817084995s]
Dec 22 13:24:13.806: INFO: Created: latency-svc-rsx66
Dec 22 13:24:13.894: INFO: Got endpoints: latency-svc-rsx66 [1.804549533s]
Dec 22 13:24:13.934: INFO: Created: latency-svc-hnnz4
Dec 22 13:24:13.976: INFO: Got endpoints: latency-svc-hnnz4 [1.805511899s]
Dec 22 13:24:14.116: INFO: Created: latency-svc-xn4gw
Dec 22 13:24:14.131: INFO: Got endpoints: latency-svc-xn4gw [1.943260641s]
Dec 22 13:24:14.311: INFO: Created: latency-svc-fcgjg
Dec 22 13:24:14.346: INFO: Got endpoints: latency-svc-fcgjg [1.990468475s]
Dec 22 13:24:14.347: INFO: Created: latency-svc-bvzvj
Dec 22 13:24:14.360: INFO: Got endpoints: latency-svc-bvzvj [1.96387866s]
Dec 22 13:24:14.399: INFO: Created: latency-svc-7qftd
Dec 22 13:24:14.529: INFO: Got endpoints: latency-svc-7qftd [1.969314349s]
Dec 22 13:24:14.563: INFO: Created: latency-svc-5798x
Dec 22 13:24:14.563: INFO: Got endpoints: latency-svc-5798x [1.814846908s]
Dec 22 13:24:14.619: INFO: Created: latency-svc-4knrg
Dec 22 13:24:14.753: INFO: Got endpoints: latency-svc-4knrg [1.868158682s]
Dec 22 13:24:14.772: INFO: Created: latency-svc-llgfp
Dec 22 13:24:14.793: INFO: Got endpoints: latency-svc-llgfp [1.76250716s]
Dec 22 13:24:14.831: INFO: Created: latency-svc-t4b7f
Dec 22 13:24:14.841: INFO: Got endpoints: latency-svc-t4b7f [1.634879537s]
Dec 22 13:24:14.952: INFO: Created: latency-svc-mjjsc
Dec 22 13:24:14.962: INFO: Got endpoints: latency-svc-mjjsc [1.615068024s]
Dec 22 13:24:14.995: INFO: Created: latency-svc-kvqvm
Dec 22 13:24:15.007: INFO: Got endpoints: latency-svc-kvqvm [1.609188043s]
Dec 22 13:24:15.125: INFO: Created: latency-svc-brlmg
Dec 22 13:24:15.140: INFO: Got endpoints: latency-svc-brlmg [1.619484088s]
Dec 22 13:24:15.235: INFO: Created: latency-svc-wcs52
Dec 22 13:24:15.334: INFO: Got endpoints: latency-svc-wcs52 [1.619035014s]
Dec 22 13:24:15.381: INFO: Created: latency-svc-7qkkr
Dec 22 13:24:15.393: INFO: Got endpoints: latency-svc-7qkkr [1.646289793s]
Dec 22 13:24:15.450: INFO: Created: latency-svc-5tm9r
Dec 22 13:24:15.577: INFO: Got endpoints: latency-svc-5tm9r [1.682535248s]
Dec 22 13:24:15.595: INFO: Created: latency-svc-srw76
Dec 22 13:24:15.612: INFO: Got endpoints: latency-svc-srw76 [1.636699111s]
Dec 22 13:24:15.650: INFO: Created: latency-svc-2k747
Dec 22 13:24:15.665: INFO: Got endpoints: latency-svc-2k747 [1.534134171s]
Dec 22 13:24:15.808: INFO: Created: latency-svc-hzmt6
Dec 22 13:24:15.808: INFO: Got endpoints: latency-svc-hzmt6 [1.462118445s]
Dec 22 13:24:15.867: INFO: Created: latency-svc-8dxph
Dec 22 13:24:16.048: INFO: Got endpoints: latency-svc-8dxph [1.687886837s]
Dec 22 13:24:16.083: INFO: Created: latency-svc-fhdgv
Dec 22 13:24:16.159: INFO: Created: latency-svc-g5cfm
Dec 22 13:24:16.168: INFO: Got endpoints: latency-svc-fhdgv [1.638193638s]
Dec 22 13:24:16.273: INFO: Got endpoints: latency-svc-g5cfm [1.709854966s]
Dec 22 13:24:16.353: INFO: Created: latency-svc-jx6kl
Dec 22 13:24:16.672: INFO: Got endpoints: latency-svc-jx6kl [1.919330988s]
Dec 22 13:24:16.727: INFO: Created: latency-svc-gdhj9
Dec 22 13:24:16.748: INFO: Got endpoints: latency-svc-gdhj9 [1.954351841s]
Dec 22 13:24:16.964: INFO: Created: latency-svc-496c7
Dec 22 13:24:16.976: INFO: Got endpoints: latency-svc-496c7 [2.134816557s]
Dec 22 13:24:17.221: INFO: Created: latency-svc-zh9m4
Dec 22 13:24:17.287: INFO: Got endpoints: latency-svc-zh9m4 [2.325407799s]
Dec 22 13:24:17.292: INFO: Created: latency-svc-sq9mf
Dec 22 13:24:17.317: INFO: Got endpoints: latency-svc-sq9mf [2.310842376s]
Dec 22 13:24:17.438: INFO: Created: latency-svc-kx8zv
Dec 22 13:24:17.455: INFO: Got endpoints: latency-svc-kx8zv [2.315481249s]
Dec 22 13:24:17.715: INFO: Created: latency-svc-fvzbk
Dec 22 13:24:17.715: INFO: Got endpoints: latency-svc-fvzbk [2.380256502s]
Dec 22 13:24:17.796: INFO: Created: latency-svc-22fhr
Dec 22 13:24:17.906: INFO: Got endpoints: latency-svc-22fhr [2.512952899s]
Dec 22 13:24:17.936: INFO: Created: latency-svc-lwhjq
Dec 22 13:24:17.979: INFO: Got endpoints: latency-svc-lwhjq [2.402099211s]
Dec 22 13:24:18.214: INFO: Created: latency-svc-m4rlc
Dec 22 13:24:18.221: INFO: Got endpoints: latency-svc-m4rlc [2.608121883s]
Dec 22 13:24:18.417: INFO: Created: latency-svc-snq6d
Dec 22 13:24:18.428: INFO: Got endpoints: latency-svc-snq6d [2.762765977s]
Dec 22 13:24:18.508: INFO: Created: latency-svc-6hntf
Dec 22 13:24:18.630: INFO: Got endpoints: latency-svc-6hntf [2.821972773s]
Dec 22 13:24:18.710: INFO: Created: latency-svc-68hq9
Dec 22 13:24:18.832: INFO: Got endpoints: latency-svc-68hq9 [2.784576557s]
Dec 22 13:24:18.877: INFO: Created: latency-svc-xst6v
Dec 22 13:24:18.883: INFO: Got endpoints: latency-svc-xst6v [2.715302909s]
Dec 22 13:24:18.939: INFO: Created: latency-svc-hw8nr
Dec 22 13:24:19.141: INFO: Got endpoints: latency-svc-hw8nr [2.867250578s]
Dec 22 13:24:19.158: INFO: Created: latency-svc-qpjjz
Dec 22 13:24:19.166: INFO: Got endpoints: latency-svc-qpjjz [2.494057483s]
Dec 22 13:24:19.335: INFO: Created: latency-svc-778cd
Dec 22 13:24:19.340: INFO: Got endpoints: latency-svc-778cd [2.592127097s]
Dec 22 13:24:19.414: INFO: Created: latency-svc-7km2n
Dec 22 13:24:19.421: INFO: Got endpoints: latency-svc-7km2n [2.445489186s]
Dec 22 13:24:19.933: INFO: Created: latency-svc-z4plm
Dec 22 13:24:19.933: INFO: Got endpoints: latency-svc-z4plm [2.645822798s]
Dec 22 13:24:19.995: INFO: Created: latency-svc-tdsl4
Dec 22 13:24:20.079: INFO: Got endpoints: latency-svc-tdsl4 [2.761055301s]
Dec 22 13:24:20.113: INFO: Created: latency-svc-m54tr
Dec 22 13:24:20.143: INFO: Got endpoints: latency-svc-m54tr [2.687370693s]
Dec 22 13:24:20.183: INFO: Created: latency-svc-npt8n
Dec 22 13:24:20.374: INFO: Got endpoints: latency-svc-npt8n [2.65927329s]
Dec 22 13:24:20.392: INFO: Created: latency-svc-f4cns
Dec 22 13:24:20.459: INFO: Got endpoints: latency-svc-f4cns [2.552501531s]
Dec 22 13:24:20.459: INFO: Created: latency-svc-kdrst
Dec 22 13:24:20.622: INFO: Got endpoints: latency-svc-kdrst [2.642632139s]
Dec 22 13:24:20.665: INFO: Created: latency-svc-nrff4
Dec 22 13:24:20.673: INFO: Got endpoints: latency-svc-nrff4 [2.452003733s]
Dec 22 13:24:20.911: INFO: Created: latency-svc-zn4jj
Dec 22 13:24:20.929: INFO: Got endpoints: latency-svc-zn4jj [2.500917134s]
Dec 22 13:24:21.212: INFO: Created: latency-svc-q96nf
Dec 22 13:24:21.212: INFO: Got endpoints: latency-svc-q96nf [2.581950492s]
Dec 22 13:24:21.278: INFO: Created: latency-svc-gfttn
Dec 22 13:24:21.379: INFO: Got endpoints: latency-svc-gfttn [2.54650449s]
Dec 22 13:24:21.427: INFO: Created: latency-svc-c8lzz
Dec 22 13:24:21.444: INFO: Got endpoints: latency-svc-c8lzz [2.561217606s]
Dec 22 13:24:21.619: INFO: Created: latency-svc-v6w8s
Dec 22 13:24:21.651: INFO: Got endpoints: latency-svc-v6w8s [2.510219329s]
Dec 22 13:24:21.704: INFO: Created: latency-svc-hhlpm
Dec 22 13:24:21.710: INFO: Got endpoints: latency-svc-hhlpm [2.543633175s]
Dec 22 13:24:21.831: INFO: Created: latency-svc-hjt85
Dec 22 13:24:21.834: INFO: Got endpoints: latency-svc-hjt85 [2.49404818s]
Dec 22 13:24:21.997: INFO: Created: latency-svc-nt89j
Dec 22 13:24:22.007: INFO: Got endpoints: latency-svc-nt89j [2.585817771s]
Dec 22 13:24:22.094: INFO: Created: latency-svc-vv8jb
Dec 22 13:24:22.236: INFO: Got endpoints: latency-svc-vv8jb [2.302674335s]
Dec 22 13:24:22.293: INFO: Created: latency-svc-qvlh6
Dec 22 13:24:22.394: INFO: Got endpoints: latency-svc-qvlh6 [2.314667335s]
Dec 22 13:24:22.472: INFO: Created: latency-svc-bfqqc
Dec 22 13:24:22.579: INFO: Got endpoints: latency-svc-bfqqc [2.436094847s]
Dec 22 13:24:22.620: INFO: Created: latency-svc-mmdvv
Dec 22 13:24:22.626: INFO: Got endpoints: latency-svc-mmdvv [2.251685748s]
Dec 22 13:24:22.679: INFO: Created: latency-svc-cmcsg
Dec 22 13:24:22.740: INFO: Got endpoints: latency-svc-cmcsg [2.281011907s]
Dec 22 13:24:22.807: INFO: Created: latency-svc-fflc8
Dec 22 13:24:22.973: INFO: Got endpoints: latency-svc-fflc8 [2.350461767s]
Dec 22 13:24:22.990: INFO: Created: latency-svc-jlj55
Dec 22 13:24:23.005: INFO: Got endpoints: latency-svc-jlj55 [2.331942571s]
Dec 22 13:24:23.064: INFO: Created: latency-svc-v6lvb
Dec 22 13:24:23.146: INFO: Got endpoints: latency-svc-v6lvb [2.216928659s]
Dec 22 13:24:23.198: INFO: Created: latency-svc-9t8pj
Dec 22 13:24:23.226: INFO: Got endpoints: latency-svc-9t8pj [2.01342421s]
Dec 22 13:24:23.336: INFO: Created: latency-svc-qwht6
Dec 22 13:24:23.549: INFO: Got endpoints: latency-svc-qwht6 [2.170169331s]
Dec 22 13:24:23.571: INFO: Created: latency-svc-78s42
Dec 22 13:24:23.572: INFO: Got endpoints: latency-svc-78s42 [2.127006991s]
Dec 22 13:24:23.652: INFO: Created: latency-svc-6gb2z
Dec 22 13:24:23.788: INFO: Got endpoints: latency-svc-6gb2z [2.137063167s]
Dec 22 13:24:23.877: INFO: Created: latency-svc-wxm6r
Dec 22 13:24:23.993: INFO: Got endpoints: latency-svc-wxm6r [2.282195517s]
Dec 22 13:24:24.028: INFO: Created: latency-svc-x2fsr
Dec 22 13:24:24.064: INFO: Got endpoints: latency-svc-x2fsr [2.22961349s]
Dec 22 13:24:24.177: INFO: Created: latency-svc-7sbj9
Dec 22 13:24:24.188: INFO: Got endpoints: latency-svc-7sbj9 [2.180093744s]
Dec 22 13:24:24.232: INFO: Created: latency-svc-4wr6h
Dec 22 13:24:24.246: INFO: Got endpoints: latency-svc-4wr6h [2.009813474s]
Dec 22 13:24:24.412: INFO: Created: latency-svc-5gkcf
Dec 22 13:24:24.437: INFO: Got endpoints: latency-svc-5gkcf [2.043303011s]
Dec 22 13:24:24.446: INFO: Created: latency-svc-tk4sl
Dec 22 13:24:24.448: INFO: Got endpoints: latency-svc-tk4sl [1.8688933s]
Dec 22 13:24:24.625: INFO: Created: latency-svc-2m2gc
Dec 22 13:24:24.640: INFO: Got endpoints: latency-svc-2m2gc [2.01382568s]
Dec 22 13:24:24.842: INFO: Created: latency-svc-jb64f
Dec 22 13:24:24.970: INFO: Got endpoints: latency-svc-jb64f [2.22959463s]
Dec 22 13:24:24.978: INFO: Created: latency-svc-vz9xp
Dec 22 13:24:24.984: INFO: Got endpoints: latency-svc-vz9xp [2.010344731s]
Dec 22 13:24:25.055: INFO: Created: latency-svc-hlshg
Dec 22 13:24:25.160: INFO: Got endpoints: latency-svc-hlshg [2.154706563s]
Dec 22 13:24:25.216: INFO: Created: latency-svc-twnrx
Dec 22 13:24:25.232: INFO: Got endpoints: latency-svc-twnrx [2.085245336s]
Dec 22 13:24:25.350: INFO: Created: latency-svc-bprnl
Dec 22 13:24:25.412: INFO: Got endpoints: latency-svc-bprnl [2.185826146s]
Dec 22 13:24:25.417: INFO: Created: latency-svc-wxn99
Dec 22 13:24:25.514: INFO: Got endpoints: latency-svc-wxn99 [1.964307648s]
Dec 22 13:24:25.580: INFO: Created: latency-svc-hnlx6
Dec 22 13:24:25.587: INFO: Got endpoints: latency-svc-hnlx6 [2.015347487s]
Dec 22 13:24:25.747: INFO: Created: latency-svc-qjkpx
Dec 22 13:24:25.752: INFO: Got endpoints: latency-svc-qjkpx [1.963715326s]
Dec 22 13:24:25.888: INFO: Created: latency-svc-r6zcc
Dec 22 13:24:25.891: INFO: Got endpoints: latency-svc-r6zcc [1.89779752s]
Dec 22 13:24:25.943: INFO: Created: latency-svc-2p7xn
Dec 22 13:24:25.955: INFO: Got endpoints: latency-svc-2p7xn [1.890592873s]
Dec 22 13:24:26.130: INFO: Created: latency-svc-bvf8j
Dec 22 13:24:26.167: INFO: Got endpoints: latency-svc-bvf8j [1.97943909s]
Dec 22 13:24:26.293: INFO: Created: latency-svc-22qcm
Dec 22 13:24:26.301: INFO: Got endpoints: latency-svc-22qcm [2.054655255s]
Dec 22 13:24:26.331: INFO: Created: latency-svc-6chc9
Dec 22 13:24:26.339: INFO: Got endpoints: latency-svc-6chc9 [1.90115627s]
Dec 22 13:24:26.501: INFO: Created: latency-svc-4579v
Dec 22 13:24:26.504: INFO: Got endpoints: latency-svc-4579v [2.055796146s]
Dec 22 13:24:26.595: INFO: Created: latency-svc-c9ncv
Dec 22 13:24:26.699: INFO: Got endpoints: latency-svc-c9ncv [2.059213987s]
Dec 22 13:24:26.740: INFO: Created: latency-svc-pgjkx
Dec 22 13:24:26.768: INFO: Got endpoints: latency-svc-pgjkx [1.797955379s]
Dec 22 13:24:26.788: INFO: Created: latency-svc-5mts6
Dec 22 13:24:26.788: INFO: Got endpoints: latency-svc-5mts6 [1.803464225s]
Dec 22 13:24:26.903: INFO: Created: latency-svc-j9jkg
Dec 22 13:24:26.911: INFO: Got endpoints: latency-svc-j9jkg [1.751706135s]
Dec 22 13:24:26.972: INFO: Created: latency-svc-phn5m
Dec 22 13:24:26.989: INFO: Got endpoints: latency-svc-phn5m [1.757175442s]
Dec 22 13:24:27.065: INFO: Created: latency-svc-n5xtl
Dec 22 13:24:27.115: INFO: Created: latency-svc-8rjm2
Dec 22 13:24:27.115: INFO: Got endpoints: latency-svc-n5xtl [1.703602417s]
Dec 22 13:24:27.136: INFO: Got endpoints: latency-svc-8rjm2 [1.622439587s]
Dec 22 13:24:27.265: INFO: Created: latency-svc-gncgf
Dec 22 13:24:27.273: INFO: Got endpoints: latency-svc-gncgf [1.685775349s]
Dec 22 13:24:27.324: INFO: Created: latency-svc-42kxr
Dec 22 13:24:27.361: INFO: Got endpoints: latency-svc-42kxr [1.608359072s]
Dec 22 13:24:27.435: INFO: Created: latency-svc-lkdvn
Dec 22 13:24:27.446: INFO: Got endpoints: latency-svc-lkdvn [1.555641339s]
Dec 22 13:24:27.504: INFO: Created: latency-svc-cbdd8
Dec 22 13:24:27.517: INFO: Got endpoints: latency-svc-cbdd8 [1.561801586s]
Dec 22 13:24:27.626: INFO: Created: latency-svc-tlpb9
Dec 22 13:24:27.631: INFO: Got endpoints: latency-svc-tlpb9 [1.463315191s]
Dec 22 13:24:27.687: INFO: Created: latency-svc-gxq4d
Dec 22 13:24:27.754: INFO: Got endpoints: latency-svc-gxq4d [1.452906892s]
Dec 22 13:24:27.853: INFO: Created: latency-svc-fcdrs
Dec 22 13:24:27.947: INFO: Got endpoints: latency-svc-fcdrs [1.608009439s]
Dec 22 13:24:27.952: INFO: Created: latency-svc-vx6fn
Dec 22 13:24:27.958: INFO: Got endpoints: latency-svc-vx6fn [1.454193681s]
Dec 22 13:24:28.013: INFO: Created: latency-svc-lksp5
Dec 22 13:24:28.130: INFO: Got endpoints: latency-svc-lksp5 [1.430554177s]
Dec 22 13:24:28.148: INFO: Created: latency-svc-qcr68
Dec 22 13:24:28.184: INFO: Got endpoints: latency-svc-qcr68 [1.416012228s]
Dec 22 13:24:28.191: INFO: Created: latency-svc-nl26n
Dec 22 13:24:28.196: INFO: Got endpoints: latency-svc-nl26n [1.408091804s]
Dec 22 13:24:28.302: INFO: Created: latency-svc-d2rff
Dec 22 13:24:28.303: INFO: Got endpoints: latency-svc-d2rff [1.391504104s]
Dec 22 13:24:28.347: INFO: Created: latency-svc-twmd9
Dec 22 13:24:28.369: INFO: Got endpoints: latency-svc-twmd9 [1.379639887s]
Dec 22 13:24:28.583: INFO: Created: latency-svc-hcr8k
Dec 22 13:24:28.587: INFO: Got endpoints: latency-svc-hcr8k [1.471308201s]
Dec 22 13:24:28.687: INFO: Created: latency-svc-65ckd
Dec 22 13:24:28.696: INFO: Got endpoints: latency-svc-65ckd [1.559832068s]
Dec 22 13:24:28.745: INFO: Created: latency-svc-h7996
Dec 22 13:24:28.908: INFO: Got endpoints: latency-svc-h7996 [1.634913537s]
Dec 22 13:24:28.912: INFO: Created: latency-svc-kwhjk
Dec 22 13:24:28.940: INFO: Got endpoints: latency-svc-kwhjk [1.579474307s]
Dec 22 13:24:29.122: INFO: Created: latency-svc-757hs
Dec 22 13:24:29.147: INFO: Got endpoints: latency-svc-757hs [1.700213189s]
Dec 22 13:24:29.147: INFO: Latencies: [180.45112ms 197.87474ms 232.847084ms 367.232495ms 426.39917ms 565.746448ms 783.949542ms 953.546087ms 1.034387633s 1.217527412s 1.289727861s 1.379639887s 1.391504104s 1.408091804s 1.416012228s 1.42296604s 1.426089626s 1.430554177s 1.436547692s 1.452906892s 1.454193681s 1.462118445s 1.463315191s 1.466011948s 1.471308201s 1.475557524s 1.478050178s 1.489203392s 1.510605152s 1.519989027s 1.534134171s 1.535266761s 1.539741881s 1.546372598s 1.555641339s 1.559832068s 1.561801586s 1.564823575s 1.568794784s 1.571290876s 1.572315558s 1.579474307s 1.588029367s 1.589566225s 1.608009439s 1.608359072s 1.609188043s 1.615068024s 1.615746571s 1.619035014s 1.619484088s 1.622439587s 1.634879537s 1.634913537s 1.636699111s 1.638193638s 1.639964083s 1.646289793s 1.675666137s 1.682535248s 1.685775349s 1.687886837s 1.700213189s 1.703360361s 1.703602417s 1.705322737s 1.709854966s 1.732426525s 1.735401457s 1.741052751s 1.746095282s 1.747595499s 1.751706135s 1.754653284s 1.757175442s 1.759309535s 1.76250716s 1.772229973s 1.780544578s 1.78626041s 1.787458295s 1.792480258s 1.792951409s 1.796873985s 1.797955379s 1.799736242s 1.801803738s 1.803464225s 1.804549533s 1.805511899s 1.810231293s 1.814846908s 1.817084995s 1.819548303s 1.844010955s 1.84837421s 1.850756912s 1.868158682s 1.8688933s 1.870437152s 1.878762036s 1.880674688s 1.887616369s 1.890592873s 1.89710653s 1.89779752s 1.90115627s 1.903833448s 1.90961781s 1.919330988s 1.943260641s 1.946713443s 1.952612979s 1.954351841s 1.963715326s 1.96387866s 1.964307648s 1.969314349s 1.97943909s 1.990468475s 2.009813474s 2.010344731s 2.01342421s 2.01382568s 2.015347487s 2.015928196s 2.018108332s 2.043303011s 2.054655255s 2.055796146s 2.059213987s 2.074560345s 2.085245336s 2.089607978s 2.10067266s 2.127006991s 2.134816557s 2.136810934s 2.137063167s 2.139947014s 2.147476175s 2.148113992s 2.154706563s 2.15862688s 2.170169331s 2.177088343s 2.177863286s 2.180093744s 2.180291324s 2.185826146s 2.20258625s 2.202945314s 2.211718462s 2.216928659s 2.229262438s 2.22959463s 2.22961349s 2.237437141s 2.241103367s 2.248450172s 2.251685748s 2.278913481s 2.281011907s 2.282195517s 2.302674335s 2.310842376s 2.314667335s 2.315481249s 2.325407799s 2.331942571s 2.350461767s 2.380256502s 2.402099211s 2.425530257s 2.436094847s 2.445489186s 2.452003733s 2.49404818s 2.494057483s 2.500917134s 2.510219329s 2.512952899s 2.543633175s 2.54650449s 2.552501531s 2.561217606s 2.581950492s 2.585817771s 2.592127097s 2.608121883s 2.642632139s 2.645822798s 2.65927329s 2.687370693s 2.715302909s 2.761055301s 2.762765977s 2.784576557s 2.821972773s 2.867250578s]
Dec 22 13:24:29.147: INFO: 50 %ile: 1.878762036s
Dec 22 13:24:29.147: INFO: 90 %ile: 2.510219329s
Dec 22 13:24:29.147: INFO: 99 %ile: 2.821972773s
Dec 22 13:24:29.147: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:24:29.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-5320" for this suite.
Dec 22 13:25:15.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:25:15.491: INFO: namespace svc-latency-5320 deletion completed in 46.308240588s

• [SLOW TEST:80.779 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
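Reference note: each Created/Got endpoints pair above times one Service from creation until its Endpoints object carries a ready address, and the 50/90/99 %ile lines summarize those 200 samples. A loose manual equivalent of a single sample (illustrative names, pause image assumed, GNU date required; the suite measures this in-process with a watch, so this loop only approximates it):

kubectl run svc-latency-rc --generator=run/v1 --image=k8s.gcr.io/pause:3.1
start=$(date +%s%N)                                   # GNU date, nanoseconds
kubectl expose rc svc-latency-rc --name=latency-svc-demo --port=80
until [ -n "$(kubectl get endpoints latency-svc-demo \
      -o jsonpath='{.subsets[*].addresses[*].ip}')" ]; do sleep 0.1; done
echo "endpoint latency: $(( ($(date +%s%N) - start) / 1000000 )) ms"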
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:25:15.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 22 13:25:26.201: INFO: Successfully updated pod "labelsupdate7feb7808-7b08-4b7f-bdd3-540042d34c7a"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:25:28.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8484" for this suite.
Dec 22 13:25:50.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:25:50.444: INFO: namespace projected-8484 deletion completed in 22.138001879s

• [SLOW TEST:34.953 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
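Reference note: what "update labels on modification" exercises is a projected downwardAPI volume exposing metadata.labels; after the pod's labels change, the kubelet rewrites the mounted file on its next volume sync, which is why the log shows a delay between pod creation and "Successfully updated pod". Sketch with illustrative names:

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo
  labels:
    key1: value1
spec:
  containers:
  - name: client
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF
kubectl label pod labelsupdate-demo key1=value2 --overwrite
# After the kubelet's next volume sync the mounted file reflects the new label:
kubectl exec labelsupdate-demo -- cat /etc/podinfo/labels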
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:25:50.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-9e66b798-945e-4d2d-ab92-2172cd0e6447
STEP: Creating configMap with name cm-test-opt-upd-180f79ae-ffcb-4594-85a1-35e3f1cd5e72
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-9e66b798-945e-4d2d-ab92-2172cd0e6447
STEP: Updating configmap cm-test-opt-upd-180f79ae-ffcb-4594-85a1-35e3f1cd5e72
STEP: Creating configMap with name cm-test-opt-create-540cc362-17b3-4697-963b-7addccdd46a0
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:26:07.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7403" for this suite.
Dec 22 13:26:29.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:26:29.253: INFO: namespace projected-7403 deletion completed in 22.214295308s

• [SLOW TEST:38.808 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
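Reference note: the three configMaps above (opt-del, opt-upd, opt-create) exercise deletion, update, and late creation of sources in one projected volume; optional: true is what lets the pod keep running while a referenced configMap is absent. A condensed sketch covering the update and late-creation legs (names and keys are illustrative, and the keys are kept distinct so the projected paths do not collide):

kubectl create configmap cm-upd --from-literal=data-1=value-1
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: cm-optional-demo
spec:
  containers:
  - name: client
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: cm-upd
      - configMap:
          name: cm-create        # does not exist yet
          optional: true         # so the pod starts anyway
EOF
# Update one source and create the missing one; both show up in /etc/cfg after
# the kubelet resync, which is the "waiting to observe update in volume" step:
kubectl create configmap cm-upd --from-literal=data-1=value-2 --dry-run -o yaml | kubectl apply -f -
kubectl create configmap cm-create --from-literal=data-3=value-3
kubectl exec cm-optional-demo -- ls /etc/cfg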
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:26:29.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-f089e182-dd18-40fb-b9c8-75970e0a0e49
STEP: Creating a pod to test consume secrets
Dec 22 13:26:29.410: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b5c690dc-489c-44c1-8697-6f65ec095e94" in namespace "projected-8396" to be "success or failure"
Dec 22 13:26:29.426: INFO: Pod "pod-projected-secrets-b5c690dc-489c-44c1-8697-6f65ec095e94": Phase="Pending", Reason="", readiness=false. Elapsed: 16.146359ms
Dec 22 13:26:31.437: INFO: Pod "pod-projected-secrets-b5c690dc-489c-44c1-8697-6f65ec095e94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027706289s
Dec 22 13:26:33.443: INFO: Pod "pod-projected-secrets-b5c690dc-489c-44c1-8697-6f65ec095e94": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033433066s
Dec 22 13:26:35.448: INFO: Pod "pod-projected-secrets-b5c690dc-489c-44c1-8697-6f65ec095e94": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038828982s
Dec 22 13:26:37.455: INFO: Pod "pod-projected-secrets-b5c690dc-489c-44c1-8697-6f65ec095e94": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045440932s
Dec 22 13:26:39.463: INFO: Pod "pod-projected-secrets-b5c690dc-489c-44c1-8697-6f65ec095e94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.053449872s
STEP: Saw pod success
Dec 22 13:26:39.463: INFO: Pod "pod-projected-secrets-b5c690dc-489c-44c1-8697-6f65ec095e94" satisfied condition "success or failure"
Dec 22 13:26:39.467: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-b5c690dc-489c-44c1-8697-6f65ec095e94 container projected-secret-volume-test: 
STEP: delete the pod
Dec 22 13:26:39.933: INFO: Waiting for pod pod-projected-secrets-b5c690dc-489c-44c1-8697-6f65ec095e94 to disappear
Dec 22 13:26:39.942: INFO: Pod pod-projected-secrets-b5c690dc-489c-44c1-8697-6f65ec095e94 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:26:39.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8396" for this suite.
Dec 22 13:26:45.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:26:46.064: INFO: namespace projected-8396 deletion completed in 6.114177546s

• [SLOW TEST:16.811 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
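Reference note: same mapping idea as the configMap case, but through a projected secret source; the items list renames the key's file inside the mount. Illustrative sketch (names and values are not the suite's):

kubectl create secret generic secret-demo --from-literal=data-1=value-1
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: secret-map-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/etc/projected-secret/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: secret-demo
          items:
          - key: data-1
            path: new-path-data-1   # key remapped to a new filename
EOF
kubectl logs secret-map-demo        # "value-1" once the pod reaches Succeeded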
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:26:46.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:26:54.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3704" for this suite.
Dec 22 13:27:56.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:27:56.488: INFO: namespace kubelet-test-3704 deletion completed in 1m2.237271259s

• [SLOW TEST:70.424 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
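Reference note: this spec runs a busybox container with a read-only root filesystem and asserts the write attempt fails. A hand-run sketch (pod name and exact message are illustrative):

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: readonly-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "echo test > /file; sleep 240"]
    securityContext:
      readOnlyRootFilesystem: true    # the property under test
EOF
# The redirect fails, so the shell exits non-zero instead of sleeping:
kubectl get pod readonly-demo
kubectl logs readonly-demo            # something like: can't create /file: Read-only file system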
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:27:56.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-0b083024-24ff-4a61-8d73-17cbb42f67ad
STEP: Creating a pod to test consume configMaps
Dec 22 13:27:56.679: INFO: Waiting up to 5m0s for pod "pod-configmaps-04879545-e2cd-4689-b179-af27ca68a20e" in namespace "configmap-2712" to be "success or failure"
Dec 22 13:27:56.687: INFO: Pod "pod-configmaps-04879545-e2cd-4689-b179-af27ca68a20e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.716323ms
Dec 22 13:27:58.702: INFO: Pod "pod-configmaps-04879545-e2cd-4689-b179-af27ca68a20e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02267891s
Dec 22 13:28:00.719: INFO: Pod "pod-configmaps-04879545-e2cd-4689-b179-af27ca68a20e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039923463s
Dec 22 13:28:02.726: INFO: Pod "pod-configmaps-04879545-e2cd-4689-b179-af27ca68a20e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046271758s
Dec 22 13:28:04.735: INFO: Pod "pod-configmaps-04879545-e2cd-4689-b179-af27ca68a20e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056070653s
Dec 22 13:28:06.744: INFO: Pod "pod-configmaps-04879545-e2cd-4689-b179-af27ca68a20e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.064563161s
STEP: Saw pod success
Dec 22 13:28:06.744: INFO: Pod "pod-configmaps-04879545-e2cd-4689-b179-af27ca68a20e" satisfied condition "success or failure"
Dec 22 13:28:06.757: INFO: Trying to get logs from node iruya-node pod pod-configmaps-04879545-e2cd-4689-b179-af27ca68a20e container configmap-volume-test: 
STEP: delete the pod
Dec 22 13:28:06.834: INFO: Waiting for pod pod-configmaps-04879545-e2cd-4689-b179-af27ca68a20e to disappear
Dec 22 13:28:06.845: INFO: Pod pod-configmaps-04879545-e2cd-4689-b179-af27ca68a20e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:28:06.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2712" for this suite.
Dec 22 13:28:13.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:28:13.600: INFO: namespace configmap-2712 deletion completed in 6.747139821s

• [SLOW TEST:17.111 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
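Reference note: same shape as the earlier mappings case, minus the items list: every key in the configMap appears in the volume as a file named after the key. Sketch with illustrative names:

kubectl create configmap cm-plain --from-literal=data-1=value-1
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: cm-plain-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/data-1"]   # file named after the key
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: cm-plain
EOF
kubectl logs cm-plain-demo     # "value-1" after the pod Succeeds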
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:28:13.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-e1925c1a-e987-4603-93b2-361cc61e1d17
STEP: Creating a pod to test consume secrets
Dec 22 13:28:13.740: INFO: Waiting up to 5m0s for pod "pod-secrets-bc6f3823-53bb-44b1-b7db-192cd5ebca70" in namespace "secrets-596" to be "success or failure"
Dec 22 13:28:13.745: INFO: Pod "pod-secrets-bc6f3823-53bb-44b1-b7db-192cd5ebca70": Phase="Pending", Reason="", readiness=false. Elapsed: 5.737497ms
Dec 22 13:28:15.752: INFO: Pod "pod-secrets-bc6f3823-53bb-44b1-b7db-192cd5ebca70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012805267s
Dec 22 13:28:17.764: INFO: Pod "pod-secrets-bc6f3823-53bb-44b1-b7db-192cd5ebca70": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024737183s
Dec 22 13:28:19.771: INFO: Pod "pod-secrets-bc6f3823-53bb-44b1-b7db-192cd5ebca70": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031114131s
Dec 22 13:28:21.786: INFO: Pod "pod-secrets-bc6f3823-53bb-44b1-b7db-192cd5ebca70": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045915336s
Dec 22 13:28:23.818: INFO: Pod "pod-secrets-bc6f3823-53bb-44b1-b7db-192cd5ebca70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.07824658s
STEP: Saw pod success
Dec 22 13:28:23.818: INFO: Pod "pod-secrets-bc6f3823-53bb-44b1-b7db-192cd5ebca70" satisfied condition "success or failure"
Dec 22 13:28:23.834: INFO: Trying to get logs from node iruya-node pod pod-secrets-bc6f3823-53bb-44b1-b7db-192cd5ebca70 container secret-volume-test: 
STEP: delete the pod
Dec 22 13:28:24.247: INFO: Waiting for pod pod-secrets-bc6f3823-53bb-44b1-b7db-192cd5ebca70 to disappear
Dec 22 13:28:24.266: INFO: Pod pod-secrets-bc6f3823-53bb-44b1-b7db-192cd5ebca70 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:28:24.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-596" for this suite.
Dec 22 13:28:30.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:28:30.565: INFO: namespace secrets-596 deletion completed in 6.280307874s

• [SLOW TEST:16.965 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
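Reference note: "Item Mode" here is the per-item mode field on a secret volume's items list, which sets the projected file's permissions on top of the key-to-path mapping. Sketch (names illustrative; 0400 is written in the octal form the API docs use):

kubectl create secret generic secret-mode-demo --from-literal=data-1=value-1
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-pod
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret/new-path-data-1 && cat /etc/secret/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-mode-demo
      items:
      - key: data-1
        path: new-path-data-1
        mode: 0400               # per-item file mode ("Item Mode")
EOF
kubectl logs secret-mode-pod     # ls shows -r-------- and cat prints the value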
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:28:30.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 22 13:28:30.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-1364'
Dec 22 13:28:32.791: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 22 13:28:32.791: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Dec 22 13:28:32.819: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Dec 22 13:28:32.820: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Dec 22 13:28:32.869: INFO: scanned /root for discovery docs: 
Dec 22 13:28:32.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-1364'
Dec 22 13:28:55.251: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 22 13:28:55.251: INFO: stdout: "Created e2e-test-nginx-rc-2b04ff2884301b0c9134bc64bb8a32ce\nScaling up e2e-test-nginx-rc-2b04ff2884301b0c9134bc64bb8a32ce from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-2b04ff2884301b0c9134bc64bb8a32ce up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-2b04ff2884301b0c9134bc64bb8a32ce to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Dec 22 13:28:55.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-1364'
Dec 22 13:28:55.414: INFO: stderr: ""
Dec 22 13:28:55.414: INFO: stdout: "e2e-test-nginx-rc-2b04ff2884301b0c9134bc64bb8a32ce-gdsx5 "
Dec 22 13:28:55.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-2b04ff2884301b0c9134bc64bb8a32ce-gdsx5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1364'
Dec 22 13:28:55.521: INFO: stderr: ""
Dec 22 13:28:55.522: INFO: stdout: "true"
Dec 22 13:28:55.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-2b04ff2884301b0c9134bc64bb8a32ce-gdsx5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1364'
Dec 22 13:28:55.621: INFO: stderr: ""
Dec 22 13:28:55.621: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Dec 22 13:28:55.621: INFO: e2e-test-nginx-rc-2b04ff2884301b0c9134bc64bb8a32ce-gdsx5 is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Dec 22 13:28:55.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-1364'
Dec 22 13:28:55.755: INFO: stderr: ""
Dec 22 13:28:55.755: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:28:55.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1364" for this suite.
Dec 22 13:29:01.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:29:01.899: INFO: namespace kubectl-1364 deletion completed in 6.133640357s

• [SLOW TEST:31.333 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
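
The stderr captured above already says it: "rolling-update" is deprecated in favor of "rollout". A minimal sketch of the same image swap driven through a Deployment instead of a bare ReplicationController (the deployment name is ours, not the test's; assumes a reachable cluster and kubectl on PATH):

    # Create a throwaway Deployment and watch it roll out.
    kubectl create deployment e2e-nginx --image=docker.io/library/nginx:1.14-alpine
    kubectl rollout status deployment/e2e-nginx --timeout=120s
    # Rolling to the *same* image is a no-op for a Deployment, so a restart
    # stands in for the "rolling-update to same image" case this spec covers:
    kubectl rollout restart deployment/e2e-nginx   # available from kubectl v1.15
    kubectl rollout status deployment/e2e-nginx --timeout=120s
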
SSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:29:01.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Dec 22 13:29:12.613: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-283 pod-service-account-74c7f779-870a-42c6-8ebf-e99cc6c0328d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Dec 22 13:29:13.084: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-283 pod-service-account-74c7f779-870a-42c6-8ebf-e99cc6c0328d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Dec 22 13:29:13.517: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-283 pod-service-account-74c7f779-870a-42c6-8ebf-e99cc6c0328d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:29:14.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-283" for this suite.
Dec 22 13:29:20.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:29:20.287: INFO: namespace svcaccounts-283 deletion completed in 6.220934125s

• [SLOW TEST:18.388 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
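
The spec above only cats three files from inside the pod. The same token is visible from the API side too; a sketch assuming a v1.15-era cluster, where every ServiceAccount still gets an auto-created token Secret (clusters from v1.24 on no longer populate .secrets):

    # Find the auto-created token Secret of the default ServiceAccount...
    SECRET=$(kubectl get serviceaccount default -o jsonpath='{.secrets[0].name}')
    # ...and decode the same token pods see at
    # /var/run/secrets/kubernetes.io/serviceaccount/token
    kubectl get secret "$SECRET" -o jsonpath='{.data.token}' | base64 -d
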
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:29:20.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 22 13:29:20.424: INFO: Waiting up to 5m0s for pod "pod-33eda857-620b-424b-9c71-7bceac705e1b" in namespace "emptydir-8225" to be "success or failure"
Dec 22 13:29:20.505: INFO: Pod "pod-33eda857-620b-424b-9c71-7bceac705e1b": Phase="Pending", Reason="", readiness=false. Elapsed: 80.80909ms
Dec 22 13:29:22.516: INFO: Pod "pod-33eda857-620b-424b-9c71-7bceac705e1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092080326s
Dec 22 13:29:24.531: INFO: Pod "pod-33eda857-620b-424b-9c71-7bceac705e1b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107164485s
Dec 22 13:29:26.549: INFO: Pod "pod-33eda857-620b-424b-9c71-7bceac705e1b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.124472838s
Dec 22 13:29:28.564: INFO: Pod "pod-33eda857-620b-424b-9c71-7bceac705e1b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.139956287s
Dec 22 13:29:30.577: INFO: Pod "pod-33eda857-620b-424b-9c71-7bceac705e1b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.153268728s
Dec 22 13:29:32.624: INFO: Pod "pod-33eda857-620b-424b-9c71-7bceac705e1b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.200222897s
Dec 22 13:29:34.640: INFO: Pod "pod-33eda857-620b-424b-9c71-7bceac705e1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.215928948s
STEP: Saw pod success
Dec 22 13:29:34.640: INFO: Pod "pod-33eda857-620b-424b-9c71-7bceac705e1b" satisfied condition "success or failure"
Dec 22 13:29:34.648: INFO: Trying to get logs from node iruya-node pod pod-33eda857-620b-424b-9c71-7bceac705e1b container test-container: 
STEP: delete the pod
Dec 22 13:29:34.825: INFO: Waiting for pod pod-33eda857-620b-424b-9c71-7bceac705e1b to disappear
Dec 22 13:29:34.881: INFO: Pod pod-33eda857-620b-424b-9c71-7bceac705e1b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:29:34.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8225" for this suite.
Dec 22 13:29:40.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:29:41.060: INFO: namespace emptydir-8225 deletion completed in 6.161737534s

• [SLOW TEST:20.772 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
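
A hand-rolled version of the fixture this spec creates: a memory-backed emptyDir, a file written as a non-root user, and the mode printed for verification. Pod name, image, and command are ours, not the framework's exact mounttest pod:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-tmpfs-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001                    # the non-root half of (non-root,0777,tmpfs)
      containers:
      - name: test-container
        image: busybox:1.29
        command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume && mount | grep /test-volume"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir:
          medium: Memory                   # tmpfs-backed, the tmpfs half of the case name
    EOF
    kubectl logs emptydir-tmpfs-demo       # once Succeeded: the 0777 mode bits plus a tmpfs mount line
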
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:29:41.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-5743/secret-test-fd2dbb22-dd43-4546-b3b2-ae4118054d61
STEP: Creating a pod to test consume secrets
Dec 22 13:29:41.184: INFO: Waiting up to 5m0s for pod "pod-configmaps-36e5220d-2836-4aa6-9c05-90e73bd2f4ab" in namespace "secrets-5743" to be "success or failure"
Dec 22 13:29:41.193: INFO: Pod "pod-configmaps-36e5220d-2836-4aa6-9c05-90e73bd2f4ab": Phase="Pending", Reason="", readiness=false. Elapsed: 9.247909ms
Dec 22 13:29:43.200: INFO: Pod "pod-configmaps-36e5220d-2836-4aa6-9c05-90e73bd2f4ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016420651s
Dec 22 13:29:45.243: INFO: Pod "pod-configmaps-36e5220d-2836-4aa6-9c05-90e73bd2f4ab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058909328s
Dec 22 13:29:47.253: INFO: Pod "pod-configmaps-36e5220d-2836-4aa6-9c05-90e73bd2f4ab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069372195s
Dec 22 13:29:49.263: INFO: Pod "pod-configmaps-36e5220d-2836-4aa6-9c05-90e73bd2f4ab": Phase="Pending", Reason="", readiness=false. Elapsed: 8.078888968s
Dec 22 13:29:51.273: INFO: Pod "pod-configmaps-36e5220d-2836-4aa6-9c05-90e73bd2f4ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.089232913s
STEP: Saw pod success
Dec 22 13:29:51.273: INFO: Pod "pod-configmaps-36e5220d-2836-4aa6-9c05-90e73bd2f4ab" satisfied condition "success or failure"
Dec 22 13:29:51.279: INFO: Trying to get logs from node iruya-node pod pod-configmaps-36e5220d-2836-4aa6-9c05-90e73bd2f4ab container env-test: 
STEP: delete the pod
Dec 22 13:29:51.376: INFO: Waiting for pod pod-configmaps-36e5220d-2836-4aa6-9c05-90e73bd2f4ab to disappear
Dec 22 13:29:51.382: INFO: Pod pod-configmaps-36e5220d-2836-4aa6-9c05-90e73bd2f4ab no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:29:51.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5743" for this suite.
Dec 22 13:29:57.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:29:57.573: INFO: namespace secrets-5743 deletion completed in 6.183122857s

• [SLOW TEST:16.513 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
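
"Consumable via the environment" means the secret value arrives as an environment variable rather than a file. A minimal reproduction (all names ours):

    kubectl create secret generic env-demo-secret --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-env-demo
    spec:
      restartPolicy: Never
      containers:
      - name: env-test
        image: busybox:1.29
        command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
        env:
        - name: SECRET_DATA
          valueFrom:
            secretKeyRef:
              name: env-demo-secret
              key: data-1
    EOF
    kubectl logs secret-env-demo           # prints SECRET_DATA=value-1 once the pod Succeeds
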
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:29:57.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 22 13:29:57.637: INFO: PodSpec: initContainers in spec.initContainers
Dec 22 13:30:58.873: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-e2d3ba29-e5eb-497c-bb5e-91adafbc8fb1", GenerateName:"", Namespace:"init-container-381", SelfLink:"/api/v1/namespaces/init-container-381/pods/pod-init-e2d3ba29-e5eb-497c-bb5e-91adafbc8fb1", UID:"0022552a-0c59-402e-b2f1-1246af32311a", ResourceVersion:"17641240", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712618197, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"637705430"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-n7ghm", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0000b40c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-n7ghm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-n7ghm", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-n7ghm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001c760e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00206c060), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001c761f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001c76250)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001c76258), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001c7625c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712618197, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, 
v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712618197, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712618197, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712618197, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc0024aa080), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000df20e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000df21c0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://4f45cba8adb7ca3d43f6bca355cd230fa67d75d949898d0a54d709afafda96c7"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0024aa0c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0024aa0a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:30:58.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-381" for this suite.
Dec 22 13:31:21.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:31:21.159: INFO: namespace init-container-381 deletion completed in 22.169751755s

• [SLOW TEST:83.586 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
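
The pod dump above is the payload of this spec: init1 keeps failing (RestartCount:3 and climbing), init2 stays Waiting, and the app container run1 never starts even though RestartPolicy is Always. A stripped-down pod that reproduces exactly that state (pod name ours; the images are the ones the spec uses):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-fail-demo
    spec:
      restartPolicy: Always
      initContainers:
      - name: init1
        image: busybox:1.29
        command: ["/bin/false"]            # always exits 1, so nothing after it ever runs
      - name: init2
        image: busybox:1.29
        command: ["/bin/true"]
      containers:
      - name: run1
        image: k8s.gcr.io/pause:3.1
    EOF
    kubectl get pod init-fail-demo -w      # cycles Init:Error / Init:CrashLoopBackOff; run1 never starts
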
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:31:21.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 22 13:31:21.306: INFO: Waiting up to 5m0s for pod "pod-a9298732-da6f-4fbb-be5b-5aeadb00e424" in namespace "emptydir-2444" to be "success or failure"
Dec 22 13:31:21.316: INFO: Pod "pod-a9298732-da6f-4fbb-be5b-5aeadb00e424": Phase="Pending", Reason="", readiness=false. Elapsed: 9.607183ms
Dec 22 13:31:23.322: INFO: Pod "pod-a9298732-da6f-4fbb-be5b-5aeadb00e424": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015694903s
Dec 22 13:31:25.326: INFO: Pod "pod-a9298732-da6f-4fbb-be5b-5aeadb00e424": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019562353s
Dec 22 13:31:27.336: INFO: Pod "pod-a9298732-da6f-4fbb-be5b-5aeadb00e424": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029137787s
Dec 22 13:31:29.342: INFO: Pod "pod-a9298732-da6f-4fbb-be5b-5aeadb00e424": Phase="Pending", Reason="", readiness=false. Elapsed: 8.035788636s
Dec 22 13:31:31.918: INFO: Pod "pod-a9298732-da6f-4fbb-be5b-5aeadb00e424": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.611550918s
STEP: Saw pod success
Dec 22 13:31:31.918: INFO: Pod "pod-a9298732-da6f-4fbb-be5b-5aeadb00e424" satisfied condition "success or failure"
Dec 22 13:31:31.931: INFO: Trying to get logs from node iruya-node pod pod-a9298732-da6f-4fbb-be5b-5aeadb00e424 container test-container: 
STEP: delete the pod
Dec 22 13:31:32.063: INFO: Waiting for pod pod-a9298732-da6f-4fbb-be5b-5aeadb00e424 to disappear
Dec 22 13:31:32.080: INFO: Pod pod-a9298732-da6f-4fbb-be5b-5aeadb00e424 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:31:32.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2444" for this suite.
Dec 22 13:31:38.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:31:38.175: INFO: namespace emptydir-2444 deletion completed in 6.088173035s

• [SLOW TEST:17.015 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:31:38.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-77f7dea3-0fbb-41fc-b7cf-08c0a9956380 in namespace container-probe-4175
Dec 22 13:31:46.346: INFO: Started pod liveness-77f7dea3-0fbb-41fc-b7cf-08c0a9956380 in namespace container-probe-4175
STEP: checking the pod's current state and verifying that restartCount is present
Dec 22 13:31:46.350: INFO: Initial restart count of pod liveness-77f7dea3-0fbb-41fc-b7cf-08c0a9956380 is 0
Dec 22 13:32:12.833: INFO: Restart count of pod container-probe-4175/liveness-77f7dea3-0fbb-41fc-b7cf-08c0a9956380 is now 1 (26.482763147s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:32:12.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4175" for this suite.
Dec 22 13:32:18.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:32:19.058: INFO: namespace container-probe-4175 deletion completed in 6.182223221s

• [SLOW TEST:40.883 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
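
The restart logged at 26s is the probe doing its job. A self-contained stand-in for the framework's liveness server, built from busybox httpd and a /healthz file that disappears after 20 seconds (entirely our construction, not the image the test runs):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-http-demo
    spec:
      containers:
      - name: liveness
        image: busybox:1.29
        command: ["sh", "-c", "mkdir /www && echo ok > /www/healthz && httpd -f -p 8080 -h /www & sleep 20 && rm /www/healthz && sleep 600"]
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 3
          failureThreshold: 1
    EOF
    kubectl get pod liveness-http-demo -w  # RESTARTS starts climbing once /healthz is gone
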
SSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:32:19.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 22 13:32:19.212: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 22.470617ms)
Dec 22 13:32:19.221: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.647646ms)
Dec 22 13:32:19.232: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.985047ms)
Dec 22 13:32:19.241: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.12011ms)
Dec 22 13:32:19.252: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.070775ms)
Dec 22 13:32:19.263: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.41267ms)
Dec 22 13:32:19.279: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.689526ms)
Dec 22 13:32:19.317: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 37.404095ms)
Dec 22 13:32:19.329: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.817215ms)
Dec 22 13:32:19.348: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 19.498405ms)
Dec 22 13:32:19.356: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.274981ms)
Dec 22 13:32:19.366: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.998499ms)
Dec 22 13:32:19.382: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.740665ms)
Dec 22 13:32:19.392: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.562066ms)
Dec 22 13:32:19.401: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.165233ms)
Dec 22 13:32:19.409: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.362489ms)
Dec 22 13:32:19.420: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.227802ms)
Dec 22 13:32:19.436: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.585568ms)
Dec 22 13:32:19.450: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.183676ms)
Dec 22 13:32:19.465: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.812616ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:32:19.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6599" for this suite.
Dec 22 13:32:25.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:32:25.675: INFO: namespace proxy-6599 deletion completed in 6.204813481s

• [SLOW TEST:6.617 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
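
Each numbered line above is one GET through the apiserver's node proxy subresource; the body is the kubelet's /var/log directory listing, which the test truncates. The same endpoint can be hit by hand:

    # The directory listing the spec asserts on (node name taken from this run):
    kubectl get --raw /api/v1/nodes/iruya-node/proxy/logs/
    # Individual files come back the same way, e.g. the alternatives.log
    # visible in the truncated listings above:
    kubectl get --raw /api/v1/nodes/iruya-node/proxy/logs/alternatives.log
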
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:32:25.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 22 13:32:25.871: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e1aeacf5-3e08-4694-b6ef-9d1405565e93" in namespace "projected-2348" to be "success or failure"
Dec 22 13:32:25.879: INFO: Pod "downwardapi-volume-e1aeacf5-3e08-4694-b6ef-9d1405565e93": Phase="Pending", Reason="", readiness=false. Elapsed: 8.629035ms
Dec 22 13:32:27.920: INFO: Pod "downwardapi-volume-e1aeacf5-3e08-4694-b6ef-9d1405565e93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049560922s
Dec 22 13:32:30.536: INFO: Pod "downwardapi-volume-e1aeacf5-3e08-4694-b6ef-9d1405565e93": Phase="Pending", Reason="", readiness=false. Elapsed: 4.665607961s
Dec 22 13:32:32.555: INFO: Pod "downwardapi-volume-e1aeacf5-3e08-4694-b6ef-9d1405565e93": Phase="Pending", Reason="", readiness=false. Elapsed: 6.684038057s
Dec 22 13:32:34.564: INFO: Pod "downwardapi-volume-e1aeacf5-3e08-4694-b6ef-9d1405565e93": Phase="Pending", Reason="", readiness=false. Elapsed: 8.693485364s
Dec 22 13:32:36.579: INFO: Pod "downwardapi-volume-e1aeacf5-3e08-4694-b6ef-9d1405565e93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.707820533s
STEP: Saw pod success
Dec 22 13:32:36.579: INFO: Pod "downwardapi-volume-e1aeacf5-3e08-4694-b6ef-9d1405565e93" satisfied condition "success or failure"
Dec 22 13:32:36.588: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e1aeacf5-3e08-4694-b6ef-9d1405565e93 container client-container: 
STEP: delete the pod
Dec 22 13:32:36.830: INFO: Waiting for pod downwardapi-volume-e1aeacf5-3e08-4694-b6ef-9d1405565e93 to disappear
Dec 22 13:32:36.846: INFO: Pod downwardapi-volume-e1aeacf5-3e08-4694-b6ef-9d1405565e93 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:32:36.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2348" for this suite.
Dec 22 13:32:42.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:32:43.024: INFO: namespace projected-2348 deletion completed in 6.167432997s

• [SLOW TEST:17.349 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
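
DefaultMode is the permission bits stamped on every file a projected volume writes. A pod that projects its own name at mode 0400 and prints the result (names ours):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-downward-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox:1.29
        command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname && cat /etc/podinfo/podname"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          defaultMode: 0400                # the mode the spec asserts on the projected files
          sources:
          - downwardAPI:
              items:
              - path: podname
                fieldRef:
                  fieldPath: metadata.name
    EOF
    kubectl logs projected-downward-demo   # "400", then the pod's own name
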
SSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:32:43.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-7342
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7342 to expose endpoints map[]
Dec 22 13:32:43.232: INFO: Get endpoints failed (8.253191ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Dec 22 13:32:44.242: INFO: successfully validated that service multi-endpoint-test in namespace services-7342 exposes endpoints map[] (1.017558887s elapsed)
STEP: Creating pod pod1 in namespace services-7342
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7342 to expose endpoints map[pod1:[100]]
Dec 22 13:32:48.356: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.100525571s elapsed, will retry)
Dec 22 13:32:51.391: INFO: successfully validated that service multi-endpoint-test in namespace services-7342 exposes endpoints map[pod1:[100]] (7.135782936s elapsed)
STEP: Creating pod pod2 in namespace services-7342
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7342 to expose endpoints map[pod1:[100] pod2:[101]]
Dec 22 13:32:57.345: INFO: Unexpected endpoints: found map[208c2f35-ba8b-4dc7-a026-51fb0d6dcc16:[100]], expected map[pod1:[100] pod2:[101]] (5.937738604s elapsed, will retry)
Dec 22 13:32:59.498: INFO: successfully validated that service multi-endpoint-test in namespace services-7342 exposes endpoints map[pod1:[100] pod2:[101]] (8.091058216s elapsed)
STEP: Deleting pod pod1 in namespace services-7342
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7342 to expose endpoints map[pod2:[101]]
Dec 22 13:32:59.574: INFO: successfully validated that service multi-endpoint-test in namespace services-7342 exposes endpoints map[pod2:[101]] (46.405429ms elapsed)
STEP: Deleting pod pod2 in namespace services-7342
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7342 to expose endpoints map[]
Dec 22 13:32:59.641: INFO: successfully validated that service multi-endpoint-test in namespace services-7342 exposes endpoints map[] (46.652249ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:32:59.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7342" for this suite.
Dec 22 13:33:21.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:33:21.833: INFO: namespace services-7342 deletion completed in 22.135559707s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:38.809 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
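
The maps being validated (map[pod1:[100] pod2:[101]]) are pod name to container port pairs read back from the service's Endpoints object. A hand-rolled equivalent with one multiport Service and one labeled pod (all names ours):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: multi-endpoint-demo
    spec:
      selector:
        app: multiport-demo
      ports:
      - name: portname1
        port: 80
        targetPort: 80
      - name: portname2
        port: 81
        targetPort: 81
    EOF
    # Any running pod carrying the selector label lands in the endpoints:
    kubectl run multiport-pod --image=docker.io/library/nginx:1.14-alpine \
      --labels=app=multiport-demo --restart=Never
    kubectl get endpoints multi-endpoint-demo -o yaml   # one address, two named ports
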
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:33:21.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 22 13:33:21.890: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 22 13:33:21.942: INFO: Waiting for terminating namespaces to be deleted...
Dec 22 13:33:21.949: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Dec 22 13:33:21.965: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Dec 22 13:33:21.965: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 22 13:33:21.965: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 22 13:33:21.965: INFO: 	Container weave ready: true, restart count 0
Dec 22 13:33:21.965: INFO: 	Container weave-npc ready: true, restart count 0
Dec 22 13:33:21.965: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Dec 22 13:33:21.976: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Dec 22 13:33:21.976: INFO: 	Container kube-controller-manager ready: true, restart count 10
Dec 22 13:33:21.976: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Dec 22 13:33:21.976: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 22 13:33:21.976: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Dec 22 13:33:21.976: INFO: 	Container kube-apiserver ready: true, restart count 0
Dec 22 13:33:21.976: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Dec 22 13:33:21.976: INFO: 	Container kube-scheduler ready: true, restart count 7
Dec 22 13:33:21.976: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 22 13:33:21.976: INFO: 	Container coredns ready: true, restart count 0
Dec 22 13:33:21.976: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Dec 22 13:33:21.976: INFO: 	Container etcd ready: true, restart count 0
Dec 22 13:33:21.976: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 22 13:33:21.976: INFO: 	Container weave ready: true, restart count 0
Dec 22 13:33:21.976: INFO: 	Container weave-npc ready: true, restart count 0
Dec 22 13:33:21.976: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 22 13:33:21.976: INFO: 	Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Dec 22 13:33:22.172: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Dec 22 13:33:22.172: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Dec 22 13:33:22.172: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Dec 22 13:33:22.172: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Dec 22 13:33:22.172: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Dec 22 13:33:22.172: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Dec 22 13:33:22.172: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Dec 22 13:33:22.172: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Dec 22 13:33:22.172: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Dec 22 13:33:22.172: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-66cd25ad-5fad-4f07-b69f-f43cda4569d7.15e2b4cab40310d0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8538/filler-pod-66cd25ad-5fad-4f07-b69f-f43cda4569d7 to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-66cd25ad-5fad-4f07-b69f-f43cda4569d7.15e2b4cbdfa16438], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-66cd25ad-5fad-4f07-b69f-f43cda4569d7.15e2b4ccb1c040b4], Reason = [Created], Message = [Created container filler-pod-66cd25ad-5fad-4f07-b69f-f43cda4569d7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-66cd25ad-5fad-4f07-b69f-f43cda4569d7.15e2b4ccd996f095], Reason = [Started], Message = [Started container filler-pod-66cd25ad-5fad-4f07-b69f-f43cda4569d7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b3c5c3f3-6d58-4f12-a971-ea3219c9a316.15e2b4cab592348f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8538/filler-pod-b3c5c3f3-6d58-4f12-a971-ea3219c9a316 to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b3c5c3f3-6d58-4f12-a971-ea3219c9a316.15e2b4cbe9f2673d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b3c5c3f3-6d58-4f12-a971-ea3219c9a316.15e2b4cccb6e579b], Reason = [Created], Message = [Created container filler-pod-b3c5c3f3-6d58-4f12-a971-ea3219c9a316]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b3c5c3f3-6d58-4f12-a971-ea3219c9a316.15e2b4cce90df8c0], Reason = [Started], Message = [Started container filler-pod-b3c5c3f3-6d58-4f12-a971-ea3219c9a316]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e2b4cd83c3c6f1], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:33:35.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8538" for this suite.
Dec 22 13:33:45.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:33:46.477: INFO: namespace sched-pred-8538 deletion completed in 10.865019762s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:24.643 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
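
The pattern above: read each node's remaining allocatable CPU, saturate it with filler pods, then show that one more CPU-hungry pod draws a FailedScheduling event with "2 Insufficient cpu". Just the failure half, by hand (flags per v1.15-era kubectl, where run still accepted --requests):

    kubectl describe nodes | grep -A 5 'Allocated resources'   # what is already spoken for
    kubectl run cpu-hog --image=k8s.gcr.io/pause:3.1 --restart=Never \
      --requests='cpu=1000'                # deliberately more CPU than any node offers
    kubectl get events --field-selector reason=FailedScheduling
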
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:33:46.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1222 13:34:27.023515       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 22 13:34:27.023: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:34:27.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3377" for this suite.
Dec 22 13:34:41.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:34:44.943: INFO: namespace gc-3377 deletion completed in 17.914882181s

• [SLOW TEST:58.465 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
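
"Delete options say so" means the DELETE carried an Orphan propagation policy, so the garbage collector leaves the RC's pods alone; the 30-second watch above is the test confirming nothing disappears. Via kubectl, against a hypothetical RC:

    # --cascade=orphan on current clients; v1.15-era kubectl spelled it --cascade=false
    kubectl delete rc my-rc --cascade=orphan
    kubectl get pods -l name=my-rc         # label hypothetical; the pods outlive their RC
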
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:34:44.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 22 13:34:45.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7840'
Dec 22 13:34:46.118: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 22 13:34:46.118: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Dec 22 13:34:46.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-7840'
Dec 22 13:34:46.298: INFO: stderr: ""
Dec 22 13:34:46.298: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:34:46.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7840" for this suite.
Dec 22 13:35:08.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:35:08.502: INFO: namespace kubectl-7840 deletion completed in 22.197501922s

• [SLOW TEST:23.558 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
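
Once more the stderr line carries the real lesson: --generator=job/v1 is deprecated. The supported spelling of the same step:

    kubectl create job e2e-nginx-job --image=docker.io/library/nginx:1.14-alpine
    kubectl get job e2e-nginx-job          # verify it was created, as the spec does
    kubectl delete job e2e-nginx-job
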
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:35:08.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-78ce9c06-7be5-4eb1-b537-1f9404cfae2e
STEP: Creating a pod to test consume secrets
Dec 22 13:35:08.805: INFO: Waiting up to 5m0s for pod "pod-secrets-2789e876-78cb-48da-b204-f122245ec000" in namespace "secrets-5531" to be "success or failure"
Dec 22 13:35:08.824: INFO: Pod "pod-secrets-2789e876-78cb-48da-b204-f122245ec000": Phase="Pending", Reason="", readiness=false. Elapsed: 19.006185ms
Dec 22 13:35:10.837: INFO: Pod "pod-secrets-2789e876-78cb-48da-b204-f122245ec000": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031572421s
Dec 22 13:35:12.913: INFO: Pod "pod-secrets-2789e876-78cb-48da-b204-f122245ec000": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107927828s
Dec 22 13:35:14.944: INFO: Pod "pod-secrets-2789e876-78cb-48da-b204-f122245ec000": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138739235s
Dec 22 13:35:17.412: INFO: Pod "pod-secrets-2789e876-78cb-48da-b204-f122245ec000": Phase="Pending", Reason="", readiness=false. Elapsed: 8.607196269s
Dec 22 13:35:19.420: INFO: Pod "pod-secrets-2789e876-78cb-48da-b204-f122245ec000": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.614970093s
STEP: Saw pod success
Dec 22 13:35:19.420: INFO: Pod "pod-secrets-2789e876-78cb-48da-b204-f122245ec000" satisfied condition "success or failure"
Dec 22 13:35:19.425: INFO: Trying to get logs from node iruya-node pod pod-secrets-2789e876-78cb-48da-b204-f122245ec000 container secret-volume-test: 
STEP: delete the pod
Dec 22 13:35:19.546: INFO: Waiting for pod pod-secrets-2789e876-78cb-48da-b204-f122245ec000 to disappear
Dec 22 13:35:19.553: INFO: Pod pod-secrets-2789e876-78cb-48da-b204-f122245ec000 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:35:19.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5531" for this suite.
Dec 22 13:35:25.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:35:25.721: INFO: namespace secrets-5531 deletion completed in 6.162386367s

• [SLOW TEST:17.218 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
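
"Multiple volumes" here means the same Secret mounted at two different paths inside one pod. A minimal version (names ours):

    kubectl create secret generic multi-vol-secret --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-multivol-demo
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox:1.29
        command: ["sh", "-c", "cat /etc/secret-1/data-1 /etc/secret-2/data-1"]
        volumeMounts:
        - name: vol-1
          mountPath: /etc/secret-1
          readOnly: true
        - name: vol-2
          mountPath: /etc/secret-2
          readOnly: true
      volumes:
      - name: vol-1
        secret:
          secretName: multi-vol-secret
      - name: vol-2
        secret:
          secretName: multi-vol-secret
    EOF
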
S
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:35:25.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-6897/configmap-test-3910069c-0e5e-4a76-b9ee-3babc0bc21a0
STEP: Creating a pod to test consume configMaps
Dec 22 13:35:25.861: INFO: Waiting up to 5m0s for pod "pod-configmaps-9a3a5fa5-290f-4efa-9fc9-232b3908def4" in namespace "configmap-6897" to be "success or failure"
Dec 22 13:35:25.871: INFO: Pod "pod-configmaps-9a3a5fa5-290f-4efa-9fc9-232b3908def4": Phase="Pending", Reason="", readiness=false. Elapsed: 9.282191ms
Dec 22 13:35:27.876: INFO: Pod "pod-configmaps-9a3a5fa5-290f-4efa-9fc9-232b3908def4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014617132s
Dec 22 13:35:29.905: INFO: Pod "pod-configmaps-9a3a5fa5-290f-4efa-9fc9-232b3908def4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043421326s
Dec 22 13:35:31.913: INFO: Pod "pod-configmaps-9a3a5fa5-290f-4efa-9fc9-232b3908def4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051463626s
Dec 22 13:35:33.924: INFO: Pod "pod-configmaps-9a3a5fa5-290f-4efa-9fc9-232b3908def4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062194581s
Dec 22 13:35:35.931: INFO: Pod "pod-configmaps-9a3a5fa5-290f-4efa-9fc9-232b3908def4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069414931s
STEP: Saw pod success
Dec 22 13:35:35.931: INFO: Pod "pod-configmaps-9a3a5fa5-290f-4efa-9fc9-232b3908def4" satisfied condition "success or failure"
Dec 22 13:35:35.934: INFO: Trying to get logs from node iruya-node pod pod-configmaps-9a3a5fa5-290f-4efa-9fc9-232b3908def4 container env-test: 
STEP: delete the pod
Dec 22 13:35:35.979: INFO: Waiting for pod pod-configmaps-9a3a5fa5-290f-4efa-9fc9-232b3908def4 to disappear
Dec 22 13:35:36.012: INFO: Pod pod-configmaps-9a3a5fa5-290f-4efa-9fc9-232b3908def4 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:35:36.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6897" for this suite.
Dec 22 13:35:42.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:35:42.227: INFO: namespace configmap-6897 deletion completed in 6.176757719s

• [SLOW TEST:16.506 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
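[Editor's note] "Consumable via the environment" means a ConfigMap key is injected as an environment variable; the env-test container prints its environment and exits. A minimal sketch with hypothetical names:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-example
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox                   # assumed; any image with a shell works
    command: ["sh", "-c", "env"]
    env:
    - name: DATA_1                   # populated from the ConfigMap key
      valueFrom:
        configMapKeyRef:
          name: configmap-test-example
          key: data-1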
S
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:35:42.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 22 13:35:42.417: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Dec 22 13:35:47.426: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 22 13:35:51.472: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 22 13:35:51.534: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-2816,SelfLink:/apis/apps/v1/namespaces/deployment-2816/deployments/test-cleanup-deployment,UID:a4e6a984-e235-4011-9361-1cce4d250cc2,ResourceVersion:17642099,Generation:1,CreationTimestamp:2019-12-22 13:35:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Dec 22 13:35:51.559: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-2816,SelfLink:/apis/apps/v1/namespaces/deployment-2816/replicasets/test-cleanup-deployment-55bbcbc84c,UID:827d82ed-a8f2-4efa-bd52-ff0ce40c5846,ResourceVersion:17642101,Generation:1,CreationTimestamp:2019-12-22 13:35:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment a4e6a984-e235-4011-9361-1cce4d250cc2 0xc000aa6dd7 0xc000aa6dd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 22 13:35:51.559: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Dec 22 13:35:51.559: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-2816,SelfLink:/apis/apps/v1/namespaces/deployment-2816/replicasets/test-cleanup-controller,UID:62a68be3-b22b-412d-8c50-bf8750d47d73,ResourceVersion:17642100,Generation:1,CreationTimestamp:2019-12-22 13:35:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment a4e6a984-e235-4011-9361-1cce4d250cc2 0xc000aa6cf7 0xc000aa6cf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 22 13:35:51.714: INFO: Pod "test-cleanup-controller-6lr2z" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-6lr2z,GenerateName:test-cleanup-controller-,Namespace:deployment-2816,SelfLink:/api/v1/namespaces/deployment-2816/pods/test-cleanup-controller-6lr2z,UID:72771a6f-8bc4-4b6e-a0e5-7baccb6decab,ResourceVersion:17642095,Generation:0,CreationTimestamp:2019-12-22 13:35:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 62a68be3-b22b-412d-8c50-bf8750d47d73 0xc000aa7987 0xc000aa7988}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lc9w4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lc9w4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lc9w4 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000aa7aa0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000aa7ac0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:35:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:35:50 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:35:50 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:35:42 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-22 13:35:42 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-22 13:35:49 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9552b1d39384ea3c032b4e80f5d6570380e8147d0e4685ece2b76e63e5d18407}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 22 13:35:51.714: INFO: Pod "test-cleanup-deployment-55bbcbc84c-vk9rk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-vk9rk,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-2816,SelfLink:/api/v1/namespaces/deployment-2816/pods/test-cleanup-deployment-55bbcbc84c-vk9rk,UID:5786b691-0b6f-475f-942c-ffae42e6c8c7,ResourceVersion:17642107,Generation:0,CreationTimestamp:2019-12-22 13:35:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 827d82ed-a8f2-4efa-bd52-ff0ce40c5846 0xc000aa7cb7 0xc000aa7cb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lc9w4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lc9w4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-lc9w4 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000aa7da0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000aa7dc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:35:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:35:51.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2816" for this suite.
Dec 22 13:35:59.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:36:00.050: INFO: namespace deployment-2816 deletion completed in 8.325797282s

• [SLOW TEST:17.822 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
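[Editor's note] The mechanism exercised here is revisionHistoryLimit: the Deployment dump above shows RevisionHistoryLimit:*0, so once the new ReplicaSet exists, the adopted test-cleanup-controller ReplicaSet must be deleted rather than kept as rollout history. Reconstructed from the fields in the dump:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
spec:
  replicas: 1
  revisionHistoryLimit: 0        # old ReplicaSets are garbage-collected immediately
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0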
SSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:36:00.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Dec 22 13:36:00.204: INFO: Waiting up to 5m0s for pod "client-containers-44439ebd-d999-45de-8ea1-f485483abd76" in namespace "containers-4481" to be "success or failure"
Dec 22 13:36:00.219: INFO: Pod "client-containers-44439ebd-d999-45de-8ea1-f485483abd76": Phase="Pending", Reason="", readiness=false. Elapsed: 15.453716ms
Dec 22 13:36:02.231: INFO: Pod "client-containers-44439ebd-d999-45de-8ea1-f485483abd76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026843245s
Dec 22 13:36:04.238: INFO: Pod "client-containers-44439ebd-d999-45de-8ea1-f485483abd76": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034403345s
Dec 22 13:36:06.402: INFO: Pod "client-containers-44439ebd-d999-45de-8ea1-f485483abd76": Phase="Pending", Reason="", readiness=false. Elapsed: 6.198401592s
Dec 22 13:36:08.416: INFO: Pod "client-containers-44439ebd-d999-45de-8ea1-f485483abd76": Phase="Pending", Reason="", readiness=false. Elapsed: 8.212256879s
Dec 22 13:36:10.427: INFO: Pod "client-containers-44439ebd-d999-45de-8ea1-f485483abd76": Phase="Pending", Reason="", readiness=false. Elapsed: 10.222892885s
Dec 22 13:36:12.435: INFO: Pod "client-containers-44439ebd-d999-45de-8ea1-f485483abd76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.231320804s
STEP: Saw pod success
Dec 22 13:36:12.435: INFO: Pod "client-containers-44439ebd-d999-45de-8ea1-f485483abd76" satisfied condition "success or failure"
Dec 22 13:36:12.441: INFO: Trying to get logs from node iruya-node pod client-containers-44439ebd-d999-45de-8ea1-f485483abd76 container test-container: 
STEP: delete the pod
Dec 22 13:36:12.508: INFO: Waiting for pod client-containers-44439ebd-d999-45de-8ea1-f485483abd76 to disappear
Dec 22 13:36:12.523: INFO: Pod client-containers-44439ebd-d999-45de-8ea1-f485483abd76 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:36:12.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4481" for this suite.
Dec 22 13:36:18.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:36:18.706: INFO: namespace containers-4481 deletion completed in 6.170679487s

• [SLOW TEST:18.656 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
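[Editor's note] The pod here deliberately leaves command and args unset, so the image's own ENTRYPOINT and CMD apply unchanged. A minimal sketch; the image name is an assumption:

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0   # assumed test image
    # no command: or args: keys, so the image defaults are used verbatim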
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:36:18.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 22 13:36:18.961: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e2737a8b-1e76-4641-a3ae-6aa7ba775977" in namespace "projected-3415" to be "success or failure"
Dec 22 13:36:19.007: INFO: Pod "downwardapi-volume-e2737a8b-1e76-4641-a3ae-6aa7ba775977": Phase="Pending", Reason="", readiness=false. Elapsed: 45.235924ms
Dec 22 13:36:21.016: INFO: Pod "downwardapi-volume-e2737a8b-1e76-4641-a3ae-6aa7ba775977": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054147199s
Dec 22 13:36:23.058: INFO: Pod "downwardapi-volume-e2737a8b-1e76-4641-a3ae-6aa7ba775977": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096462968s
Dec 22 13:36:25.362: INFO: Pod "downwardapi-volume-e2737a8b-1e76-4641-a3ae-6aa7ba775977": Phase="Pending", Reason="", readiness=false. Elapsed: 6.40084878s
Dec 22 13:36:27.374: INFO: Pod "downwardapi-volume-e2737a8b-1e76-4641-a3ae-6aa7ba775977": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.412246507s
STEP: Saw pod success
Dec 22 13:36:27.374: INFO: Pod "downwardapi-volume-e2737a8b-1e76-4641-a3ae-6aa7ba775977" satisfied condition "success or failure"
Dec 22 13:36:27.379: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e2737a8b-1e76-4641-a3ae-6aa7ba775977 container client-container: 
STEP: delete the pod
Dec 22 13:36:27.469: INFO: Waiting for pod downwardapi-volume-e2737a8b-1e76-4641-a3ae-6aa7ba775977 to disappear
Dec 22 13:36:27.540: INFO: Pod downwardapi-volume-e2737a8b-1e76-4641-a3ae-6aa7ba775977 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:36:27.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3415" for this suite.
Dec 22 13:36:33.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:36:33.828: INFO: namespace projected-3415 deletion completed in 6.280881705s

• [SLOW TEST:15.121 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
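[Editor's note] "Podname only" means a projected volume whose single downwardAPI item maps metadata.name to a file, which the container then reads back. A minimal sketch; image and args are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed
    args: ["--file_content=/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name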
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:36:33.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 22 13:36:43.287: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:36:43.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6373" for this suite.
Dec 22 13:36:49.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:36:49.512: INFO: namespace container-runtime-6373 deletion completed in 6.185561363s

• [SLOW TEST:15.684 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
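[Editor's note] The expected message DONE in the log above comes from the container writing to a custom terminationMessagePath while running as a non-root user. A minimal sketch of that setup; names, image, and the UID are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                    # non-root, per the [LinuxOnly] clause
  containers:
  - name: termination-message-container
    image: busybox                     # assumed
    command: ["sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log   # non-default path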
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:36:49.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W1222 13:36:59.721433       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 22 13:36:59.721: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:36:59.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4041" for this suite.
Dec 22 13:37:05.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:37:05.922: INFO: namespace gc-4041 deletion completed in 6.194537481s

• [SLOW TEST:16.409 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
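[Editor's note] The controller under test owns its pods through ownerReferences, and deleting it without the Orphan propagation policy is what lets the garbage collector remove them. A minimal sketch of the shape involved; the name and image are assumptions:

apiVersion: v1
kind: ReplicationController
metadata:
  name: simpletest-rc-example
spec:
  replicas: 2
  selector:
    name: simpletest
  template:
    metadata:
      labels:
        name: simpletest
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
# Deleting with propagationPolicy Background (the default) or Foreground, rather
# than Orphan, leaves the dependent pods to the garbage collector; that is the
# "wait for all pods to be garbage collected" step above.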
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:37:05.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 22 13:37:06.075: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:37:20.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7328" for this suite.
Dec 22 13:37:26.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:37:26.697: INFO: namespace init-container-7328 deletion completed in 6.165978619s

• [SLOW TEST:20.775 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
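[Editor's note] The "PodSpec: initContainers in spec.initContainers" line above refers to a pod like the sketch below: on a restartPolicy: Never pod, each init container runs to completion, in order, before the app container starts. Names and images are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example
spec:
  restartPolicy: Never
  initContainers:                  # run sequentially; each must exit 0
  - name: init1
    image: busybox
    command: ["/bin/true"]
  - name: init2
    image: busybox
    command: ["/bin/true"]
  containers:
  - name: run1
    image: busybox
    command: ["/bin/true"]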
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:37:26.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-2d3a7651-7ff0-4167-9709-87756685fa5b
STEP: Creating secret with name secret-projected-all-test-volume-69180db5-8d5a-4baf-a27f-17d529d03a71
STEP: Creating a pod to test Check all projections for projected volume plugin
Dec 22 13:37:26.892: INFO: Waiting up to 5m0s for pod "projected-volume-b48e500a-7285-48db-bed2-89ba650dbd40" in namespace "projected-4949" to be "success or failure"
Dec 22 13:37:26.901: INFO: Pod "projected-volume-b48e500a-7285-48db-bed2-89ba650dbd40": Phase="Pending", Reason="", readiness=false. Elapsed: 9.538029ms
Dec 22 13:37:28.909: INFO: Pod "projected-volume-b48e500a-7285-48db-bed2-89ba650dbd40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017306903s
Dec 22 13:37:30.931: INFO: Pod "projected-volume-b48e500a-7285-48db-bed2-89ba650dbd40": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039402815s
Dec 22 13:37:32.950: INFO: Pod "projected-volume-b48e500a-7285-48db-bed2-89ba650dbd40": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058410212s
Dec 22 13:37:34.957: INFO: Pod "projected-volume-b48e500a-7285-48db-bed2-89ba650dbd40": Phase="Running", Reason="", readiness=true. Elapsed: 8.065007725s
Dec 22 13:37:36.967: INFO: Pod "projected-volume-b48e500a-7285-48db-bed2-89ba650dbd40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.074778495s
STEP: Saw pod success
Dec 22 13:37:36.967: INFO: Pod "projected-volume-b48e500a-7285-48db-bed2-89ba650dbd40" satisfied condition "success or failure"
Dec 22 13:37:36.971: INFO: Trying to get logs from node iruya-node pod projected-volume-b48e500a-7285-48db-bed2-89ba650dbd40 container projected-all-volume-test: 
STEP: delete the pod
Dec 22 13:37:37.083: INFO: Waiting for pod projected-volume-b48e500a-7285-48db-bed2-89ba650dbd40 to disappear
Dec 22 13:37:37.089: INFO: Pod projected-volume-b48e500a-7285-48db-bed2-89ba650dbd40 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:37:37.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4949" for this suite.
Dec 22 13:37:43.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:37:43.244: INFO: namespace projected-4949 deletion completed in 6.147310282s

• [SLOW TEST:16.545 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
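[Editor's note] The three sources created above (a ConfigMap, a Secret, and downward API data) all land in one projected volume, which is the point of the test. A minimal sketch with hypothetical names, image, and keys:

apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox                   # assumed
    command: ["sh", "-c", "cat /all/podname /all/cm-data /all/secret-data"]
    volumeMounts:
    - name: podinfo
      mountPath: /all
  volumes:
  - name: podinfo
    projected:
      sources:                       # all three APIs in a single volume
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - configMap:
          name: configmap-projected-all-test-volume-example
          items:
          - key: configmap-data
            path: cm-data
      - secret:
          name: secret-projected-all-test-volume-example
          items:
          - key: secret-data
            path: secret-data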
SSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:37:43.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Dec 22 13:37:43.409: INFO: Waiting up to 5m0s for pod "var-expansion-cb0a93a2-7948-41b9-a3a9-587f0a1e9da0" in namespace "var-expansion-9597" to be "success or failure"
Dec 22 13:37:43.474: INFO: Pod "var-expansion-cb0a93a2-7948-41b9-a3a9-587f0a1e9da0": Phase="Pending", Reason="", readiness=false. Elapsed: 65.463623ms
Dec 22 13:37:45.484: INFO: Pod "var-expansion-cb0a93a2-7948-41b9-a3a9-587f0a1e9da0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075034249s
Dec 22 13:37:47.491: INFO: Pod "var-expansion-cb0a93a2-7948-41b9-a3a9-587f0a1e9da0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081660119s
Dec 22 13:37:49.499: INFO: Pod "var-expansion-cb0a93a2-7948-41b9-a3a9-587f0a1e9da0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089630807s
Dec 22 13:37:51.504: INFO: Pod "var-expansion-cb0a93a2-7948-41b9-a3a9-587f0a1e9da0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09453287s
Dec 22 13:37:53.521: INFO: Pod "var-expansion-cb0a93a2-7948-41b9-a3a9-587f0a1e9da0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.111792657s
STEP: Saw pod success
Dec 22 13:37:53.521: INFO: Pod "var-expansion-cb0a93a2-7948-41b9-a3a9-587f0a1e9da0" satisfied condition "success or failure"
Dec 22 13:37:53.525: INFO: Trying to get logs from node iruya-node pod var-expansion-cb0a93a2-7948-41b9-a3a9-587f0a1e9da0 container dapi-container: 
STEP: delete the pod
Dec 22 13:37:53.604: INFO: Waiting for pod var-expansion-cb0a93a2-7948-41b9-a3a9-587f0a1e9da0 to disappear
Dec 22 13:37:53.610: INFO: Pod var-expansion-cb0a93a2-7948-41b9-a3a9-587f0a1e9da0 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:37:53.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9597" for this suite.
Dec 22 13:37:59.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:37:59.780: INFO: namespace var-expansion-9597 deletion completed in 6.164748504s

• [SLOW TEST:16.536 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
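[Editor's note] The substitution here is done by the kubelet, not a shell: $(VAR) references in command and args are expanded from the container's env before the process starts. A minimal sketch; names and values are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["echo", "$(MY_VAR)"]   # expanded by the kubelet, no shell involved
    env:
    - name: MY_VAR
      value: test-value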
SSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:37:59.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-a86df1a8-b6ad-4cef-9f98-820bb0381ddc
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:37:59.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9487" for this suite.
Dec 22 13:38:05.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:38:06.072: INFO: namespace configmap-9487 deletion completed in 6.134963489s

• [SLOW TEST:6.292 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
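[Editor's note] This is a negative test: API validation rejects a ConfigMap whose data map contains an empty key, so the create call fails and no object is ever stored. Roughly the rejected shape:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-emptykey-example
data:
  "": value-1        # an empty key fails validation; the API server rejects this create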
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:38:06.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 22 13:38:06.156: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b54ed90c-fb1f-4764-983b-7826022e5035" in namespace "downward-api-3192" to be "success or failure"
Dec 22 13:38:06.261: INFO: Pod "downwardapi-volume-b54ed90c-fb1f-4764-983b-7826022e5035": Phase="Pending", Reason="", readiness=false. Elapsed: 104.917217ms
Dec 22 13:38:08.279: INFO: Pod "downwardapi-volume-b54ed90c-fb1f-4764-983b-7826022e5035": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122642073s
Dec 22 13:38:10.316: INFO: Pod "downwardapi-volume-b54ed90c-fb1f-4764-983b-7826022e5035": Phase="Pending", Reason="", readiness=false. Elapsed: 4.160405059s
Dec 22 13:38:12.322: INFO: Pod "downwardapi-volume-b54ed90c-fb1f-4764-983b-7826022e5035": Phase="Pending", Reason="", readiness=false. Elapsed: 6.166462202s
Dec 22 13:38:15.183: INFO: Pod "downwardapi-volume-b54ed90c-fb1f-4764-983b-7826022e5035": Phase="Pending", Reason="", readiness=false. Elapsed: 9.027236202s
Dec 22 13:38:17.190: INFO: Pod "downwardapi-volume-b54ed90c-fb1f-4764-983b-7826022e5035": Phase="Pending", Reason="", readiness=false. Elapsed: 11.034190204s
Dec 22 13:38:19.203: INFO: Pod "downwardapi-volume-b54ed90c-fb1f-4764-983b-7826022e5035": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.047376292s
STEP: Saw pod success
Dec 22 13:38:19.203: INFO: Pod "downwardapi-volume-b54ed90c-fb1f-4764-983b-7826022e5035" satisfied condition "success or failure"
Dec 22 13:38:19.209: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b54ed90c-fb1f-4764-983b-7826022e5035 container client-container: 
STEP: delete the pod
Dec 22 13:38:19.263: INFO: Waiting for pod downwardapi-volume-b54ed90c-fb1f-4764-983b-7826022e5035 to disappear
Dec 22 13:38:19.354: INFO: Pod downwardapi-volume-b54ed90c-fb1f-4764-983b-7826022e5035 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:38:19.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3192" for this suite.
Dec 22 13:38:25.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:38:25.625: INFO: namespace downward-api-3192 deletion completed in 6.261838405s

• [SLOW TEST:19.552 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
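[Editor's note] The downward API volume can expose container resource fields, not just metadata; here limits.memory is written to a file for the container to read back. A minimal sketch; names, image, and the 64Mi figure are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-limit-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed
    args: ["--file_content=/etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory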
SSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:38:25.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 22 13:38:25.740: INFO: Waiting up to 5m0s for pod "downward-api-6fb7b847-4a7d-425e-a150-0c9cdf69560e" in namespace "downward-api-7661" to be "success or failure"
Dec 22 13:38:25.763: INFO: Pod "downward-api-6fb7b847-4a7d-425e-a150-0c9cdf69560e": Phase="Pending", Reason="", readiness=false. Elapsed: 22.820634ms
Dec 22 13:38:27.771: INFO: Pod "downward-api-6fb7b847-4a7d-425e-a150-0c9cdf69560e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030424857s
Dec 22 13:38:29.812: INFO: Pod "downward-api-6fb7b847-4a7d-425e-a150-0c9cdf69560e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072096502s
Dec 22 13:38:31.825: INFO: Pod "downward-api-6fb7b847-4a7d-425e-a150-0c9cdf69560e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085127994s
Dec 22 13:38:33.838: INFO: Pod "downward-api-6fb7b847-4a7d-425e-a150-0c9cdf69560e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.097984397s
Dec 22 13:38:35.845: INFO: Pod "downward-api-6fb7b847-4a7d-425e-a150-0c9cdf69560e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.104617675s
STEP: Saw pod success
Dec 22 13:38:35.845: INFO: Pod "downward-api-6fb7b847-4a7d-425e-a150-0c9cdf69560e" satisfied condition "success or failure"
Dec 22 13:38:35.848: INFO: Trying to get logs from node iruya-node pod downward-api-6fb7b847-4a7d-425e-a150-0c9cdf69560e container dapi-container: 
STEP: delete the pod
Dec 22 13:38:35.989: INFO: Waiting for pod downward-api-6fb7b847-4a7d-425e-a150-0c9cdf69560e to disappear
Dec 22 13:38:35.996: INFO: Pod downward-api-6fb7b847-4a7d-425e-a150-0c9cdf69560e no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:38:35.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7661" for this suite.
Dec 22 13:38:42.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:38:42.215: INFO: namespace downward-api-7661 deletion completed in 6.2122042s

• [SLOW TEST:16.590 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
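[Editor's note] Same downward API, env-var flavor: resourceFieldRef can populate environment variables with the container's own limits and requests. A minimal sketch; names and resource figures are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
    - name: CPU_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory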
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:38:42.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Dec 22 13:38:42.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1809'
Dec 22 13:38:44.559: INFO: stderr: ""
Dec 22 13:38:44.560: INFO: stdout: "pod/pause created\n"
Dec 22 13:38:44.560: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Dec 22 13:38:44.560: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1809" to be "running and ready"
Dec 22 13:38:44.567: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 7.389237ms
Dec 22 13:38:46.592: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032580317s
Dec 22 13:38:48.600: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040592059s
Dec 22 13:38:50.621: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061007253s
Dec 22 13:38:52.628: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.068139798s
Dec 22 13:38:52.628: INFO: Pod "pause" satisfied condition "running and ready"
Dec 22 13:38:52.628: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Dec 22 13:38:52.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1809'
Dec 22 13:38:52.785: INFO: stderr: ""
Dec 22 13:38:52.785: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Dec 22 13:38:52.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1809'
Dec 22 13:38:52.911: INFO: stderr: ""
Dec 22 13:38:52.911: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Dec 22 13:38:52.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1809'
Dec 22 13:38:53.038: INFO: stderr: ""
Dec 22 13:38:53.038: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Dec 22 13:38:53.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1809'
Dec 22 13:38:53.164: INFO: stderr: ""
Dec 22 13:38:53.164: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Dec 22 13:38:53.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1809'
Dec 22 13:38:53.289: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 22 13:38:53.289: INFO: stdout: "pod \"pause\" force deleted\n"
Dec 22 13:38:53.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1809'
Dec 22 13:38:53.421: INFO: stderr: "No resources found.\n"
Dec 22 13:38:53.421: INFO: stdout: ""
Dec 22 13:38:53.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1809 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 22 13:38:53.514: INFO: stderr: ""
Dec 22 13:38:53.514: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:38:53.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1809" for this suite.
Dec 22 13:38:59.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:38:59.647: INFO: namespace kubectl-1809 deletion completed in 6.126863063s

• [SLOW TEST:17.431 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:38:59.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 22 13:39:10.179: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:39:10.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8233" for this suite.
Dec 22 13:39:16.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:39:16.823: INFO: namespace container-runtime-8233 deletion completed in 6.585629418s

• [SLOW TEST:17.176 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
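
The expectation logged above (Expected: &{DONE} to match Container's Termination Message: DONE) relies on the kubelet backfilling an empty termination message from the tail of the container log when the container fails. A sketch of the same setup, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "echo DONE; exit 1"]  # writes to its log, then fails
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# Once the container terminates, the message is taken from the log tail:
kubectl get pod termination-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
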
S
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:39:16.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Dec 22 13:39:16.909: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:39:33.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6839" for this suite.
Dec 22 13:39:39.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:39:39.277: INFO: namespace pods-6839 deletion completed in 6.12951962s

• [SLOW TEST:22.455 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
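
The submit/observe/delete cycle above is driven through the watch API. A rough kubectl equivalent (pod name illustrative; kubectl get --watch streams each status change rather than the raw ADDED/DELETED verbs the spec asserts on):

kubectl get pods --watch &                      # stream status changes in the background
kubectl run pod-demo --image=k8s.gcr.io/pause:3.1 --restart=Never
kubectl delete pod pod-demo --grace-period=30   # graceful delete; the kubelet observes the termination notice
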
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:39:39.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 22 13:39:59.484: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 22 13:39:59.500: INFO: Pod pod-with-prestop-http-hook still exists
Dec 22 13:40:01.500: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 22 13:40:01.518: INFO: Pod pod-with-prestop-http-hook still exists
Dec 22 13:40:03.500: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 22 13:40:03.522: INFO: Pod pod-with-prestop-http-hook still exists
Dec 22 13:40:05.500: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 22 13:40:05.507: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:40:05.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5337" for this suite.
Dec 22 13:40:29.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:40:29.709: INFO: namespace container-lifecycle-hook-5337 deletion completed in 24.161499887s

• [SLOW TEST:50.431 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
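
In this spec the hook's HTTP request is served by a separate handler pod created in BeforeEach; a self-contained variant that points the preStop hook back at the container itself looks roughly like this (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  containers:
  - name: main
    image: nginx
    lifecycle:
      preStop:
        httpGet:          # issued by the kubelet before the container receives SIGTERM
          path: /
          port: 80
EOF
kubectl delete pod prestop-demo   # deletion fires the preStop hook first, then termination proceeds
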
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:40:29.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:41:25.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5050" for this suite.
Dec 22 13:41:31.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:41:31.655: INFO: namespace container-runtime-5050 deletion completed in 6.195153059s

• [SLOW TEST:61.946 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
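
Judging by their suffixes, the three containers above cover the three RestartPolicy values (rpa: Always, rpof: OnFailure, rpn: Never), asserting on RestartCount, Phase, Ready and State for each. One way to watch those fields evolve for a single policy, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: restart-demo
spec:
  restartPolicy: OnFailure          # compare behaviour with Always and Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/false"]         # always exits non-zero, so OnFailure keeps restarting it
EOF
kubectl get pod restart-demo \
  -o jsonpath='{.status.phase} {.status.containerStatuses[0].restartCount}{"\n"}'
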
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:41:31.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:41:43.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-294" for this suite.
Dec 22 13:41:49.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:41:50.008: INFO: namespace kubelet-test-294 deletion completed in 6.153626205s

• [SLOW TEST:18.352 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
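
The field this spec checks is the terminated reason in the container status; it can be read directly once the container has exited (pod name illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: always-fails
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "exit 1"]
EOF
# A non-zero exit is recorded with reason "Error" (OOM kills show "OOMKilled"):
kubectl get pod always-fails \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}{"\n"}'
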
SSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:41:50.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2599.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2599.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2599.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2599.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2599.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2599.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 22 13:42:04.248: INFO: Unable to read wheezy_udp@PodARecord from pod dns-2599/dns-test-84255a6b-424a-4a56-9289-c62380f04f16: the server could not find the requested resource (get pods dns-test-84255a6b-424a-4a56-9289-c62380f04f16)
Dec 22 13:42:04.252: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-2599/dns-test-84255a6b-424a-4a56-9289-c62380f04f16: the server could not find the requested resource (get pods dns-test-84255a6b-424a-4a56-9289-c62380f04f16)
Dec 22 13:42:04.255: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-2599.svc.cluster.local from pod dns-2599/dns-test-84255a6b-424a-4a56-9289-c62380f04f16: the server could not find the requested resource (get pods dns-test-84255a6b-424a-4a56-9289-c62380f04f16)
Dec 22 13:42:04.258: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-2599/dns-test-84255a6b-424a-4a56-9289-c62380f04f16: the server could not find the requested resource (get pods dns-test-84255a6b-424a-4a56-9289-c62380f04f16)
Dec 22 13:42:04.261: INFO: Unable to read jessie_udp@PodARecord from pod dns-2599/dns-test-84255a6b-424a-4a56-9289-c62380f04f16: the server could not find the requested resource (get pods dns-test-84255a6b-424a-4a56-9289-c62380f04f16)
Dec 22 13:42:04.271: INFO: Unable to read jessie_tcp@PodARecord from pod dns-2599/dns-test-84255a6b-424a-4a56-9289-c62380f04f16: the server could not find the requested resource (get pods dns-test-84255a6b-424a-4a56-9289-c62380f04f16)
Dec 22 13:42:04.271: INFO: Lookups using dns-2599/dns-test-84255a6b-424a-4a56-9289-c62380f04f16 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-2599.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 22 13:42:09.399: INFO: DNS probes using dns-2599/dns-test-84255a6b-424a-4a56-9289-c62380f04f16 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:42:09.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2599" for this suite.
Dec 22 13:42:15.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:42:15.750: INFO: namespace dns-2599 deletion completed in 6.205701524s

• [SLOW TEST:25.742 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
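
The wheezy/jessie probe loops above exercise lookups that resolve through the pod's kubelet-managed hosts file; the file itself can be inspected directly (pod name illustrative):

kubectl run dns-demo --image=busybox --restart=Never -- sleep 3600
# The kubelet writes /etc/hosts for the pod, including an entry mapping
# the pod's own hostname to its IP:
kubectl exec dns-demo -- cat /etc/hosts
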
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:42:15.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-d77f00e1-3212-4d01-9cdc-3b8c8b0ce0a3
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-d77f00e1-3212-4d01-9cdc-3b8c8b0ce0a3
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:42:26.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7647" for this suite.
Dec 22 13:42:48.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:42:48.157: INFO: namespace configmap-7647 deletion completed in 22.103535231s

• [SLOW TEST:32.407 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
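
The wait at "waiting to observe update in volume" covers the kubelet's sync delay: configMap-backed volumes are refreshed periodically, not instantly. Reproducing the propagation, with illustrative names:

kubectl create configmap cm-demo --from-literal=key=before
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-watcher
spec:
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "while true; do cat /etc/config/key; echo; sleep 5; done"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/config
  volumes:
  - name: cfg
    configMap:
      name: cm-demo
EOF
kubectl patch configmap cm-demo -p '{"data":{"key":"after"}}'
kubectl logs -f cm-watcher   # output flips from "before" to "after" once the volume syncs
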
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:42:48.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-70239d2e-d8ba-4c21-9ac7-19bcec851630
STEP: Creating a pod to test consume configMaps
Dec 22 13:42:48.345: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-54375d02-44b6-4e08-b136-9ebc72b9a6a8" in namespace "projected-3603" to be "success or failure"
Dec 22 13:42:48.366: INFO: Pod "pod-projected-configmaps-54375d02-44b6-4e08-b136-9ebc72b9a6a8": Phase="Pending", Reason="", readiness=false. Elapsed: 21.084849ms
Dec 22 13:42:50.378: INFO: Pod "pod-projected-configmaps-54375d02-44b6-4e08-b136-9ebc72b9a6a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032276429s
Dec 22 13:42:52.383: INFO: Pod "pod-projected-configmaps-54375d02-44b6-4e08-b136-9ebc72b9a6a8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037183531s
Dec 22 13:42:54.396: INFO: Pod "pod-projected-configmaps-54375d02-44b6-4e08-b136-9ebc72b9a6a8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050611885s
Dec 22 13:42:56.406: INFO: Pod "pod-projected-configmaps-54375d02-44b6-4e08-b136-9ebc72b9a6a8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060823121s
Dec 22 13:42:58.414: INFO: Pod "pod-projected-configmaps-54375d02-44b6-4e08-b136-9ebc72b9a6a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068773509s
STEP: Saw pod success
Dec 22 13:42:58.414: INFO: Pod "pod-projected-configmaps-54375d02-44b6-4e08-b136-9ebc72b9a6a8" satisfied condition "success or failure"
Dec 22 13:42:58.418: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-54375d02-44b6-4e08-b136-9ebc72b9a6a8 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 22 13:42:58.537: INFO: Waiting for pod pod-projected-configmaps-54375d02-44b6-4e08-b136-9ebc72b9a6a8 to disappear
Dec 22 13:42:58.547: INFO: Pod pod-projected-configmaps-54375d02-44b6-4e08-b136-9ebc72b9a6a8 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:42:58.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3603" for this suite.
Dec 22 13:43:04.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:43:04.708: INFO: namespace projected-3603 deletion completed in 6.153850008s

• [SLOW TEST:16.551 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
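
A projected volume wraps one or more sources (configMap, secret, downwardAPI, serviceAccountToken) under a single mount point. The consume-and-exit pattern this spec uses ("success or failure") corresponds roughly to the following, with illustrative names:

kubectl create configmap projected-cm-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-pod
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["cat", "/etc/projected/data-1"]   # exits 0 only if the key was projected
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
EOF
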
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:43:04.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-8a72cbdb-46aa-409a-baff-c120206cc1fe
STEP: Creating a pod to test consume configMaps
Dec 22 13:43:04.855: INFO: Waiting up to 5m0s for pod "pod-configmaps-563bd5a7-f9ad-4b1d-be1d-f6fadd2244cb" in namespace "configmap-5354" to be "success or failure"
Dec 22 13:43:04.864: INFO: Pod "pod-configmaps-563bd5a7-f9ad-4b1d-be1d-f6fadd2244cb": Phase="Pending", Reason="", readiness=false. Elapsed: 9.086218ms
Dec 22 13:43:06.884: INFO: Pod "pod-configmaps-563bd5a7-f9ad-4b1d-be1d-f6fadd2244cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028873295s
Dec 22 13:43:08.917: INFO: Pod "pod-configmaps-563bd5a7-f9ad-4b1d-be1d-f6fadd2244cb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06258676s
Dec 22 13:43:11.174: INFO: Pod "pod-configmaps-563bd5a7-f9ad-4b1d-be1d-f6fadd2244cb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.319282368s
Dec 22 13:43:13.184: INFO: Pod "pod-configmaps-563bd5a7-f9ad-4b1d-be1d-f6fadd2244cb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.329272992s
Dec 22 13:43:15.192: INFO: Pod "pod-configmaps-563bd5a7-f9ad-4b1d-be1d-f6fadd2244cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.337187761s
STEP: Saw pod success
Dec 22 13:43:15.192: INFO: Pod "pod-configmaps-563bd5a7-f9ad-4b1d-be1d-f6fadd2244cb" satisfied condition "success or failure"
Dec 22 13:43:15.197: INFO: Trying to get logs from node iruya-node pod pod-configmaps-563bd5a7-f9ad-4b1d-be1d-f6fadd2244cb container configmap-volume-test: 
STEP: delete the pod
Dec 22 13:43:15.363: INFO: Waiting for pod pod-configmaps-563bd5a7-f9ad-4b1d-be1d-f6fadd2244cb to disappear
Dec 22 13:43:15.384: INFO: Pod pod-configmaps-563bd5a7-f9ad-4b1d-be1d-f6fadd2244cb no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:43:15.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5354" for this suite.
Dec 22 13:43:21.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:43:21.557: INFO: namespace configmap-5354 deletion completed in 6.167488943s

• [SLOW TEST:16.849 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:43:21.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-bd4d39b0-403b-4450-9aaa-03361346ce2c
STEP: Creating a pod to test consume secrets
Dec 22 13:43:21.686: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bc89106d-389f-4f46-a02d-43f336f26f47" in namespace "projected-1101" to be "success or failure"
Dec 22 13:43:21.697: INFO: Pod "pod-projected-secrets-bc89106d-389f-4f46-a02d-43f336f26f47": Phase="Pending", Reason="", readiness=false. Elapsed: 11.18678ms
Dec 22 13:43:23.706: INFO: Pod "pod-projected-secrets-bc89106d-389f-4f46-a02d-43f336f26f47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020049216s
Dec 22 13:43:25.717: INFO: Pod "pod-projected-secrets-bc89106d-389f-4f46-a02d-43f336f26f47": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030793041s
Dec 22 13:43:27.725: INFO: Pod "pod-projected-secrets-bc89106d-389f-4f46-a02d-43f336f26f47": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038946011s
Dec 22 13:43:29.736: INFO: Pod "pod-projected-secrets-bc89106d-389f-4f46-a02d-43f336f26f47": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050543353s
Dec 22 13:43:31.745: INFO: Pod "pod-projected-secrets-bc89106d-389f-4f46-a02d-43f336f26f47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059702245s
STEP: Saw pod success
Dec 22 13:43:31.746: INFO: Pod "pod-projected-secrets-bc89106d-389f-4f46-a02d-43f336f26f47" satisfied condition "success or failure"
Dec 22 13:43:31.751: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-bc89106d-389f-4f46-a02d-43f336f26f47 container projected-secret-volume-test: 
STEP: delete the pod
Dec 22 13:43:31.896: INFO: Waiting for pod pod-projected-secrets-bc89106d-389f-4f46-a02d-43f336f26f47 to disappear
Dec 22 13:43:31.929: INFO: Pod pod-projected-secrets-bc89106d-389f-4f46-a02d-43f336f26f47 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:43:31.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1101" for this suite.
Dec 22 13:43:37.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:43:38.068: INFO: namespace projected-1101 deletion completed in 6.132333511s

• [SLOW TEST:16.510 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
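
"with mappings and Item Mode set" refers to the items list of the secret projection, which renames keys to arbitrary paths and pins per-file permissions. A sketch with illustrative names:

kubectl create secret generic secret-demo --from-literal=username=admin
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-pod
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "ls -l /etc/creds/new-path && cat /etc/creds/new-path"]
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
  volumes:
  - name: creds
    projected:
      sources:
      - secret:
          name: secret-demo
          items:
          - key: username    # key in the Secret
            path: new-path   # mapped file name under the mount
            mode: 0400       # octal item mode, applied to this file only
EOF
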
SSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:43:38.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 22 13:44:06.339: INFO: Container started at 2019-12-22 13:43:47 +0000 UTC, pod became ready at 2019-12-22 13:44:05 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:44:06.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5325" for this suite.
Dec 22 13:44:28.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:44:28.509: INFO: namespace container-probe-5325 deletion completed in 22.159595648s

• [SLOW TEST:50.441 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
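
The gap logged above between container start (13:43:47) and readiness (13:44:05) is the readiness probe's initial delay at work: the probe is not run, and the pod cannot report Ready, before initialDelaySeconds has elapsed. The relevant knobs, in a sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo
spec:
  containers:
  - name: main
    image: nginx
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 30   # no Ready condition before this has elapsed
      periodSeconds: 5          # probe cadence afterwards
EOF
kubectl get pod readiness-demo \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
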
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:44:28.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5493
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 22 13:44:28.593: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 22 13:45:06.834: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5493 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 13:45:06.834: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 13:45:08.200: INFO: Found all expected endpoints: [netserver-0]
Dec 22 13:45:08.208: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5493 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 13:45:08.208: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 13:45:09.537: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:45:09.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5493" for this suite.
Dec 22 13:45:31.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:45:31.770: INFO: namespace pod-network-test-5493 deletion completed in 22.21645508s

• [SLOW TEST:63.261 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:45:31.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Dec 22 13:45:43.062: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:45:44.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9099" for this suite.
Dec 22 13:48:40.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:48:40.269: INFO: namespace replicaset-9099 deletion completed in 2m56.163824334s

• [SLOW TEST:188.498 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
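
Adoption and release both hinge on the ReplicaSet's label selector and the pod's ownerReferences. The release half, "When the matched label of one of its pods change", amounts to relabeling the pod out of the selector (label value illustrative):

# The controller clears the pod's ownerReference (release) and creates a
# replacement to restore the replica count:
kubectl label pod pod-adoption-release name=not-matching --overwrite
kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences}{"\n"}'   # now empty
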
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:48:40.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-594
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 22 13:48:40.338: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 22 13:49:18.717: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-594 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 13:49:18.717: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 13:49:19.211: INFO: Found all expected endpoints: [netserver-0]
Dec 22 13:49:19.219: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-594 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 13:49:19.219: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 13:49:19.588: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:49:19.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-594" for this suite.
Dec 22 13:49:43.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:49:43.779: INFO: namespace pod-network-test-594 deletion completed in 24.182393644s

• [SLOW TEST:63.510 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:49:43.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 22 13:50:04.072: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 22 13:50:04.082: INFO: Pod pod-with-poststart-http-hook still exists
Dec 22 13:50:06.082: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 22 13:50:06.090: INFO: Pod pod-with-poststart-http-hook still exists
Dec 22 13:50:08.082: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 22 13:50:08.091: INFO: Pod pod-with-poststart-http-hook still exists
Dec 22 13:50:10.082: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 22 13:50:10.093: INFO: Pod pod-with-poststart-http-hook still exists
Dec 22 13:50:12.082: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 22 13:50:12.089: INFO: Pod pod-with-poststart-http-hook still exists
Dec 22 13:50:14.083: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 22 13:50:14.093: INFO: Pod pod-with-poststart-http-hook still exists
Dec 22 13:50:16.082: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 22 13:50:16.091: INFO: Pod pod-with-poststart-http-hook still exists
Dec 22 13:50:18.082: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 22 13:50:18.089: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:50:18.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2453" for this suite.
Dec 22 13:50:40.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:50:40.269: INFO: namespace container-lifecycle-hook-2453 deletion completed in 22.175434234s

• [SLOW TEST:56.489 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:50:40.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-1052, will wait for the garbage collector to delete the pods
Dec 22 13:50:54.472: INFO: Deleting Job.batch foo took: 18.853132ms
Dec 22 13:50:54.772: INFO: Terminating Job.batch foo pods took: 300.355215ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:51:36.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1052" for this suite.
Dec 22 13:51:42.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:51:42.788: INFO: namespace job-1052 deletion completed in 6.20386786s

• [SLOW TEST:62.518 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
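
"will wait for the garbage collector to delete the pods" describes the default cascading delete: removing the Job leaves its pods to be reaped through their ownerReferences. In kubectl terms, using the job name from the log:

kubectl delete job foo            # deletes the Job object itself
kubectl get pods -l job-name=foo  # drains to empty once the garbage collector catches up
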
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:51:42.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-6e58addf-cef3-45d2-98f9-2565eb9e0446
STEP: Creating a pod to test consume configMaps
Dec 22 13:51:42.971: INFO: Waiting up to 5m0s for pod "pod-configmaps-b95844ec-f50f-4e82-a1a5-1df0826bdb99" in namespace "configmap-252" to be "success or failure"
Dec 22 13:51:42.979: INFO: Pod "pod-configmaps-b95844ec-f50f-4e82-a1a5-1df0826bdb99": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063391ms
Dec 22 13:51:44.986: INFO: Pod "pod-configmaps-b95844ec-f50f-4e82-a1a5-1df0826bdb99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014788509s
Dec 22 13:51:46.995: INFO: Pod "pod-configmaps-b95844ec-f50f-4e82-a1a5-1df0826bdb99": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024315511s
Dec 22 13:51:49.007: INFO: Pod "pod-configmaps-b95844ec-f50f-4e82-a1a5-1df0826bdb99": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035709738s
Dec 22 13:51:51.014: INFO: Pod "pod-configmaps-b95844ec-f50f-4e82-a1a5-1df0826bdb99": Phase="Running", Reason="", readiness=true. Elapsed: 8.042712324s
Dec 22 13:51:53.023: INFO: Pod "pod-configmaps-b95844ec-f50f-4e82-a1a5-1df0826bdb99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.051905292s
STEP: Saw pod success
Dec 22 13:51:53.023: INFO: Pod "pod-configmaps-b95844ec-f50f-4e82-a1a5-1df0826bdb99" satisfied condition "success or failure"
Dec 22 13:51:53.029: INFO: Trying to get logs from node iruya-node pod pod-configmaps-b95844ec-f50f-4e82-a1a5-1df0826bdb99 container configmap-volume-test: 
STEP: delete the pod
Dec 22 13:51:53.156: INFO: Waiting for pod pod-configmaps-b95844ec-f50f-4e82-a1a5-1df0826bdb99 to disappear
Dec 22 13:51:53.163: INFO: Pod pod-configmaps-b95844ec-f50f-4e82-a1a5-1df0826bdb99 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:51:53.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-252" for this suite.
Dec 22 13:51:59.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:51:59.315: INFO: namespace configmap-252 deletion completed in 6.146973349s

• [SLOW TEST:16.527 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:51:59.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 22 13:51:59.477: INFO: Waiting up to 5m0s for pod "downwardapi-volume-31291f2a-e09f-46b5-9172-b7a90c00b5d6" in namespace "downward-api-3001" to be "success or failure"
Dec 22 13:51:59.518: INFO: Pod "downwardapi-volume-31291f2a-e09f-46b5-9172-b7a90c00b5d6": Phase="Pending", Reason="", readiness=false. Elapsed: 41.153043ms
Dec 22 13:52:01.526: INFO: Pod "downwardapi-volume-31291f2a-e09f-46b5-9172-b7a90c00b5d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049415887s
Dec 22 13:52:03.538: INFO: Pod "downwardapi-volume-31291f2a-e09f-46b5-9172-b7a90c00b5d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060955456s
Dec 22 13:52:05.546: INFO: Pod "downwardapi-volume-31291f2a-e09f-46b5-9172-b7a90c00b5d6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069546476s
Dec 22 13:52:07.554: INFO: Pod "downwardapi-volume-31291f2a-e09f-46b5-9172-b7a90c00b5d6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076969158s
Dec 22 13:52:09.563: INFO: Pod "downwardapi-volume-31291f2a-e09f-46b5-9172-b7a90c00b5d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.086039947s
STEP: Saw pod success
Dec 22 13:52:09.563: INFO: Pod "downwardapi-volume-31291f2a-e09f-46b5-9172-b7a90c00b5d6" satisfied condition "success or failure"
Dec 22 13:52:09.568: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-31291f2a-e09f-46b5-9172-b7a90c00b5d6 container client-container: 
STEP: delete the pod
Dec 22 13:52:09.674: INFO: Waiting for pod downwardapi-volume-31291f2a-e09f-46b5-9172-b7a90c00b5d6 to disappear
Dec 22 13:52:09.683: INFO: Pod downwardapi-volume-31291f2a-e09f-46b5-9172-b7a90c00b5d6 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:52:09.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3001" for this suite.
Dec 22 13:52:15.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:52:16.068: INFO: namespace downward-api-3001 deletion completed in 6.375369752s

• [SLOW TEST:16.752 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
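
"set mode on item file" pins the permission bits of a single downwardAPI item, independent of the volume's defaultMode. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["/bin/sh", "-c", "ls -l /etc/podinfo/podname"]   # should show -r-------- (0400)
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400            # octal mode for this item only
EOF
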
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:52:16.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 22 13:52:16.236: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ea1a1702-503e-410e-ac76-4a4278a9674c" in namespace "projected-4625" to be "success or failure"
Dec 22 13:52:16.243: INFO: Pod "downwardapi-volume-ea1a1702-503e-410e-ac76-4a4278a9674c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.799134ms
Dec 22 13:52:18.254: INFO: Pod "downwardapi-volume-ea1a1702-503e-410e-ac76-4a4278a9674c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017921189s
Dec 22 13:52:20.271: INFO: Pod "downwardapi-volume-ea1a1702-503e-410e-ac76-4a4278a9674c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034502613s
Dec 22 13:52:22.279: INFO: Pod "downwardapi-volume-ea1a1702-503e-410e-ac76-4a4278a9674c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042108266s
Dec 22 13:52:24.376: INFO: Pod "downwardapi-volume-ea1a1702-503e-410e-ac76-4a4278a9674c": Phase="Running", Reason="", readiness=true. Elapsed: 8.139812048s
Dec 22 13:52:26.385: INFO: Pod "downwardapi-volume-ea1a1702-503e-410e-ac76-4a4278a9674c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.148914834s
STEP: Saw pod success
Dec 22 13:52:26.386: INFO: Pod "downwardapi-volume-ea1a1702-503e-410e-ac76-4a4278a9674c" satisfied condition "success or failure"
Dec 22 13:52:26.391: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ea1a1702-503e-410e-ac76-4a4278a9674c container client-container: 
STEP: delete the pod
Dec 22 13:52:26.446: INFO: Waiting for pod downwardapi-volume-ea1a1702-503e-410e-ac76-4a4278a9674c to disappear
Dec 22 13:52:26.452: INFO: Pod downwardapi-volume-ea1a1702-503e-410e-ac76-4a4278a9674c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:52:26.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4625" for this suite.
Dec 22 13:52:34.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:52:34.666: INFO: namespace projected-4625 deletion completed in 8.207068888s

• [SLOW TEST:18.598 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
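
Exposing a container's CPU request through a projected downwardAPI volume goes via resourceFieldRef, with divisor controlling the unit. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m   # file contains "250", i.e. the request in millicores
EOF
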
S
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:52:34.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Dec 22 13:52:34.831: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7101,SelfLink:/api/v1/namespaces/watch-7101/configmaps/e2e-watch-test-label-changed,UID:f41b16fd-7356-46f0-9090-7cdffc0d7073,ResourceVersion:17644441,Generation:0,CreationTimestamp:2019-12-22 13:52:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 22 13:52:34.832: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7101,SelfLink:/api/v1/namespaces/watch-7101/configmaps/e2e-watch-test-label-changed,UID:f41b16fd-7356-46f0-9090-7cdffc0d7073,ResourceVersion:17644442,Generation:0,CreationTimestamp:2019-12-22 13:52:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 22 13:52:34.832: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7101,SelfLink:/api/v1/namespaces/watch-7101/configmaps/e2e-watch-test-label-changed,UID:f41b16fd-7356-46f0-9090-7cdffc0d7073,ResourceVersion:17644443,Generation:0,CreationTimestamp:2019-12-22 13:52:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Dec 22 13:52:44.887: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7101,SelfLink:/api/v1/namespaces/watch-7101/configmaps/e2e-watch-test-label-changed,UID:f41b16fd-7356-46f0-9090-7cdffc0d7073,ResourceVersion:17644459,Generation:0,CreationTimestamp:2019-12-22 13:52:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 22 13:52:44.888: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7101,SelfLink:/api/v1/namespaces/watch-7101/configmaps/e2e-watch-test-label-changed,UID:f41b16fd-7356-46f0-9090-7cdffc0d7073,ResourceVersion:17644460,Generation:0,CreationTimestamp:2019-12-22 13:52:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Dec 22 13:52:44.888: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7101,SelfLink:/api/v1/namespaces/watch-7101/configmaps/e2e-watch-test-label-changed,UID:f41b16fd-7356-46f0-9090-7cdffc0d7073,ResourceVersion:17644461,Generation:0,CreationTimestamp:2019-12-22 13:52:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:52:44.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7101" for this suite.
Dec 22 13:52:50.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:52:51.089: INFO: namespace watch-7101 deletion completed in 6.194832238s

• [SLOW TEST:16.422 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
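The events above hinge on server-side label filtering: a watcher with a selector is told an object was DELETED the moment its labels stop matching, and ADDED again once they match anew, even though the object itself was only relabeled. A minimal client-go sketch of such a watch, using the kubeconfig path, namespace, and label from the log (signatures are context-free as in the client-go release contemporary with this v1.15 suite; newer releases add a ctx parameter):

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the same kubeconfig the suite uses.
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        // Watch only configmaps whose labels currently match. An object that
        // stops matching is reported to this watcher as DELETED; one that
        // starts matching again is reported as a fresh ADDED.
        w, err := cs.CoreV1().ConfigMaps("watch-7101").Watch(metav1.ListOptions{
            LabelSelector: "watch-this-configmap=label-changed-and-restored",
        })
        if err != nil {
            panic(err)
        }
        defer w.Stop()

        for ev := range w.ResultChan() {
            fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
        }
    }
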
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:52:51.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-mthr
STEP: Creating a pod to test atomic-volume-subpath
Dec 22 13:52:51.456: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-mthr" in namespace "subpath-351" to be "success or failure"
Dec 22 13:52:51.471: INFO: Pod "pod-subpath-test-downwardapi-mthr": Phase="Pending", Reason="", readiness=false. Elapsed: 14.879179ms
Dec 22 13:52:53.483: INFO: Pod "pod-subpath-test-downwardapi-mthr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026426381s
Dec 22 13:52:55.489: INFO: Pod "pod-subpath-test-downwardapi-mthr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033117701s
Dec 22 13:52:57.498: INFO: Pod "pod-subpath-test-downwardapi-mthr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04159501s
Dec 22 13:52:59.505: INFO: Pod "pod-subpath-test-downwardapi-mthr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049238887s
Dec 22 13:53:01.512: INFO: Pod "pod-subpath-test-downwardapi-mthr": Phase="Running", Reason="", readiness=true. Elapsed: 10.056155374s
Dec 22 13:53:03.531: INFO: Pod "pod-subpath-test-downwardapi-mthr": Phase="Running", Reason="", readiness=true. Elapsed: 12.075139154s
Dec 22 13:53:05.538: INFO: Pod "pod-subpath-test-downwardapi-mthr": Phase="Running", Reason="", readiness=true. Elapsed: 14.082003409s
Dec 22 13:53:07.548: INFO: Pod "pod-subpath-test-downwardapi-mthr": Phase="Running", Reason="", readiness=true. Elapsed: 16.09176852s
Dec 22 13:53:09.559: INFO: Pod "pod-subpath-test-downwardapi-mthr": Phase="Running", Reason="", readiness=true. Elapsed: 18.102723097s
Dec 22 13:53:11.569: INFO: Pod "pod-subpath-test-downwardapi-mthr": Phase="Running", Reason="", readiness=true. Elapsed: 20.113166218s
Dec 22 13:53:13.581: INFO: Pod "pod-subpath-test-downwardapi-mthr": Phase="Running", Reason="", readiness=true. Elapsed: 22.124806669s
Dec 22 13:53:15.594: INFO: Pod "pod-subpath-test-downwardapi-mthr": Phase="Running", Reason="", readiness=true. Elapsed: 24.138091474s
Dec 22 13:53:17.611: INFO: Pod "pod-subpath-test-downwardapi-mthr": Phase="Running", Reason="", readiness=true. Elapsed: 26.154796907s
Dec 22 13:53:19.621: INFO: Pod "pod-subpath-test-downwardapi-mthr": Phase="Running", Reason="", readiness=true. Elapsed: 28.165122978s
Dec 22 13:53:21.632: INFO: Pod "pod-subpath-test-downwardapi-mthr": Phase="Running", Reason="", readiness=true. Elapsed: 30.175359462s
Dec 22 13:53:23.643: INFO: Pod "pod-subpath-test-downwardapi-mthr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.1868629s
STEP: Saw pod success
Dec 22 13:53:23.643: INFO: Pod "pod-subpath-test-downwardapi-mthr" satisfied condition "success or failure"
Dec 22 13:53:23.648: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-mthr container test-container-subpath-downwardapi-mthr: 
STEP: delete the pod
Dec 22 13:53:23.729: INFO: Waiting for pod pod-subpath-test-downwardapi-mthr to disappear
Dec 22 13:53:23.737: INFO: Pod pod-subpath-test-downwardapi-mthr no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-mthr
Dec 22 13:53:23.737: INFO: Deleting pod "pod-subpath-test-downwardapi-mthr" in namespace "subpath-351"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:53:23.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-351" for this suite.
Dec 22 13:53:29.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:53:29.934: INFO: namespace subpath-351 deletion completed in 6.184418863s

• [SLOW TEST:38.845 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
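For context, the pod this spec creates mounts a downward-API volume through subPath; downward-API volumes are written by the kubelet's atomic writer, which publishes files behind symlinks, and that is exactly the case the subPath code path must handle. A sketch of such a pod using the k8s.io/api/core/v1 types — the image, command, and mount paths here are illustrative assumptions, not the test's actual values:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // subpathPod mounts a single entry of a downward-API volume via SubPath.
    func subpathPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-downwardapi-mthr"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "downward",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "podname",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "test-container-subpath-downwardapi-mthr",
                    Image:   "busybox", // assumption; the suite uses its own test images
                    Command: []string{"sh", "-c", "cat /mnt/podname"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "downward",
                        MountPath: "/mnt/podname",
                        SubPath:   "podname", // mount one file of the volume, not the whole directory
                    }},
                }},
            },
        }
    }

    func main() { fmt.Println(subpathPod().Name) }
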
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:53:29.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-82ca63fd-139d-4905-8a84-9d49287faf12
STEP: Creating a pod to test consume secrets
Dec 22 13:53:30.049: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-be9452f4-5e8b-48cc-8292-50a4f1ca1ec4" in namespace "projected-5412" to be "success or failure"
Dec 22 13:53:30.075: INFO: Pod "pod-projected-secrets-be9452f4-5e8b-48cc-8292-50a4f1ca1ec4": Phase="Pending", Reason="", readiness=false. Elapsed: 25.796211ms
Dec 22 13:53:32.079: INFO: Pod "pod-projected-secrets-be9452f4-5e8b-48cc-8292-50a4f1ca1ec4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030306111s
Dec 22 13:53:34.095: INFO: Pod "pod-projected-secrets-be9452f4-5e8b-48cc-8292-50a4f1ca1ec4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045701946s
Dec 22 13:53:36.103: INFO: Pod "pod-projected-secrets-be9452f4-5e8b-48cc-8292-50a4f1ca1ec4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054366502s
Dec 22 13:53:38.111: INFO: Pod "pod-projected-secrets-be9452f4-5e8b-48cc-8292-50a4f1ca1ec4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061908667s
Dec 22 13:53:40.119: INFO: Pod "pod-projected-secrets-be9452f4-5e8b-48cc-8292-50a4f1ca1ec4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069541757s
STEP: Saw pod success
Dec 22 13:53:40.119: INFO: Pod "pod-projected-secrets-be9452f4-5e8b-48cc-8292-50a4f1ca1ec4" satisfied condition "success or failure"
Dec 22 13:53:40.124: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-be9452f4-5e8b-48cc-8292-50a4f1ca1ec4 container projected-secret-volume-test: 
STEP: delete the pod
Dec 22 13:53:40.183: INFO: Waiting for pod pod-projected-secrets-be9452f4-5e8b-48cc-8292-50a4f1ca1ec4 to disappear
Dec 22 13:53:40.209: INFO: Pod pod-projected-secrets-be9452f4-5e8b-48cc-8292-50a4f1ca1ec4 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:53:40.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5412" for this suite.
Dec 22 13:53:46.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:53:46.442: INFO: namespace projected-5412 deletion completed in 6.225494387s

• [SLOW TEST:16.508 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
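What "projection" means here: the pod consumes the secret through a projected volume, whose Sources list can also carry configMaps, downward-API fields, and service-account tokens merged under one mount point. A minimal sketch of the volume, reusing the secret name from the log (the mode is an illustrative assumption):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // A projected volume surfacing one Secret; further VolumeProjection
        // entries would land merged into the same directory.
        mode := int32(0644)
        vol := corev1.Volume{
            Name: "projected-secret-volume",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    DefaultMode: &mode,
                    Sources: []corev1.VolumeProjection{{
                        Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{
                                Name: "projected-secret-test-82ca63fd-139d-4905-8a84-9d49287faf12",
                            },
                        },
                    }},
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }
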
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:53:46.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-5e03503f-cf41-42e4-9a6b-850afe9bfa74
STEP: Creating a pod to test consume configMaps
Dec 22 13:53:46.567: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-213cd57e-5494-4d93-aef6-519813ab4171" in namespace "projected-1681" to be "success or failure"
Dec 22 13:53:46.606: INFO: Pod "pod-projected-configmaps-213cd57e-5494-4d93-aef6-519813ab4171": Phase="Pending", Reason="", readiness=false. Elapsed: 38.878939ms
Dec 22 13:53:48.616: INFO: Pod "pod-projected-configmaps-213cd57e-5494-4d93-aef6-519813ab4171": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048643385s
Dec 22 13:53:50.629: INFO: Pod "pod-projected-configmaps-213cd57e-5494-4d93-aef6-519813ab4171": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062046701s
Dec 22 13:53:52.641: INFO: Pod "pod-projected-configmaps-213cd57e-5494-4d93-aef6-519813ab4171": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074034247s
Dec 22 13:53:54.656: INFO: Pod "pod-projected-configmaps-213cd57e-5494-4d93-aef6-519813ab4171": Phase="Pending", Reason="", readiness=false. Elapsed: 8.088959346s
Dec 22 13:53:56.663: INFO: Pod "pod-projected-configmaps-213cd57e-5494-4d93-aef6-519813ab4171": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.095954138s
STEP: Saw pod success
Dec 22 13:53:56.663: INFO: Pod "pod-projected-configmaps-213cd57e-5494-4d93-aef6-519813ab4171" satisfied condition "success or failure"
Dec 22 13:53:56.666: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-213cd57e-5494-4d93-aef6-519813ab4171 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 22 13:53:56.808: INFO: Waiting for pod pod-projected-configmaps-213cd57e-5494-4d93-aef6-519813ab4171 to disappear
Dec 22 13:53:56.813: INFO: Pod pod-projected-configmaps-213cd57e-5494-4d93-aef6-519813ab4171 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:53:56.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1681" for this suite.
Dec 22 13:54:02.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:54:03.040: INFO: namespace projected-1681 deletion completed in 6.223842307s

• [SLOW TEST:16.598 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
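The non-root variant differs from the plain one only in that the consuming container runs under an explicit non-zero UID, so the kubelet must still make the projected configMap file readable to it. A sketch of the relevant PodSpec shape; the UID and image are illustrative assumptions, the configMap name is from the log:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        uid := int64(1000) // assumption; the real test's UID may differ
        spec := corev1.PodSpec{
            Volumes: []corev1.Volume{{
                Name: "projected-configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{
                                    Name: "projected-configmap-test-volume-5e03503f-cf41-42e4-9a6b-850afe9bfa74",
                                },
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:            "projected-configmap-volume-test",
                Image:           "busybox", // assumption
                SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "projected-configmap-volume",
                    MountPath: "/etc/projected-configmap-volume",
                }},
            }},
        }
        fmt.Println(spec.Containers[0].Name)
    }
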
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:54:03.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Dec 22 13:54:03.179: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9958,SelfLink:/api/v1/namespaces/watch-9958/configmaps/e2e-watch-test-configmap-a,UID:f844b57a-c3da-4e77-8c53-dda0db73bf6c,ResourceVersion:17644651,Generation:0,CreationTimestamp:2019-12-22 13:54:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 22 13:54:03.179: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9958,SelfLink:/api/v1/namespaces/watch-9958/configmaps/e2e-watch-test-configmap-a,UID:f844b57a-c3da-4e77-8c53-dda0db73bf6c,ResourceVersion:17644651,Generation:0,CreationTimestamp:2019-12-22 13:54:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Dec 22 13:54:13.206: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9958,SelfLink:/api/v1/namespaces/watch-9958/configmaps/e2e-watch-test-configmap-a,UID:f844b57a-c3da-4e77-8c53-dda0db73bf6c,ResourceVersion:17644665,Generation:0,CreationTimestamp:2019-12-22 13:54:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 22 13:54:13.206: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9958,SelfLink:/api/v1/namespaces/watch-9958/configmaps/e2e-watch-test-configmap-a,UID:f844b57a-c3da-4e77-8c53-dda0db73bf6c,ResourceVersion:17644665,Generation:0,CreationTimestamp:2019-12-22 13:54:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Dec 22 13:54:23.224: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9958,SelfLink:/api/v1/namespaces/watch-9958/configmaps/e2e-watch-test-configmap-a,UID:f844b57a-c3da-4e77-8c53-dda0db73bf6c,ResourceVersion:17644678,Generation:0,CreationTimestamp:2019-12-22 13:54:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 22 13:54:23.224: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9958,SelfLink:/api/v1/namespaces/watch-9958/configmaps/e2e-watch-test-configmap-a,UID:f844b57a-c3da-4e77-8c53-dda0db73bf6c,ResourceVersion:17644678,Generation:0,CreationTimestamp:2019-12-22 13:54:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Dec 22 13:54:33.240: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9958,SelfLink:/api/v1/namespaces/watch-9958/configmaps/e2e-watch-test-configmap-a,UID:f844b57a-c3da-4e77-8c53-dda0db73bf6c,ResourceVersion:17644692,Generation:0,CreationTimestamp:2019-12-22 13:54:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 22 13:54:33.240: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9958,SelfLink:/api/v1/namespaces/watch-9958/configmaps/e2e-watch-test-configmap-a,UID:f844b57a-c3da-4e77-8c53-dda0db73bf6c,ResourceVersion:17644692,Generation:0,CreationTimestamp:2019-12-22 13:54:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Dec 22 13:54:43.255: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9958,SelfLink:/api/v1/namespaces/watch-9958/configmaps/e2e-watch-test-configmap-b,UID:e52ee7a5-7db2-4fc4-8265-bd73da0e5828,ResourceVersion:17644706,Generation:0,CreationTimestamp:2019-12-22 13:54:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 22 13:54:43.256: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9958,SelfLink:/api/v1/namespaces/watch-9958/configmaps/e2e-watch-test-configmap-b,UID:e52ee7a5-7db2-4fc4-8265-bd73da0e5828,ResourceVersion:17644706,Generation:0,CreationTimestamp:2019-12-22 13:54:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Dec 22 13:54:53.279: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9958,SelfLink:/api/v1/namespaces/watch-9958/configmaps/e2e-watch-test-configmap-b,UID:e52ee7a5-7db2-4fc4-8265-bd73da0e5828,ResourceVersion:17644722,Generation:0,CreationTimestamp:2019-12-22 13:54:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 22 13:54:53.280: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9958,SelfLink:/api/v1/namespaces/watch-9958/configmaps/e2e-watch-test-configmap-b,UID:e52ee7a5-7db2-4fc4-8265-bd73da0e5828,ResourceVersion:17644722,Generation:0,CreationTimestamp:2019-12-22 13:54:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:55:03.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9958" for this suite.
Dec 22 13:55:11.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:55:11.520: INFO: namespace watch-9958 deletion completed in 8.23243334s

• [SLOW TEST:68.479 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
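The "label A or B" watcher above is a single watch whose selector is set-based rather than equality-based, which is why it receives every event that either per-label watcher receives. A fragment sketching that watch, reusing the client setup from the earlier watch example (hence no main here; the selector syntax is standard Kubernetes set-based selector syntax):

    package sketch

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/watch"
        "k8s.io/client-go/kubernetes"
    )

    // watchAorB opens the combined watcher: one set-based selector matching
    // configmaps labeled for either A or B.
    func watchAorB(cs kubernetes.Interface) (watch.Interface, error) {
        return cs.CoreV1().ConfigMaps("watch-9958").Watch(metav1.ListOptions{
            LabelSelector: "watch-this-configmap in (multiple-watchers-A,multiple-watchers-B)",
        })
    }
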
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:55:11.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 22 13:55:11.657: INFO: Waiting up to 5m0s for pod "downwardapi-volume-25aa0450-cd74-4493-808b-b8f20a4c9738" in namespace "downward-api-7933" to be "success or failure"
Dec 22 13:55:11.675: INFO: Pod "downwardapi-volume-25aa0450-cd74-4493-808b-b8f20a4c9738": Phase="Pending", Reason="", readiness=false. Elapsed: 17.80867ms
Dec 22 13:55:13.689: INFO: Pod "downwardapi-volume-25aa0450-cd74-4493-808b-b8f20a4c9738": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03222255s
Dec 22 13:55:15.738: INFO: Pod "downwardapi-volume-25aa0450-cd74-4493-808b-b8f20a4c9738": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080750815s
Dec 22 13:55:18.019: INFO: Pod "downwardapi-volume-25aa0450-cd74-4493-808b-b8f20a4c9738": Phase="Pending", Reason="", readiness=false. Elapsed: 6.361887506s
Dec 22 13:55:20.028: INFO: Pod "downwardapi-volume-25aa0450-cd74-4493-808b-b8f20a4c9738": Phase="Pending", Reason="", readiness=false. Elapsed: 8.371077226s
Dec 22 13:55:22.036: INFO: Pod "downwardapi-volume-25aa0450-cd74-4493-808b-b8f20a4c9738": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.378695839s
STEP: Saw pod success
Dec 22 13:55:22.036: INFO: Pod "downwardapi-volume-25aa0450-cd74-4493-808b-b8f20a4c9738" satisfied condition "success or failure"
Dec 22 13:55:22.040: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-25aa0450-cd74-4493-808b-b8f20a4c9738 container client-container: 
STEP: delete the pod
Dec 22 13:55:22.368: INFO: Waiting for pod downwardapi-volume-25aa0450-cd74-4493-808b-b8f20a4c9738 to disappear
Dec 22 13:55:22.375: INFO: Pod downwardapi-volume-25aa0450-cd74-4493-808b-b8f20a4c9738 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:55:22.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7933" for this suite.
Dec 22 13:55:28.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:55:28.562: INFO: namespace downward-api-7933 deletion completed in 6.176559326s

• [SLOW TEST:17.042 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
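The downward-API volume plugin exposes resource values, not just metadata, and Divisor controls the unit the file is written in. A sketch of the volume item this kind of spec relies on — the Path is an illustrative assumption, the container name is from the log:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        // Exposes the container's cpu limit as a file; a divisor of 1m means
        // the file holds the limit expressed in millicores.
        file := corev1.DownwardAPIVolumeFile{
            Path: "cpu_limit",
            ResourceFieldRef: &corev1.ResourceFieldSelector{
                ContainerName: "client-container",
                Resource:      "limits.cpu",
                Divisor:       resource.MustParse("1m"),
            },
        }
        fmt.Printf("%+v\n", file)
    }
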
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:55:28.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 22 13:55:37.235: INFO: Successfully updated pod "pod-update-activedeadlineseconds-6a321119-1606-4926-8985-ef20c6f4c750"
Dec 22 13:55:37.235: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-6a321119-1606-4926-8985-ef20c6f4c750" in namespace "pods-2982" to be "terminated due to deadline exceeded"
Dec 22 13:55:37.249: INFO: Pod "pod-update-activedeadlineseconds-6a321119-1606-4926-8985-ef20c6f4c750": Phase="Running", Reason="", readiness=true. Elapsed: 13.298354ms
Dec 22 13:55:39.255: INFO: Pod "pod-update-activedeadlineseconds-6a321119-1606-4926-8985-ef20c6f4c750": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.019351036s
Dec 22 13:55:39.255: INFO: Pod "pod-update-activedeadlineseconds-6a321119-1606-4926-8985-ef20c6f4c750" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:55:39.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2982" for this suite.
Dec 22 13:55:45.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:55:45.385: INFO: namespace pods-2982 deletion completed in 6.123137367s

• [SLOW TEST:16.822 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
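activeDeadlineSeconds is one of the few pod spec fields that is mutable on a running pod; once the (new, shorter) deadline elapses, the kubelet kills the pod and it goes Failed with reason DeadlineExceeded, which is exactly the transition logged above. One way to make the same mutation from client-go (a strategic-merge patch; the test's own code may use an Update instead, and the value 5 is an illustrative assumption):

    package sketch

    import (
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    // shortenDeadline lowers spec.activeDeadlineSeconds on a running pod.
    func shortenDeadline(cs kubernetes.Interface) error {
        patch := []byte(`{"spec":{"activeDeadlineSeconds":5}}`)
        _, err := cs.CoreV1().Pods("pods-2982").Patch(
            "pod-update-activedeadlineseconds-6a321119-1606-4926-8985-ef20c6f4c750",
            types.StrategicMergePatchType, patch)
        return err
    }
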
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:55:45.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 22 13:55:45.539: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4886ff8e-5cba-4e01-acaa-e10e87204186" in namespace "downward-api-6902" to be "success or failure"
Dec 22 13:55:45.550: INFO: Pod "downwardapi-volume-4886ff8e-5cba-4e01-acaa-e10e87204186": Phase="Pending", Reason="", readiness=false. Elapsed: 11.2124ms
Dec 22 13:55:47.558: INFO: Pod "downwardapi-volume-4886ff8e-5cba-4e01-acaa-e10e87204186": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019186353s
Dec 22 13:55:49.579: INFO: Pod "downwardapi-volume-4886ff8e-5cba-4e01-acaa-e10e87204186": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040063593s
Dec 22 13:55:51.587: INFO: Pod "downwardapi-volume-4886ff8e-5cba-4e01-acaa-e10e87204186": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048026224s
Dec 22 13:55:53.621: INFO: Pod "downwardapi-volume-4886ff8e-5cba-4e01-acaa-e10e87204186": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082215707s
Dec 22 13:55:55.633: INFO: Pod "downwardapi-volume-4886ff8e-5cba-4e01-acaa-e10e87204186": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.093529928s
STEP: Saw pod success
Dec 22 13:55:55.633: INFO: Pod "downwardapi-volume-4886ff8e-5cba-4e01-acaa-e10e87204186" satisfied condition "success or failure"
Dec 22 13:55:55.638: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-4886ff8e-5cba-4e01-acaa-e10e87204186 container client-container: 
STEP: delete the pod
Dec 22 13:55:55.895: INFO: Waiting for pod downwardapi-volume-4886ff8e-5cba-4e01-acaa-e10e87204186 to disappear
Dec 22 13:55:55.917: INFO: Pod downwardapi-volume-4886ff8e-5cba-4e01-acaa-e10e87204186 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:55:55.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6902" for this suite.
Dec 22 13:56:02.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:56:02.176: INFO: namespace downward-api-6902 deletion completed in 6.210337723s

• [SLOW TEST:16.791 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:56:02.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 22 13:56:02.380: INFO: Waiting up to 5m0s for pod "pod-24cf8068-3f8a-4fdc-ae20-0762952a0b43" in namespace "emptydir-5695" to be "success or failure"
Dec 22 13:56:02.388: INFO: Pod "pod-24cf8068-3f8a-4fdc-ae20-0762952a0b43": Phase="Pending", Reason="", readiness=false. Elapsed: 7.104826ms
Dec 22 13:56:04.397: INFO: Pod "pod-24cf8068-3f8a-4fdc-ae20-0762952a0b43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016544001s
Dec 22 13:56:06.404: INFO: Pod "pod-24cf8068-3f8a-4fdc-ae20-0762952a0b43": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02306095s
Dec 22 13:56:08.415: INFO: Pod "pod-24cf8068-3f8a-4fdc-ae20-0762952a0b43": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034793373s
Dec 22 13:56:10.422: INFO: Pod "pod-24cf8068-3f8a-4fdc-ae20-0762952a0b43": Phase="Pending", Reason="", readiness=false. Elapsed: 8.041322289s
Dec 22 13:56:12.443: INFO: Pod "pod-24cf8068-3f8a-4fdc-ae20-0762952a0b43": Phase="Pending", Reason="", readiness=false. Elapsed: 10.06212534s
Dec 22 13:56:14.452: INFO: Pod "pod-24cf8068-3f8a-4fdc-ae20-0762952a0b43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.071461798s
STEP: Saw pod success
Dec 22 13:56:14.452: INFO: Pod "pod-24cf8068-3f8a-4fdc-ae20-0762952a0b43" satisfied condition "success or failure"
Dec 22 13:56:14.459: INFO: Trying to get logs from node iruya-node pod pod-24cf8068-3f8a-4fdc-ae20-0762952a0b43 container test-container: 
STEP: delete the pod
Dec 22 13:56:14.625: INFO: Waiting for pod pod-24cf8068-3f8a-4fdc-ae20-0762952a0b43 to disappear
Dec 22 13:56:14.639: INFO: Pod pod-24cf8068-3f8a-4fdc-ae20-0762952a0b43 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:56:14.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5695" for this suite.
Dec 22 13:56:20.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:56:20.833: INFO: namespace emptydir-5695 deletion completed in 6.186637234s

• [SLOW TEST:18.657 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
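The (root,0777,tmpfs) triple in the spec name encodes the variant: run as root, create the test file with mode 0777, back the emptyDir with memory. The tmpfs part is a one-field choice on the volume source; a minimal sketch:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Medium "Memory" backs the emptyDir with tmpfs; the pod then writes
        // a file with the requested mode and asserts both the mode and the
        // mount's filesystem type from inside the container.
        vol := corev1.Volume{
            Name: "test-volume",
            VolumeSource: corev1.VolumeSource{
                EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
            },
        }
        fmt.Printf("%+v\n", vol)
    }
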
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:56:20.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-5a5febef-cf91-4e7f-be63-b26eb3f1410d
STEP: Creating a pod to test consume secrets
Dec 22 13:56:20.956: INFO: Waiting up to 5m0s for pod "pod-secrets-6720a339-bd8d-474d-ab7d-ff89f997c160" in namespace "secrets-6088" to be "success or failure"
Dec 22 13:56:20.970: INFO: Pod "pod-secrets-6720a339-bd8d-474d-ab7d-ff89f997c160": Phase="Pending", Reason="", readiness=false. Elapsed: 14.498162ms
Dec 22 13:56:23.013: INFO: Pod "pod-secrets-6720a339-bd8d-474d-ab7d-ff89f997c160": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057572342s
Dec 22 13:56:25.019: INFO: Pod "pod-secrets-6720a339-bd8d-474d-ab7d-ff89f997c160": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062964661s
Dec 22 13:56:27.050: INFO: Pod "pod-secrets-6720a339-bd8d-474d-ab7d-ff89f997c160": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093904461s
Dec 22 13:56:29.057: INFO: Pod "pod-secrets-6720a339-bd8d-474d-ab7d-ff89f997c160": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.101318011s
STEP: Saw pod success
Dec 22 13:56:29.057: INFO: Pod "pod-secrets-6720a339-bd8d-474d-ab7d-ff89f997c160" satisfied condition "success or failure"
Dec 22 13:56:29.060: INFO: Trying to get logs from node iruya-node pod pod-secrets-6720a339-bd8d-474d-ab7d-ff89f997c160 container secret-env-test: 
STEP: delete the pod
Dec 22 13:56:29.170: INFO: Waiting for pod pod-secrets-6720a339-bd8d-474d-ab7d-ff89f997c160 to disappear
Dec 22 13:56:29.183: INFO: Pod pod-secrets-6720a339-bd8d-474d-ab7d-ff89f997c160 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:56:29.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6088" for this suite.
Dec 22 13:56:35.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:56:35.337: INFO: namespace secrets-6088 deletion completed in 6.148714868s

• [SLOW TEST:14.504 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
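Here the secret is consumed through the environment rather than a volume: the container echoes a variable populated from one secret key, and the test compares the output with the secret's data. A fragment of that wiring, reusing the secret name from the log (the variable name and key are illustrative assumptions):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // secretEnv injects one key of the test secret into the container env.
    var secretEnv = corev1.EnvVar{
        Name: "SECRET_DATA", // illustrative
        ValueFrom: &corev1.EnvVarSource{
            SecretKeyRef: &corev1.SecretKeySelector{
                LocalObjectReference: corev1.LocalObjectReference{
                    Name: "secret-test-5a5febef-cf91-4e7f-be63-b26eb3f1410d",
                },
                Key: "data-1", // illustrative
            },
        },
    }
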
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:56:35.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:56:44.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1149" for this suite.
Dec 22 13:57:24.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:57:24.719: INFO: namespace replication-controller-1149 deletion completed in 40.167905766s

• [SLOW TEST:49.382 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
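"The orphan pod is adopted" has a concrete observable form: once the RC with a matching selector exists, its controller sets itself as the pod's controlling ownerReference. A sketch that would verify this after the fact (pod and namespace names from the log; context-free Get as in the v1.15-era client):

    package sketch

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printOwner reports which controller now owns the formerly orphaned pod.
    func printOwner(cs kubernetes.Interface) error {
        pod, err := cs.CoreV1().Pods("replication-controller-1149").Get("pod-adoption", metav1.GetOptions{})
        if err != nil {
            return err
        }
        for _, ref := range pod.OwnerReferences {
            if ref.Controller != nil && *ref.Controller {
                fmt.Printf("adopted by %s %s\n", ref.Kind, ref.Name)
            }
        }
        return nil
    }
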
SS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:57:24.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 22 13:57:24.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:57:34.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2207" for this suite.
Dec 22 13:58:36.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:58:37.092: INFO: namespace pods-2207 deletion completed in 1m2.14128737s

• [SLOW TEST:72.373 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
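The endpoint under test here is the pod log subresource; per the spec name, the suite reads it over a websocket rather than a plain HTTP GET, but the request it dials is the same one client-go builds. A fragment showing that URL construction (reading it as an ordinary stream via req.Stream() would work too):

    package sketch

    import "k8s.io/client-go/kubernetes"

    // logsRequest builds the /api/v1/namespaces/{ns}/pods/{pod}/log URL;
    // the e2e test dials this same URL over a websocket.
    func logsRequest(cs kubernetes.Interface, ns, pod string) string {
        req := cs.CoreV1().RESTClient().Get().
            Namespace(ns).Resource("pods").Name(pod).SubResource("log")
        return req.URL().String()
    }
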
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:58:37.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 22 13:58:37.233: INFO: Waiting up to 5m0s for pod "downward-api-ad751914-672d-4ecc-9d77-ce7fb4d5b39a" in namespace "downward-api-3302" to be "success or failure"
Dec 22 13:58:37.420: INFO: Pod "downward-api-ad751914-672d-4ecc-9d77-ce7fb4d5b39a": Phase="Pending", Reason="", readiness=false. Elapsed: 186.933893ms
Dec 22 13:58:39.434: INFO: Pod "downward-api-ad751914-672d-4ecc-9d77-ce7fb4d5b39a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200988722s
Dec 22 13:58:41.443: INFO: Pod "downward-api-ad751914-672d-4ecc-9d77-ce7fb4d5b39a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.209724997s
Dec 22 13:58:43.456: INFO: Pod "downward-api-ad751914-672d-4ecc-9d77-ce7fb4d5b39a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.222461499s
Dec 22 13:58:45.468: INFO: Pod "downward-api-ad751914-672d-4ecc-9d77-ce7fb4d5b39a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.234943922s
Dec 22 13:58:47.476: INFO: Pod "downward-api-ad751914-672d-4ecc-9d77-ce7fb4d5b39a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.242794542s
STEP: Saw pod success
Dec 22 13:58:47.476: INFO: Pod "downward-api-ad751914-672d-4ecc-9d77-ce7fb4d5b39a" satisfied condition "success or failure"
Dec 22 13:58:47.479: INFO: Trying to get logs from node iruya-node pod downward-api-ad751914-672d-4ecc-9d77-ce7fb4d5b39a container dapi-container: 
STEP: delete the pod
Dec 22 13:58:47.651: INFO: Waiting for pod downward-api-ad751914-672d-4ecc-9d77-ce7fb4d5b39a to disappear
Dec 22 13:58:47.672: INFO: Pod downward-api-ad751914-672d-4ecc-9d77-ce7fb4d5b39a no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:58:47.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3302" for this suite.
Dec 22 13:58:53.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:58:54.025: INFO: namespace downward-api-3302 deletion completed in 6.301066237s

• [SLOW TEST:16.932 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
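Unlike the volume-based downward-API specs above, this one resolves pod metadata into the environment: the kubelet evaluates the fieldRef at container start, the container prints the variable, and the test compares it with the pod's actual UID. The wiring is a one-field EnvVarSource (the variable name is an illustrative assumption):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // podUIDEnv exposes metadata.uid to the container as an env var.
    var podUIDEnv = corev1.EnvVar{
        Name: "POD_UID", // illustrative
        ValueFrom: &corev1.EnvVarSource{
            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
        },
    }
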
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:58:54.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 22 13:58:54.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-471'
Dec 22 13:58:56.449: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 22 13:58:56.449: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Dec 22 13:58:58.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-471'
Dec 22 13:58:58.627: INFO: stderr: ""
Dec 22 13:58:58.627: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:58:58.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-471" for this suite.
Dec 22 13:59:04.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:59:04.783: INFO: namespace kubectl-471 deletion completed in 6.144003723s

• [SLOW TEST:10.757 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
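The stderr captured above is the deprecation warning for generator-based kubectl run; the object it generated is an ordinary apps/v1 Deployment. A sketch of creating the equivalent Deployment programmatically, with the name, image, and namespace from the log (the run label is an assumption about what the generator sets; Create is context-free as in the v1.15-era client):

    package sketch

    import (
        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // createDeployment builds what `kubectl run --generator=deployment/apps.v1`
    // produced here: one Deployment running the given image.
    func createDeployment(cs kubernetes.Interface) error {
        one := int32(1)
        labels := map[string]string{"run": "e2e-test-nginx-deployment"}
        d := &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-deployment"},
            Spec: appsv1.DeploymentSpec{
                Replicas: &one,
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{Containers: []corev1.Container{{
                        Name:  "e2e-test-nginx-deployment",
                        Image: "docker.io/library/nginx:1.14-alpine",
                    }}},
                },
            },
        }
        _, err := cs.AppsV1().Deployments("kubectl-471").Create(d)
        return err
    }
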
SSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:59:04.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-360
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Dec 22 13:59:04.977: INFO: Found 0 stateful pods, waiting for 3
Dec 22 13:59:15.200: INFO: Found 2 stateful pods, waiting for 3
Dec 22 13:59:24.993: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 22 13:59:24.993: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 22 13:59:24.993: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 22 13:59:35.706: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 22 13:59:35.706: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 22 13:59:35.706: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 22 13:59:35.906: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Dec 22 13:59:45.983: INFO: Updating stateful set ss2
Dec 22 13:59:45.995: INFO: Waiting for Pod statefulset-360/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 22 13:59:56.006: INFO: Waiting for Pod statefulset-360/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Dec 22 14:00:06.190: INFO: Found 2 stateful pods, waiting for 3
Dec 22 14:00:16.201: INFO: Found 2 stateful pods, waiting for 3
Dec 22 14:00:26.198: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 22 14:00:26.198: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 22 14:00:26.198: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Dec 22 14:00:26.235: INFO: Updating stateful set ss2
Dec 22 14:00:26.248: INFO: Waiting for Pod statefulset-360/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 22 14:00:36.283: INFO: Updating stateful set ss2
Dec 22 14:00:37.524: INFO: Waiting for StatefulSet statefulset-360/ss2 to complete update
Dec 22 14:00:37.524: INFO: Waiting for Pod statefulset-360/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 22 14:00:47.545: INFO: Waiting for StatefulSet statefulset-360/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 22 14:00:57.543: INFO: Deleting all statefulset in ns statefulset-360
Dec 22 14:00:57.548: INFO: Scaling statefulset ss2 to 0
Dec 22 14:01:27.609: INFO: Waiting for statefulset status.replicas updated to 0
Dec 22 14:01:27.615: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:01:27.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-360" for this suite.
Dec 22 14:01:35.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:01:35.801: INFO: namespace statefulset-360 deletion completed in 8.140401466s

• [SLOW TEST:151.018 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
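Both the canary and the phased roll above are driven by the StatefulSet's RollingUpdate partition: pods with an ordinal greater than or equal to the partition receive the new revision, everything below it stays on the old one. A minimal sketch of the object involved (the names ss2/test, the replica count, and the nginx images come from the log above; the label key and container name are assumptions):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  replicas: 3
  serviceName: test
  selector:
    matchLabels:
      app: ss2            # label key assumed
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2        # only ss2-2 (the canary) receives the new revision
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx       # container name assumed
        image: docker.io/library/nginx:1.15-alpine

Lowering the partition in steps (2 -> 1 -> 0) produces the phased roll; a partition larger than the replica count, as exercised in the "Not applying an update" step above, applies no update at all.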
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:01:35.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-b8d97bdc-bb38-4ea8-a23e-758bbea01e2b
STEP: Creating a pod to test consume secrets
Dec 22 14:01:35.928: INFO: Waiting up to 5m0s for pod "pod-secrets-9748fc22-268e-4f06-b9c8-78a38e142194" in namespace "secrets-363" to be "success or failure"
Dec 22 14:01:35.935: INFO: Pod "pod-secrets-9748fc22-268e-4f06-b9c8-78a38e142194": Phase="Pending", Reason="", readiness=false. Elapsed: 7.314888ms
Dec 22 14:01:37.947: INFO: Pod "pod-secrets-9748fc22-268e-4f06-b9c8-78a38e142194": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018811055s
Dec 22 14:01:39.958: INFO: Pod "pod-secrets-9748fc22-268e-4f06-b9c8-78a38e142194": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030633744s
Dec 22 14:01:42.173: INFO: Pod "pod-secrets-9748fc22-268e-4f06-b9c8-78a38e142194": Phase="Pending", Reason="", readiness=false. Elapsed: 6.245279186s
Dec 22 14:01:44.198: INFO: Pod "pod-secrets-9748fc22-268e-4f06-b9c8-78a38e142194": Phase="Pending", Reason="", readiness=false. Elapsed: 8.270600419s
Dec 22 14:01:46.207: INFO: Pod "pod-secrets-9748fc22-268e-4f06-b9c8-78a38e142194": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.27913986s
STEP: Saw pod success
Dec 22 14:01:46.207: INFO: Pod "pod-secrets-9748fc22-268e-4f06-b9c8-78a38e142194" satisfied condition "success or failure"
Dec 22 14:01:46.212: INFO: Trying to get logs from node iruya-node pod pod-secrets-9748fc22-268e-4f06-b9c8-78a38e142194 container secret-volume-test: <nil>
STEP: delete the pod
Dec 22 14:01:46.345: INFO: Waiting for pod pod-secrets-9748fc22-268e-4f06-b9c8-78a38e142194 to disappear
Dec 22 14:01:46.351: INFO: Pod pod-secrets-9748fc22-268e-4f06-b9c8-78a38e142194 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:01:46.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-363" for this suite.
Dec 22 14:01:52.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:01:52.608: INFO: namespace secrets-363 deletion completed in 6.250293975s

• [SLOW TEST:16.806 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
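"Consumable from pods in volume" means the secret's keys appear as files under the mount and the container can read the expected content back. A sketch of the pod shape (the secret name and container name come from the log; the image, args, and paths are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example    # the test generates its pod name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed image
    args: ["--file_content=/etc/secret-volume/data-1"]        # assumed flag and path
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-b8d97bdc-bb38-4ea8-a23e-758bbea01e2b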
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:01:52.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 22 14:01:52.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-419'
Dec 22 14:01:53.049: INFO: stderr: ""
Dec 22 14:01:53.049: INFO: stdout: "replicationcontroller/redis-master created\n"
Dec 22 14:01:53.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-419'
Dec 22 14:01:53.367: INFO: stderr: ""
Dec 22 14:01:53.367: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 22 14:01:54.423: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:01:54.423: INFO: Found 0 / 1
Dec 22 14:01:55.380: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:01:55.380: INFO: Found 0 / 1
Dec 22 14:01:56.381: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:01:56.381: INFO: Found 0 / 1
Dec 22 14:01:57.376: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:01:57.376: INFO: Found 0 / 1
Dec 22 14:01:58.395: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:01:58.395: INFO: Found 0 / 1
Dec 22 14:01:59.375: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:01:59.375: INFO: Found 0 / 1
Dec 22 14:02:00.397: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:02:00.397: INFO: Found 0 / 1
Dec 22 14:02:01.383: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:02:01.383: INFO: Found 0 / 1
Dec 22 14:02:02.379: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:02:02.379: INFO: Found 1 / 1
Dec 22 14:02:02.379: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 22 14:02:02.394: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:02:02.394: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 22 14:02:02.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-h9b5t --namespace=kubectl-419'
Dec 22 14:02:02.548: INFO: stderr: ""
Dec 22 14:02:02.548: INFO: stdout: "Name:           redis-master-h9b5t\nNamespace:      kubectl-419\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Sun, 22 Dec 2019 14:01:53 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    <none>\nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://0e3d880dd4ff5c03bcc2ec55be10c36d81d45e9e7db3f17bb9ab3ccca519abdc\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sun, 22 Dec 2019 14:02:00 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-mslqm (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-mslqm:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-mslqm\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  9s    default-scheduler    Successfully assigned kubectl-419/redis-master-h9b5t to iruya-node\n  Normal  Pulled     5s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    3s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    2s    kubelet, iruya-node  Started container redis-master\n"
Dec 22 14:02:02.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-419'
Dec 22 14:02:02.665: INFO: stderr: ""
Dec 22 14:02:02.665: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-419\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  9s    replication-controller  Created pod: redis-master-h9b5t\n"
Dec 22 14:02:02.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-419'
Dec 22 14:02:02.745: INFO: stderr: ""
Dec 22 14:02:02.745: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-419\nLabels:            app=redis\n                   role=master\nAnnotations:       <none>\nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.110.83.165\nPort:              <unset>  6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            <none>\n"
Dec 22 14:02:02.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Dec 22 14:02:02.862: INFO: stderr: ""
Dec 22 14:02:02.862: INFO: stdout: "Name:               iruya-node\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             <none>\nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Sun, 22 Dec 2019 14:01:55 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Sun, 22 Dec 2019 14:01:55 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Sun, 22 Dec 2019 14:01:55 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Sun, 22 Dec 2019 14:01:55 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         140d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         71d\n  kubectl-419                redis-master-h9b5t    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              <none>\n"
Dec 22 14:02:02.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-419'
Dec 22 14:02:02.953: INFO: stderr: ""
Dec 22 14:02:02.953: INFO: stdout: "Name:         kubectl-419\nLabels:       e2e-framework=kubectl\n              e2e-run=7a2bb7a1-b7f7-44e5-a2e3-2b4959765b28\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:02:02.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-419" for this suite.
Dec 22 14:02:27.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:02:27.226: INFO: namespace kubectl-419 deletion completed in 24.269171337s

• [SLOW TEST:34.617 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
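The manifests are piped to kubectl on stdin, so the log never shows them; a minimal ReplicationController equivalent, reconstructed from the describe output above (only the name, labels, selector, image, and port are taken from the log, the rest is assumed):

apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
spec:
  replicas: 1
  selector:
    app: redis
    role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        ports:
        - containerPort: 6379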
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:02:27.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 22 14:02:37.722: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:02:37.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6425" for this suite.
Dec 22 14:02:43.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:02:44.016: INFO: namespace container-runtime-6425 deletion completed in 6.153228765s

• [SLOW TEST:16.790 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
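FallbackToLogsOnError only falls back to container logs when the container fails and the termination-message file is empty; since this pod succeeds after writing to the file, the "OK" matched above must come from the file itself. A hedged sketch of such a pod (only the expected message "OK" appears in the log; the name, image, and command are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: termination-message-container
    image: busybox                      # assumed; any image with a shell works
    command: ["/bin/sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError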
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:02:44.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-1886
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 22 14:02:44.114: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 22 14:03:24.297: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-1886 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 14:03:24.297: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 14:03:24.776: INFO: Waiting for endpoints: map[]
Dec 22 14:03:24.785: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-1886 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 14:03:24.785: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 14:03:25.140: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:03:25.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1886" for this suite.
Dec 22 14:03:51.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:03:51.340: INFO: namespace pod-network-test-1886 deletion completed in 26.187394977s

• [SLOW TEST:67.323 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
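The ExecWithOptions lines above show the probe mechanism: a host-network test container curls one pod's HTTP control endpoint (port 8080), asking it to relay a UDP request to another pod (port 8081) and report which hostname answered. A rough sketch of the per-node server pod involved (the 8080/8081 split comes from the dial URLs in the log; the name, label, and image are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: netserver-0            # hypothetical name
  labels:
    selector: net-test         # hypothetical selector label
spec:
  containers:
  - name: webserver
    image: gcr.io/kubernetes-e2e-test-images/netexec:1.1   # assumed image
    ports:
    - containerPort: 8080      # HTTP control endpoint (/dial)
    - containerPort: 8081      # UDP echo endpoint being probed
      protocol: UDP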
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:03:51.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 22 14:03:51.429: INFO: Waiting up to 5m0s for pod "pod-5aa0eed3-c016-4800-855e-233ab67b84ac" in namespace "emptydir-3510" to be "success or failure"
Dec 22 14:03:51.435: INFO: Pod "pod-5aa0eed3-c016-4800-855e-233ab67b84ac": Phase="Pending", Reason="", readiness=false. Elapsed: 6.308226ms
Dec 22 14:03:53.442: INFO: Pod "pod-5aa0eed3-c016-4800-855e-233ab67b84ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013626775s
Dec 22 14:03:55.449: INFO: Pod "pod-5aa0eed3-c016-4800-855e-233ab67b84ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020051415s
Dec 22 14:03:57.462: INFO: Pod "pod-5aa0eed3-c016-4800-855e-233ab67b84ac": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033453009s
Dec 22 14:03:59.467: INFO: Pod "pod-5aa0eed3-c016-4800-855e-233ab67b84ac": Phase="Pending", Reason="", readiness=false. Elapsed: 8.038582645s
Dec 22 14:04:01.473: INFO: Pod "pod-5aa0eed3-c016-4800-855e-233ab67b84ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.044372368s
STEP: Saw pod success
Dec 22 14:04:01.473: INFO: Pod "pod-5aa0eed3-c016-4800-855e-233ab67b84ac" satisfied condition "success or failure"
Dec 22 14:04:01.476: INFO: Trying to get logs from node iruya-node pod pod-5aa0eed3-c016-4800-855e-233ab67b84ac container test-container: <nil>
STEP: delete the pod
Dec 22 14:04:01.574: INFO: Waiting for pod pod-5aa0eed3-c016-4800-855e-233ab67b84ac to disappear
Dec 22 14:04:01.587: INFO: Pod pod-5aa0eed3-c016-4800-855e-233ab67b84ac no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:04:01.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3510" for this suite.
Dec 22 14:04:07.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:04:07.892: INFO: namespace emptydir-3510 deletion completed in 6.298908212s

• [SLOW TEST:16.552 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
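The three parts of the test name map directly onto pod fields: a non-root UID in the container securityContext, a 0777 file created inside the volume, and an emptyDir with the default (node-disk) medium. A sketch, with everything except the container name assumed:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-example  # the test generates its pod name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0  # assumed image
    securityContext:
      runAsUser: 1001          # the "non-root" part; UID assumed
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}               # "default" medium, i.e. backed by node disk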
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:04:07.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Dec 22 14:04:07.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Dec 22 14:04:08.108: INFO: stderr: ""
Dec 22 14:04:08.108: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:04:08.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3972" for this suite.
Dec 22 14:04:14.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:04:14.288: INFO: namespace kubectl-3972 deletion completed in 6.176377466s

• [SLOW TEST:6.396 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:04:14.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 22 14:04:14.409: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cf1a173c-1819-4c9b-9204-03a5b89a13da" in namespace "projected-3170" to be "success or failure"
Dec 22 14:04:14.440: INFO: Pod "downwardapi-volume-cf1a173c-1819-4c9b-9204-03a5b89a13da": Phase="Pending", Reason="", readiness=false. Elapsed: 30.498695ms
Dec 22 14:04:16.453: INFO: Pod "downwardapi-volume-cf1a173c-1819-4c9b-9204-03a5b89a13da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044025102s
Dec 22 14:04:18.467: INFO: Pod "downwardapi-volume-cf1a173c-1819-4c9b-9204-03a5b89a13da": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057774529s
Dec 22 14:04:20.482: INFO: Pod "downwardapi-volume-cf1a173c-1819-4c9b-9204-03a5b89a13da": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072410081s
Dec 22 14:04:22.490: INFO: Pod "downwardapi-volume-cf1a173c-1819-4c9b-9204-03a5b89a13da": Phase="Pending", Reason="", readiness=false. Elapsed: 8.080752656s
Dec 22 14:04:24.503: INFO: Pod "downwardapi-volume-cf1a173c-1819-4c9b-9204-03a5b89a13da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.093414895s
STEP: Saw pod success
Dec 22 14:04:24.503: INFO: Pod "downwardapi-volume-cf1a173c-1819-4c9b-9204-03a5b89a13da" satisfied condition "success or failure"
Dec 22 14:04:24.507: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-cf1a173c-1819-4c9b-9204-03a5b89a13da container client-container: <nil>
STEP: delete the pod
Dec 22 14:04:24.573: INFO: Waiting for pod downwardapi-volume-cf1a173c-1819-4c9b-9204-03a5b89a13da to disappear
Dec 22 14:04:24.594: INFO: Pod downwardapi-volume-cf1a173c-1819-4c9b-9204-03a5b89a13da no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:04:24.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3170" for this suite.
Dec 22 14:04:30.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:04:30.804: INFO: namespace projected-3170 deletion completed in 6.200256256s

• [SLOW TEST:16.515 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
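Projecting a CPU limit into a file requires a resourceFieldRef naming both the container and the resource, and the container must actually carry the limit for the kubelet to resolve it. A sketch (only the container name comes from the log; the limit value, paths, and image are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # the test generates its pod name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0  # assumed image
    resources:
      limits:
        cpu: "1"               # assumed value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu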
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:04:30.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Dec 22 14:04:30.953: INFO: Waiting up to 5m0s for pod "pod-656cbd98-e7cb-4ccf-8846-4779670ad2c4" in namespace "emptydir-7131" to be "success or failure"
Dec 22 14:04:31.022: INFO: Pod "pod-656cbd98-e7cb-4ccf-8846-4779670ad2c4": Phase="Pending", Reason="", readiness=false. Elapsed: 68.559946ms
Dec 22 14:04:33.031: INFO: Pod "pod-656cbd98-e7cb-4ccf-8846-4779670ad2c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078421711s
Dec 22 14:04:35.045: INFO: Pod "pod-656cbd98-e7cb-4ccf-8846-4779670ad2c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091730621s
Dec 22 14:04:37.840: INFO: Pod "pod-656cbd98-e7cb-4ccf-8846-4779670ad2c4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.886679842s
Dec 22 14:04:39.848: INFO: Pod "pod-656cbd98-e7cb-4ccf-8846-4779670ad2c4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.894722892s
Dec 22 14:04:41.866: INFO: Pod "pod-656cbd98-e7cb-4ccf-8846-4779670ad2c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.913199189s
STEP: Saw pod success
Dec 22 14:04:41.866: INFO: Pod "pod-656cbd98-e7cb-4ccf-8846-4779670ad2c4" satisfied condition "success or failure"
Dec 22 14:04:41.880: INFO: Trying to get logs from node iruya-node pod pod-656cbd98-e7cb-4ccf-8846-4779670ad2c4 container test-container: <nil>
STEP: delete the pod
Dec 22 14:04:42.629: INFO: Waiting for pod pod-656cbd98-e7cb-4ccf-8846-4779670ad2c4 to disappear
Dec 22 14:04:42.749: INFO: Pod pod-656cbd98-e7cb-4ccf-8846-4779670ad2c4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:04:42.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7131" for this suite.
Dec 22 14:04:48.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:04:48.875: INFO: namespace emptydir-7131 deletion completed in 6.12035044s

• [SLOW TEST:18.071 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
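Here the assertion is on the mount point itself rather than on a file inside it: the volume must be tmpfs-backed and carry the expected permission bits. The only change from the disk-backed emptyDir sketch earlier is the medium (volume stanza only; names assumed as before):

  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory           # tmpfs-backed emptyDir; the test checks the mount's mode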
SSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:04:48.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 22 14:04:49.047: INFO: Waiting up to 5m0s for pod "downward-api-9532ce32-1773-4618-bff5-ab87fd122601" in namespace "downward-api-5640" to be "success or failure"
Dec 22 14:04:49.051: INFO: Pod "downward-api-9532ce32-1773-4618-bff5-ab87fd122601": Phase="Pending", Reason="", readiness=false. Elapsed: 3.958991ms
Dec 22 14:04:51.059: INFO: Pod "downward-api-9532ce32-1773-4618-bff5-ab87fd122601": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012609s
Dec 22 14:04:53.122: INFO: Pod "downward-api-9532ce32-1773-4618-bff5-ab87fd122601": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07552689s
Dec 22 14:04:55.136: INFO: Pod "downward-api-9532ce32-1773-4618-bff5-ab87fd122601": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089130476s
Dec 22 14:04:57.148: INFO: Pod "downward-api-9532ce32-1773-4618-bff5-ab87fd122601": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100682703s
Dec 22 14:04:59.156: INFO: Pod "downward-api-9532ce32-1773-4618-bff5-ab87fd122601": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.109269806s
STEP: Saw pod success
Dec 22 14:04:59.156: INFO: Pod "downward-api-9532ce32-1773-4618-bff5-ab87fd122601" satisfied condition "success or failure"
Dec 22 14:04:59.161: INFO: Trying to get logs from node iruya-node pod downward-api-9532ce32-1773-4618-bff5-ab87fd122601 container dapi-container: <nil>
STEP: delete the pod
Dec 22 14:04:59.432: INFO: Waiting for pod downward-api-9532ce32-1773-4618-bff5-ab87fd122601 to disappear
Dec 22 14:04:59.444: INFO: Pod downward-api-9532ce32-1773-4618-bff5-ab87fd122601 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:04:59.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5640" for this suite.
Dec 22 14:05:05.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:05:05.754: INFO: namespace downward-api-5640 deletion completed in 6.300621452s

• [SLOW TEST:16.879 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
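The host IP reaches the container through an environment variable with a fieldRef into pod status. A sketch (the container name comes from the log; the image, command, and variable name are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example   # the test generates its pod name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox             # assumed image
    command: ["sh", "-c", "env"]
    env:
    - name: HOST_IP            # assumed variable name
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP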
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:05:05.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 22 14:05:06.059: INFO: Create a RollingUpdate DaemonSet
Dec 22 14:05:06.074: INFO: Check that daemon pods launch on every node of the cluster
Dec 22 14:05:06.193: INFO: Number of nodes with available pods: 0
Dec 22 14:05:06.193: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:05:08.784: INFO: Number of nodes with available pods: 0
Dec 22 14:05:08.784: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:05:09.306: INFO: Number of nodes with available pods: 0
Dec 22 14:05:09.306: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:05:10.205: INFO: Number of nodes with available pods: 0
Dec 22 14:05:10.205: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:05:11.206: INFO: Number of nodes with available pods: 0
Dec 22 14:05:11.206: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:05:12.215: INFO: Number of nodes with available pods: 0
Dec 22 14:05:12.215: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:05:14.102: INFO: Number of nodes with available pods: 0
Dec 22 14:05:14.102: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:05:15.382: INFO: Number of nodes with available pods: 0
Dec 22 14:05:15.382: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:05:16.486: INFO: Number of nodes with available pods: 0
Dec 22 14:05:16.486: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:05:17.207: INFO: Number of nodes with available pods: 1
Dec 22 14:05:17.207: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 22 14:05:18.287: INFO: Number of nodes with available pods: 2
Dec 22 14:05:18.287: INFO: Number of running nodes: 2, number of available pods: 2
Dec 22 14:05:18.287: INFO: Update the DaemonSet to trigger a rollout
Dec 22 14:05:18.299: INFO: Updating DaemonSet daemon-set
Dec 22 14:05:28.382: INFO: Roll back the DaemonSet before rollout is complete
Dec 22 14:05:28.401: INFO: Updating DaemonSet daemon-set
Dec 22 14:05:28.401: INFO: Make sure DaemonSet rollback is complete
Dec 22 14:05:28.417: INFO: Wrong image for pod: daemon-set-mhllj. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 22 14:05:28.417: INFO: Pod daemon-set-mhllj is not available
Dec 22 14:05:29.575: INFO: Wrong image for pod: daemon-set-mhllj. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 22 14:05:29.576: INFO: Pod daemon-set-mhllj is not available
Dec 22 14:05:30.601: INFO: Wrong image for pod: daemon-set-mhllj. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 22 14:05:30.601: INFO: Pod daemon-set-mhllj is not available
Dec 22 14:05:31.451: INFO: Wrong image for pod: daemon-set-mhllj. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 22 14:05:31.452: INFO: Pod daemon-set-mhllj is not available
Dec 22 14:05:32.455: INFO: Wrong image for pod: daemon-set-mhllj. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 22 14:05:32.455: INFO: Pod daemon-set-mhllj is not available
Dec 22 14:05:33.449: INFO: Pod daemon-set-tsrpx is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4436, will wait for the garbage collector to delete the pods
Dec 22 14:05:33.605: INFO: Deleting DaemonSet.extensions daemon-set took: 57.161413ms
Dec 22 14:05:34.606: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.000493845s
Dec 22 14:05:46.613: INFO: Number of nodes with available pods: 0
Dec 22 14:05:46.613: INFO: Number of running nodes: 0, number of available pods: 0
Dec 22 14:05:46.617: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4436/daemonsets","resourceVersion":"17646410"},"items":null}

Dec 22 14:05:46.621: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4436/pods","resourceVersion":"17646410"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:05:46.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4436" for this suite.
Dec 22 14:05:52.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:05:52.783: INFO: namespace daemonsets-4436 deletion completed in 6.148024859s

• [SLOW TEST:47.029 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
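The sequence above updates the DaemonSet to an unpullable image (foo:non-existent), then rolls back before the rollout completes; only the one pod that was already replaced gets recreated, which is the "without unnecessary restarts" assertion. A sketch of the object involved (the names and images come from the log; the selector key and container name are assumptions):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set      # label key assumed
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app                     # container name assumed
        image: docker.io/library/nginx:1.14-alpine   # restored by the rollback

The test drives the rollback through the API; kubectl rollout undo daemonset/daemon-set would be the CLI equivalent.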
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:05:52.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 22 14:05:52.894: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a5599946-1959-4f78-957f-2c1824f49488" in namespace "projected-4534" to be "success or failure"
Dec 22 14:05:52.905: INFO: Pod "downwardapi-volume-a5599946-1959-4f78-957f-2c1824f49488": Phase="Pending", Reason="", readiness=false. Elapsed: 10.964475ms
Dec 22 14:05:54.912: INFO: Pod "downwardapi-volume-a5599946-1959-4f78-957f-2c1824f49488": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017969012s
Dec 22 14:05:56.920: INFO: Pod "downwardapi-volume-a5599946-1959-4f78-957f-2c1824f49488": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025780796s
Dec 22 14:05:58.933: INFO: Pod "downwardapi-volume-a5599946-1959-4f78-957f-2c1824f49488": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039066963s
Dec 22 14:06:00.946: INFO: Pod "downwardapi-volume-a5599946-1959-4f78-957f-2c1824f49488": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051785586s
Dec 22 14:06:02.954: INFO: Pod "downwardapi-volume-a5599946-1959-4f78-957f-2c1824f49488": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059423359s
STEP: Saw pod success
Dec 22 14:06:02.954: INFO: Pod "downwardapi-volume-a5599946-1959-4f78-957f-2c1824f49488" satisfied condition "success or failure"
Dec 22 14:06:02.957: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a5599946-1959-4f78-957f-2c1824f49488 container client-container: <nil>
STEP: delete the pod
Dec 22 14:06:03.012: INFO: Waiting for pod downwardapi-volume-a5599946-1959-4f78-957f-2c1824f49488 to disappear
Dec 22 14:06:03.020: INFO: Pod downwardapi-volume-a5599946-1959-4f78-957f-2c1824f49488 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:06:03.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4534" for this suite.
Dec 22 14:06:09.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:06:09.177: INFO: namespace projected-4534 deletion completed in 6.150684825s

• [SLOW TEST:16.394 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
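Per-item modes override the volume-wide default, so a single projected file can be locked down independently of its siblings. A sketch of the relevant volume stanza (path and mode values are assumptions; the test asserts the bits it set are what the container observes):

  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400       # per-item mode; value assumed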
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:06:09.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1222 14:06:27.312801       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 22 14:06:27.312: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:06:27.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5503" for this suite.
Dec 22 14:06:48.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:06:48.361: INFO: namespace gc-5503 deletion completed in 17.346004488s

• [SLOW TEST:39.183 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
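Half of the first RC's pods are given the second RC as an additional owner, so when simpletest-rc-to-be-deleted is removed with foreground propagation (an owner "waiting for dependents to be deleted"), those pods still have a live owner and must survive. A sketch of the resulting pod metadata (the RC names come from the log; the pod name and UIDs are hypothetical placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: simpletest-pod                    # hypothetical name
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted     # deleted with propagationPolicy: Foreground
    uid: 11111111-aaaa-bbbb-cccc-000000000001   # hypothetical UID
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay           # remaining valid owner
    uid: 22222222-aaaa-bbbb-cccc-000000000002   # hypothetical UID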
SSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:06:48.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 22 14:06:48.535: INFO: Waiting up to 5m0s for pod "downward-api-4b4895d6-ca7f-404a-8f47-703eeda351f4" in namespace "downward-api-3618" to be "success or failure"
Dec 22 14:06:48.586: INFO: Pod "downward-api-4b4895d6-ca7f-404a-8f47-703eeda351f4": Phase="Pending", Reason="", readiness=false. Elapsed: 51.196155ms
Dec 22 14:06:50.607: INFO: Pod "downward-api-4b4895d6-ca7f-404a-8f47-703eeda351f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071914072s
Dec 22 14:06:52.613: INFO: Pod "downward-api-4b4895d6-ca7f-404a-8f47-703eeda351f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078220903s
Dec 22 14:06:54.624: INFO: Pod "downward-api-4b4895d6-ca7f-404a-8f47-703eeda351f4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089148443s
Dec 22 14:06:56.632: INFO: Pod "downward-api-4b4895d6-ca7f-404a-8f47-703eeda351f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.096567742s
STEP: Saw pod success
Dec 22 14:06:56.632: INFO: Pod "downward-api-4b4895d6-ca7f-404a-8f47-703eeda351f4" satisfied condition "success or failure"
Dec 22 14:06:56.635: INFO: Trying to get logs from node iruya-node pod downward-api-4b4895d6-ca7f-404a-8f47-703eeda351f4 container dapi-container: <nil>
STEP: delete the pod
Dec 22 14:06:56.679: INFO: Waiting for pod downward-api-4b4895d6-ca7f-404a-8f47-703eeda351f4 to disappear
Dec 22 14:06:56.683: INFO: Pod downward-api-4b4895d6-ca7f-404a-8f47-703eeda351f4 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:06:56.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3618" for this suite.
Dec 22 14:07:02.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:07:02.882: INFO: namespace downward-api-3618 deletion completed in 6.193833358s

• [SLOW TEST:14.520 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
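Same mechanism as the host-IP case earlier, but with fieldPaths into pod metadata and status (container env stanza only; variable names assumed):

    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP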
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:07:02.882: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 22 14:07:03.077: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:07:04.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9986" for this suite.
Dec 22 14:07:10.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:07:10.427: INFO: namespace custom-resource-definition-9986 deletion completed in 6.173128524s

• [SLOW TEST:7.545 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
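The whole test is API churn: register a CustomResourceDefinition, confirm it is served, delete it again. A minimal CRD of the sort involved (group, kind, and plural are hypothetical; apiextensions.k8s.io/v1beta1 matches this v1.15 cluster):

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com   # must be <plural>.<group>
spec:
  group: stable.example.com
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab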
SSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:07:10.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-270733b7-34e2-4906-983d-a946ded57a29
STEP: Creating secret with name s-test-opt-upd-c6421096-2b68-4019-8762-b2c72971face
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-270733b7-34e2-4906-983d-a946ded57a29
STEP: Updating secret s-test-opt-upd-c6421096-2b68-4019-8762-b2c72971face
STEP: Creating secret with name s-test-opt-create-a4e98e2b-666c-4da9-aeac-b38157b31b62
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:07:26.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-137" for this suite.
Dec 22 14:07:48.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:07:49.027: INFO: namespace secrets-137 deletion completed in 22.11796384s

• [SLOW TEST:38.600 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
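A sketch of the pod shape this spec drives: one mounted secret is deleted, one is updated, and a third, marked optional, is created only after the pod is running, so the kubelet must reflect all three changes in the volume. Names, image, and command are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-watch            # hypothetical
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/secrets/*/data-1 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: del-secret
      mountPath: /etc/secrets/delete
    - name: upd-secret
      mountPath: /etc/secrets/update
    - name: create-secret
      mountPath: /etc/secrets/create
  volumes:
  - name: del-secret
    secret: {secretName: s-test-opt-del, optional: true}
  - name: upd-secret
    secret: {secretName: s-test-opt-upd, optional: true}
  - name: create-secret
    secret: {secretName: s-test-opt-create, optional: true}   # may not exist yet
------------------------------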
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:07:49.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Dec 22 14:07:49.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4245'
Dec 22 14:07:50.489: INFO: stderr: ""
Dec 22 14:07:50.490: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 22 14:07:51.500: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:07:51.500: INFO: Found 0 / 1
Dec 22 14:07:52.502: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:07:52.502: INFO: Found 0 / 1
Dec 22 14:07:53.503: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:07:53.503: INFO: Found 0 / 1
Dec 22 14:07:54.501: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:07:54.501: INFO: Found 0 / 1
Dec 22 14:07:55.499: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:07:55.499: INFO: Found 0 / 1
Dec 22 14:07:56.512: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:07:56.512: INFO: Found 0 / 1
Dec 22 14:07:57.498: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:07:57.498: INFO: Found 0 / 1
Dec 22 14:07:58.508: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:07:58.509: INFO: Found 1 / 1
Dec 22 14:07:58.509: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Dec 22 14:07:58.517: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:07:58.517: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 22 14:07:58.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-4fbjc --namespace=kubectl-4245 -p {"metadata":{"annotations":{"x":"y"}}}'
Dec 22 14:07:58.738: INFO: stderr: ""
Dec 22 14:07:58.738: INFO: stdout: "pod/redis-master-4fbjc patched\n"
STEP: checking annotations
Dec 22 14:07:58.756: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:07:58.756: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:07:58.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4245" for this suite.
Dec 22 14:08:20.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:08:21.035: INFO: namespace kubectl-4245 deletion completed in 22.258109981s

• [SLOW TEST:32.007 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
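The patch exercised above, spelled out with this run's pod and namespace; the jsonpath read-back is one way to confirm the annotation landed.

kubectl --kubeconfig=/root/.kube/config patch pod redis-master-4fbjc -n kubectl-4245 \
  -p '{"metadata":{"annotations":{"x":"y"}}}'
kubectl --kubeconfig=/root/.kube/config get pod redis-master-4fbjc -n kubectl-4245 \
  -o jsonpath='{.metadata.annotations.x}'    # prints: y
------------------------------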
SSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:08:21.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Dec 22 14:08:21.180: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5098" to be "success or failure"
Dec 22 14:08:21.224: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 43.817119ms
Dec 22 14:08:23.238: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057789869s
Dec 22 14:08:25.294: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113496569s
Dec 22 14:08:27.301: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120419828s
Dec 22 14:08:29.310: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.130167333s
Dec 22 14:08:31.323: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.142697436s
Dec 22 14:08:33.335: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.155403099s
STEP: Saw pod success
Dec 22 14:08:33.336: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Dec 22 14:08:33.342: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Dec 22 14:08:33.553: INFO: Waiting for pod pod-host-path-test to disappear
Dec 22 14:08:33.565: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:08:33.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-5098" for this suite.
Dec 22 14:08:39.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:08:39.882: INFO: namespace hostpath-5098 deletion completed in 6.304394394s

• [SLOW TEST:18.847 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
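A sketch of a pod-host-path-test-style pod, assuming busybox stat in place of the suite's mounttest image: mount a hostPath volume and print the mode of the mount point.

apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-mode           # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /test-volume"]   # prints the mode, e.g. 777
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp                     # no "type" set, so no host-side check
------------------------------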
S
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:08:39.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 22 14:08:39.970: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Dec 22 14:08:43.658: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:08:43.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8949" for this suite.
Dec 22 14:08:57.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:08:58.062: INFO: namespace replication-controller-8949 deletion completed in 14.160743765s

• [SLOW TEST:18.179 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
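A sketch of the quota-vs-RC interaction checked above, using this run's names; the image is illustrative. The RC asks for three pods against a two-pod quota, surfaces a ReplicaFailure condition, and the condition clears once the RC is scaled to fit.

kubectl create quota condition-test --hard=pods=2 -n replication-controller-8949
cat <<'EOF' | kubectl create -n replication-controller-8949 -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3                        # one more than the quota allows
  selector: {name: condition-test}
  template:
    metadata:
      labels: {name: condition-test}
    spec:
      containers:
      - name: nginx
        image: nginx
EOF
kubectl get rc condition-test -n replication-controller-8949 -o jsonpath='{.status.conditions}'
kubectl scale rc condition-test -n replication-controller-8949 --replicas=2
------------------------------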
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:08:58.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 22 14:09:08.770: INFO: Successfully updated pod "labelsupdate491def07-965f-4216-84d2-94a260b4d50f"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:09:10.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1646" for this suite.
Dec 22 14:09:32.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:09:33.049: INFO: namespace downward-api-1646 deletion completed in 22.184618014s

• [SLOW TEST:34.986 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
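A sketch of the downward-API pod being updated above: the labels file is backed by metadata.labels, and the kubelet rewrites it after kubectl label changes the pod. Names, image, and command are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo            # hypothetical
  labels:
    key1: value1
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels

kubectl label pod labelsupdate-demo key1=value2 --overwrite   # volume reflects the change
------------------------------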
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:09:33.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:09:33.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3079" for this suite.
Dec 22 14:09:55.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:09:55.484: INFO: namespace pods-3079 deletion completed in 22.183173792s

• [SLOW TEST:22.435 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
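QOS class is derived from the pod's requests and limits rather than set by the client; a minimal sketch (name and image illustrative). Omitting resources entirely yields BestEffort instead.

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo                     # hypothetical
spec:
  containers:
  - name: main
    image: nginx
    resources:
      requests: {cpu: 100m, memory: 100Mi}
      limits: {cpu: 100m, memory: 100Mi}    # requests == limits => Guaranteed
EOF
kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # Guaranteed
------------------------------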
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:09:55.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:10:05.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4483" for this suite.
Dec 22 14:11:07.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:11:07.863: INFO: namespace kubelet-test-4483 deletion completed in 1m2.190090342s

• [SLOW TEST:72.379 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
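A sketch of the hostAliases shape this spec verifies; the kubelet appends the entries to the container's /etc/hosts. Names are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases         # hypothetical
spec:
  restartPolicy: Never
  hostAliases:
  - ip: 127.0.0.1
    hostnames: ["foo.local", "bar.local"]
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "grep foo.local /etc/hosts"]
------------------------------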
SSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:11:07.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Dec 22 14:11:16.041: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-de58e825-44b3-46ae-b945-dd4684e4a9a0,GenerateName:,Namespace:events-7527,SelfLink:/api/v1/namespaces/events-7527/pods/send-events-de58e825-44b3-46ae-b945-dd4684e4a9a0,UID:1586841c-e4bc-4a94-8361-5d579245b606,ResourceVersion:17647308,Generation:0,CreationTimestamp:2019-12-22 14:11:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 5094973,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rxlr9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rxlr9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-rxlr9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00288c9d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00288c9f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:11:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:11:15 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:11:15 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:11:08 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-22 14:11:08 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-12-22 14:11:14 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://cb6951d3dca53997cac8e6afdace2243e0fa3918661881cff4afbc69d660f7f5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Dec 22 14:11:18.047: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Dec 22 14:11:20.054: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:11:20.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7527" for this suite.
Dec 22 14:12:16.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:12:16.234: INFO: namespace events-7527 deletion completed in 56.13998977s

• [SLOW TEST:68.370 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
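The scheduler and kubelet events the spec polls for can be listed directly with a standard field selector (pod name from this run); typical reasons are Scheduled from default-scheduler and Pulled/Created/Started from the kubelet.

kubectl --kubeconfig=/root/.kube/config get events -n events-7527 \
  --field-selector involvedObject.name=send-events-de58e825-44b3-46ae-b945-dd4684e4a9a0
------------------------------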
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:12:16.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Dec 22 14:12:16.342: INFO: Waiting up to 5m0s for pod "pod-37ec0909-4d5a-4dfa-9bdd-d85577cae661" in namespace "emptydir-7152" to be "success or failure"
Dec 22 14:12:16.472: INFO: Pod "pod-37ec0909-4d5a-4dfa-9bdd-d85577cae661": Phase="Pending", Reason="", readiness=false. Elapsed: 130.166729ms
Dec 22 14:12:18.485: INFO: Pod "pod-37ec0909-4d5a-4dfa-9bdd-d85577cae661": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142839063s
Dec 22 14:12:20.493: INFO: Pod "pod-37ec0909-4d5a-4dfa-9bdd-d85577cae661": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150415317s
Dec 22 14:12:22.503: INFO: Pod "pod-37ec0909-4d5a-4dfa-9bdd-d85577cae661": Phase="Pending", Reason="", readiness=false. Elapsed: 6.160871452s
Dec 22 14:12:24.516: INFO: Pod "pod-37ec0909-4d5a-4dfa-9bdd-d85577cae661": Phase="Running", Reason="", readiness=true. Elapsed: 8.174118102s
Dec 22 14:12:26.528: INFO: Pod "pod-37ec0909-4d5a-4dfa-9bdd-d85577cae661": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.185709244s
STEP: Saw pod success
Dec 22 14:12:26.528: INFO: Pod "pod-37ec0909-4d5a-4dfa-9bdd-d85577cae661" satisfied condition "success or failure"
Dec 22 14:12:26.534: INFO: Trying to get logs from node iruya-node pod pod-37ec0909-4d5a-4dfa-9bdd-d85577cae661 container test-container: 
STEP: delete the pod
Dec 22 14:12:26.647: INFO: Waiting for pod pod-37ec0909-4d5a-4dfa-9bdd-d85577cae661 to disappear
Dec 22 14:12:26.682: INFO: Pod pod-37ec0909-4d5a-4dfa-9bdd-d85577cae661 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:12:26.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7152" for this suite.
Dec 22 14:12:32.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:12:33.072: INFO: namespace emptydir-7152 deletion completed in 6.380946597s

• [SLOW TEST:16.837 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
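A sketch of the emptyDir pod above, assuming busybox stat in place of the suite's mounttest image: an emptyDir with no medium set lands on node disk and mounts mode 0777.

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-mode            # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /test-volume"]   # expect 777
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                     # default medium; medium: Memory would use tmpfs
------------------------------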
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:12:33.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-ce3d12ec-1af2-4ba8-9b40-9b289e92c2c7
STEP: Creating a pod to test consume secrets
Dec 22 14:12:33.153: INFO: Waiting up to 5m0s for pod "pod-secrets-d8432e80-7222-446c-966a-ac02c9d704f7" in namespace "secrets-1746" to be "success or failure"
Dec 22 14:12:33.180: INFO: Pod "pod-secrets-d8432e80-7222-446c-966a-ac02c9d704f7": Phase="Pending", Reason="", readiness=false. Elapsed: 26.598351ms
Dec 22 14:12:35.192: INFO: Pod "pod-secrets-d8432e80-7222-446c-966a-ac02c9d704f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038595745s
Dec 22 14:12:37.198: INFO: Pod "pod-secrets-d8432e80-7222-446c-966a-ac02c9d704f7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044914642s
Dec 22 14:12:39.206: INFO: Pod "pod-secrets-d8432e80-7222-446c-966a-ac02c9d704f7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052359559s
Dec 22 14:12:41.250: INFO: Pod "pod-secrets-d8432e80-7222-446c-966a-ac02c9d704f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.096871712s
STEP: Saw pod success
Dec 22 14:12:41.250: INFO: Pod "pod-secrets-d8432e80-7222-446c-966a-ac02c9d704f7" satisfied condition "success or failure"
Dec 22 14:12:41.261: INFO: Trying to get logs from node iruya-node pod pod-secrets-d8432e80-7222-446c-966a-ac02c9d704f7 container secret-volume-test: 
STEP: delete the pod
Dec 22 14:12:41.301: INFO: Waiting for pod pod-secrets-d8432e80-7222-446c-966a-ac02c9d704f7 to disappear
Dec 22 14:12:41.310: INFO: Pod pod-secrets-d8432e80-7222-446c-966a-ac02c9d704f7 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:12:41.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1746" for this suite.
Dec 22 14:12:47.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:12:47.515: INFO: namespace secrets-1746 deletion completed in 6.196160662s

• [SLOW TEST:14.443 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
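A sketch of the non-root/defaultMode/fsGroup combination above: the kubelet applies fsGroup ownership to the secret volume, so a group-readable defaultMode lets the non-root user read the file. Names and image are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-mode             # hypothetical
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # non-root
    fsGroup: 1001
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "stat -c '%a %g' /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: my-secret          # hypothetical; must exist first
      defaultMode: 0440              # group-readable for the fsGroup
------------------------------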
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:12:47.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-4ace9843-30fe-407e-aacb-bfa3bac8c0ae
STEP: Creating a pod to test consume configMaps
Dec 22 14:12:47.632: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3bf38a42-9258-4daf-9264-7f4a7299932a" in namespace "projected-6543" to be "success or failure"
Dec 22 14:12:47.665: INFO: Pod "pod-projected-configmaps-3bf38a42-9258-4daf-9264-7f4a7299932a": Phase="Pending", Reason="", readiness=false. Elapsed: 33.033361ms
Dec 22 14:12:49.676: INFO: Pod "pod-projected-configmaps-3bf38a42-9258-4daf-9264-7f4a7299932a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043581465s
Dec 22 14:12:51.685: INFO: Pod "pod-projected-configmaps-3bf38a42-9258-4daf-9264-7f4a7299932a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052903592s
Dec 22 14:12:53.696: INFO: Pod "pod-projected-configmaps-3bf38a42-9258-4daf-9264-7f4a7299932a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063602365s
Dec 22 14:12:55.702: INFO: Pod "pod-projected-configmaps-3bf38a42-9258-4daf-9264-7f4a7299932a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070270179s
Dec 22 14:12:57.712: INFO: Pod "pod-projected-configmaps-3bf38a42-9258-4daf-9264-7f4a7299932a": Phase="Running", Reason="", readiness=true. Elapsed: 10.079830333s
Dec 22 14:12:59.720: INFO: Pod "pod-projected-configmaps-3bf38a42-9258-4daf-9264-7f4a7299932a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.087538006s
STEP: Saw pod success
Dec 22 14:12:59.720: INFO: Pod "pod-projected-configmaps-3bf38a42-9258-4daf-9264-7f4a7299932a" satisfied condition "success or failure"
Dec 22 14:12:59.725: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-3bf38a42-9258-4daf-9264-7f4a7299932a container projected-configmap-volume-test: 
STEP: delete the pod
Dec 22 14:12:59.779: INFO: Waiting for pod pod-projected-configmaps-3bf38a42-9258-4daf-9264-7f4a7299932a to disappear
Dec 22 14:12:59.897: INFO: Pod pod-projected-configmaps-3bf38a42-9258-4daf-9264-7f4a7299932a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:12:59.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6543" for this suite.
Dec 22 14:13:05.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:13:06.102: INFO: namespace projected-6543 deletion completed in 6.19881748s

• [SLOW TEST:18.586 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
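A sketch of consuming one ConfigMap through two projected volumes in the same pod, assuming busybox in place of the suite's test image; names are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-two-mounts     # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-1/data-1 /etc/projected-2/data-1"]
    volumeMounts:
    - name: cm-volume-1
      mountPath: /etc/projected-1
    - name: cm-volume-2
      mountPath: /etc/projected-2
  volumes:
  - name: cm-volume-1
    projected:
      sources:
      - configMap: {name: my-config}   # hypothetical; same map mounted twice
  - name: cm-volume-2
    projected:
      sources:
      - configMap: {name: my-config}
------------------------------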
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:13:06.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Dec 22 14:13:06.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6807'
Dec 22 14:13:08.613: INFO: stderr: ""
Dec 22 14:13:08.613: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 22 14:13:08.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6807'
Dec 22 14:13:08.814: INFO: stderr: ""
Dec 22 14:13:08.814: INFO: stdout: "update-demo-nautilus-bm5vd update-demo-nautilus-l9769 "
Dec 22 14:13:08.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bm5vd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6807'
Dec 22 14:13:08.967: INFO: stderr: ""
Dec 22 14:13:08.967: INFO: stdout: ""
Dec 22 14:13:08.967: INFO: update-demo-nautilus-bm5vd is created but not running
Dec 22 14:13:13.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6807'
Dec 22 14:13:14.119: INFO: stderr: ""
Dec 22 14:13:14.119: INFO: stdout: "update-demo-nautilus-bm5vd update-demo-nautilus-l9769 "
Dec 22 14:13:14.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bm5vd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6807'
Dec 22 14:13:15.191: INFO: stderr: ""
Dec 22 14:13:15.191: INFO: stdout: ""
Dec 22 14:13:15.191: INFO: update-demo-nautilus-bm5vd is created but not running
Dec 22 14:13:20.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6807'
Dec 22 14:13:20.365: INFO: stderr: ""
Dec 22 14:13:20.366: INFO: stdout: "update-demo-nautilus-bm5vd update-demo-nautilus-l9769 "
Dec 22 14:13:20.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bm5vd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6807'
Dec 22 14:13:20.458: INFO: stderr: ""
Dec 22 14:13:20.458: INFO: stdout: "true"
Dec 22 14:13:20.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bm5vd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6807'
Dec 22 14:13:20.533: INFO: stderr: ""
Dec 22 14:13:20.533: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 22 14:13:20.533: INFO: validating pod update-demo-nautilus-bm5vd
Dec 22 14:13:20.548: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 22 14:13:20.548: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 22 14:13:20.548: INFO: update-demo-nautilus-bm5vd is verified up and running
Dec 22 14:13:20.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l9769 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6807'
Dec 22 14:13:20.620: INFO: stderr: ""
Dec 22 14:13:20.620: INFO: stdout: "true"
Dec 22 14:13:20.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l9769 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6807'
Dec 22 14:13:20.725: INFO: stderr: ""
Dec 22 14:13:20.725: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 22 14:13:20.725: INFO: validating pod update-demo-nautilus-l9769
Dec 22 14:13:20.751: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 22 14:13:20.751: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 22 14:13:20.751: INFO: update-demo-nautilus-l9769 is verified up and running
STEP: rolling-update to new replication controller
Dec 22 14:13:20.753: INFO: scanned /root for discovery docs: 
Dec 22 14:13:20.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-6807'
Dec 22 14:13:53.690: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 22 14:13:53.690: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 22 14:13:53.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6807'
Dec 22 14:13:53.846: INFO: stderr: ""
Dec 22 14:13:53.847: INFO: stdout: "update-demo-kitten-4p52r update-demo-kitten-ht6nn update-demo-nautilus-l9769 "
STEP: Replicas for name=update-demo: expected=2 actual=3
Dec 22 14:13:58.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6807'
Dec 22 14:13:58.990: INFO: stderr: ""
Dec 22 14:13:58.990: INFO: stdout: "update-demo-kitten-4p52r update-demo-kitten-ht6nn "
Dec 22 14:13:58.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-4p52r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6807'
Dec 22 14:13:59.106: INFO: stderr: ""
Dec 22 14:13:59.106: INFO: stdout: "true"
Dec 22 14:13:59.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-4p52r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6807'
Dec 22 14:13:59.198: INFO: stderr: ""
Dec 22 14:13:59.198: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 22 14:13:59.198: INFO: validating pod update-demo-kitten-4p52r
Dec 22 14:13:59.217: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 22 14:13:59.217: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 22 14:13:59.217: INFO: update-demo-kitten-4p52r is verified up and running
Dec 22 14:13:59.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ht6nn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6807'
Dec 22 14:13:59.298: INFO: stderr: ""
Dec 22 14:13:59.299: INFO: stdout: "true"
Dec 22 14:13:59.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ht6nn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6807'
Dec 22 14:13:59.404: INFO: stderr: ""
Dec 22 14:13:59.404: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 22 14:13:59.404: INFO: validating pod update-demo-kitten-ht6nn
Dec 22 14:13:59.428: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 22 14:13:59.428: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 22 14:13:59.428: INFO: update-demo-kitten-ht6nn is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:13:59.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6807" for this suite.
Dec 22 14:14:27.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:14:27.601: INFO: namespace kubectl-6807 deletion completed in 28.168386698s

• [SLOW TEST:81.499 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
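The run's own stderr flags rolling-update as deprecated in favor of rollout. With a Deployment, the nautilus-to-kitten image swap the spec performs reduces to (deployment name illustrative):

kubectl set image deployment/update-demo update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0
kubectl rollout status deployment/update-demo
kubectl rollout undo deployment/update-demo     # roll back if the new pods misbehave
------------------------------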
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:14:27.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Dec 22 14:14:28.665: INFO: Pod name wrapped-volume-race-5eda9d20-7aec-48c1-afb4-b8be18092aca: Found 0 pods out of 5
Dec 22 14:14:33.685: INFO: Pod name wrapped-volume-race-5eda9d20-7aec-48c1-afb4-b8be18092aca: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-5eda9d20-7aec-48c1-afb4-b8be18092aca in namespace emptydir-wrapper-3277, will wait for the garbage collector to delete the pods
Dec 22 14:14:59.888: INFO: Deleting ReplicationController wrapped-volume-race-5eda9d20-7aec-48c1-afb4-b8be18092aca took: 58.168759ms
Dec 22 14:15:00.288: INFO: Terminating ReplicationController wrapped-volume-race-5eda9d20-7aec-48c1-afb4-b8be18092aca pods took: 400.461824ms
STEP: Creating RC which spawns configmap-volume pods
Dec 22 14:15:46.819: INFO: Pod name wrapped-volume-race-40a948cf-2ad9-4b13-91b6-c2013f00eb39: Found 0 pods out of 5
Dec 22 14:15:51.832: INFO: Pod name wrapped-volume-race-40a948cf-2ad9-4b13-91b6-c2013f00eb39: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-40a948cf-2ad9-4b13-91b6-c2013f00eb39 in namespace emptydir-wrapper-3277, will wait for the garbage collector to delete the pods
Dec 22 14:16:29.985: INFO: Deleting ReplicationController wrapped-volume-race-40a948cf-2ad9-4b13-91b6-c2013f00eb39 took: 24.092116ms
Dec 22 14:16:30.386: INFO: Terminating ReplicationController wrapped-volume-race-40a948cf-2ad9-4b13-91b6-c2013f00eb39 pods took: 400.894306ms
STEP: Creating RC which spawns configmap-volume pods
Dec 22 14:17:17.084: INFO: Pod name wrapped-volume-race-557efb65-40e9-4e34-8d59-a0a0bbfda99f: Found 0 pods out of 5
Dec 22 14:17:22.103: INFO: Pod name wrapped-volume-race-557efb65-40e9-4e34-8d59-a0a0bbfda99f: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-557efb65-40e9-4e34-8d59-a0a0bbfda99f in namespace emptydir-wrapper-3277, will wait for the garbage collector to delete the pods
Dec 22 14:17:56.224: INFO: Deleting ReplicationController wrapped-volume-race-557efb65-40e9-4e34-8d59-a0a0bbfda99f took: 34.158587ms
Dec 22 14:17:56.524: INFO: Terminating ReplicationController wrapped-volume-race-557efb65-40e9-4e34-8d59-a0a0bbfda99f pods took: 300.430509ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:18:48.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3277" for this suite.
Dec 22 14:18:58.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:18:58.319: INFO: namespace emptydir-wrapper-3277 deletion completed in 10.154662252s

• [SLOW TEST:270.718 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
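A sketch of the shape being stressed: 50 ConfigMaps mounted into every pod of a 5-replica RC, created and torn down three times, so concurrent mounts race on the emptyDir wrapper the kubelet uses for such volumes. Counts are from this run; names are illustrative.

for i in $(seq 1 50); do
  kubectl create configmap "race-cm-$i" --from-literal=data-1=value-1 -n emptydir-wrapper-3277
done
# ...the RC's pod template then carries one configMap volume (plus matching
# volumeMount) per ConfigMap above, and the spec asserts all 5 pods reach Running.
------------------------------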
SSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:18:58.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Dec 22 14:18:58.397: INFO: Waiting up to 5m0s for pod "var-expansion-5d2ee541-8823-4e3b-bd32-bf4afb2c65be" in namespace "var-expansion-378" to be "success or failure"
Dec 22 14:18:58.413: INFO: Pod "var-expansion-5d2ee541-8823-4e3b-bd32-bf4afb2c65be": Phase="Pending", Reason="", readiness=false. Elapsed: 16.312609ms
Dec 22 14:19:00.430: INFO: Pod "var-expansion-5d2ee541-8823-4e3b-bd32-bf4afb2c65be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032940903s
Dec 22 14:19:02.439: INFO: Pod "var-expansion-5d2ee541-8823-4e3b-bd32-bf4afb2c65be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042069492s
Dec 22 14:19:04.529: INFO: Pod "var-expansion-5d2ee541-8823-4e3b-bd32-bf4afb2c65be": Phase="Pending", Reason="", readiness=false. Elapsed: 6.132687619s
Dec 22 14:19:06.551: INFO: Pod "var-expansion-5d2ee541-8823-4e3b-bd32-bf4afb2c65be": Phase="Pending", Reason="", readiness=false. Elapsed: 8.154070762s
Dec 22 14:19:08.556: INFO: Pod "var-expansion-5d2ee541-8823-4e3b-bd32-bf4afb2c65be": Phase="Pending", Reason="", readiness=false. Elapsed: 10.159308664s
Dec 22 14:19:10.573: INFO: Pod "var-expansion-5d2ee541-8823-4e3b-bd32-bf4afb2c65be": Phase="Pending", Reason="", readiness=false. Elapsed: 12.175994503s
Dec 22 14:19:12.590: INFO: Pod "var-expansion-5d2ee541-8823-4e3b-bd32-bf4afb2c65be": Phase="Pending", Reason="", readiness=false. Elapsed: 14.193535831s
Dec 22 14:19:14.621: INFO: Pod "var-expansion-5d2ee541-8823-4e3b-bd32-bf4afb2c65be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.223955306s
STEP: Saw pod success
Dec 22 14:19:14.621: INFO: Pod "var-expansion-5d2ee541-8823-4e3b-bd32-bf4afb2c65be" satisfied condition "success or failure"
Dec 22 14:19:14.631: INFO: Trying to get logs from node iruya-node pod var-expansion-5d2ee541-8823-4e3b-bd32-bf4afb2c65be container dapi-container: 
STEP: delete the pod
Dec 22 14:19:14.792: INFO: Waiting for pod var-expansion-5d2ee541-8823-4e3b-bd32-bf4afb2c65be to disappear
Dec 22 14:19:14.804: INFO: Pod var-expansion-5d2ee541-8823-4e3b-bd32-bf4afb2c65be no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:19:14.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-378" for this suite.
Dec 22 14:19:20.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:19:20.990: INFO: namespace var-expansion-378 deletion completed in 6.16460978s

• [SLOW TEST:22.670 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
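A sketch of the substitution under test: $(VAR) references in command/args are expanded by the kubelet from the container's env before the process starts, so no shell is needed for the expansion. Names are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo           # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c"]
    args: ["echo $(TEST_VAR)"]       # kubelet substitutes test-value here
    env:
    - name: TEST_VAR
      value: test-value
------------------------------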
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:19:20.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 22 14:22:25.556: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 14:22:25.564: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 14:22:27.564: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 14:22:27.572: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 14:22:29.565: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 14:22:29.573: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 14:22:31.565: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 14:22:31.579: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 14:22:33.564: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 14:22:33.574: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 14:22:35.564: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 14:22:35.575: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 14:22:37.565: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 14:22:37.577: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 14:22:39.564: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 14:22:39.575: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 14:22:41.565: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 14:22:41.586: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 14:22:43.565: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 14:22:43.604: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 14:22:45.565: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 14:22:45.574: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 14:22:47.565: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 14:22:47.573: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 14:22:49.565: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 14:22:49.577: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 14:22:51.565: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 14:22:51.573: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 14:22:53.564: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 14:22:53.577: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 14:22:55.564: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 14:22:55.575: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 14:22:57.564: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 14:22:57.573: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:22:57.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7962" for this suite.
Dec 22 14:23:21.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:23:21.878: INFO: namespace container-lifecycle-hook-7962 deletion completed in 24.295629421s

• [SLOW TEST:240.887 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
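A sketch of a pod-with-poststart-exec-hook-style pod; in the suite the hook calls back to the handler pod created in BeforeEach, here it simply writes a marker file. The container is not considered started until the hook returns.

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook   # mirrors the pod above
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo poststart > /tmp/hook.done"]
------------------------------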
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:23:21.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9997.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9997.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9997.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9997.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9997.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9997.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9997.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9997.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9997.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9997.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9997.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9997.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9997.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 5.10.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.10.5_udp@PTR;check="$$(dig +tcp +noall +answer +search 5.10.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.10.5_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie:
    for i in `seq 1 600`; do
      check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9997.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9997.svc.cluster.local;
      check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9997.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9997.svc.cluster.local;
      check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9997.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9997.svc.cluster.local;
      check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9997.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9997.svc.cluster.local;
      check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9997.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9997.svc.cluster.local;
      check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9997.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9997.svc.cluster.local;
      podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9997.pod.cluster.local"}');
      check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;
      check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;
      check="$$(dig +notcp +noall +answer +search 5.10.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.10.5_udp@PTR;
      check="$$(dig +tcp +noall +answer +search 5.10.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.10.5_tcp@PTR;
      sleep 1;
    done
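Each iteration of the loop resolves the same set of names over UDP (+notcp) and TCP (+tcp); any non-empty answer section drops an OK marker under /results, which the framework polls for afterwards (the doubled $$ is template escaping for a single $). A one-off spot check from inside the namespace would look like this, with <probe-pod> standing in for the actual pod name:

    kubectl exec -n dns-9997 <probe-pod> -- dig +short dns-test-service.dns-9997.svc.cluster.local A
    kubectl exec -n dns-9997 <probe-pod> -- dig +tcp +short _http._tcp.dns-test-service.dns-9997.svc.cluster.local SRV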

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 22 14:23:34.452: INFO: Unable to read wheezy_udp@dns-test-service.dns-9997.svc.cluster.local from pod dns-9997/dns-test-93153c20-1b0d-48c5-a075-d2c786626665: the server could not find the requested resource (get pods dns-test-93153c20-1b0d-48c5-a075-d2c786626665)
Dec 22 14:23:34.497: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9997.svc.cluster.local from pod dns-9997/dns-test-93153c20-1b0d-48c5-a075-d2c786626665: the server could not find the requested resource (get pods dns-test-93153c20-1b0d-48c5-a075-d2c786626665)
Dec 22 14:23:34.504: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9997.svc.cluster.local from pod dns-9997/dns-test-93153c20-1b0d-48c5-a075-d2c786626665: the server could not find the requested resource (get pods dns-test-93153c20-1b0d-48c5-a075-d2c786626665)
Dec 22 14:23:34.512: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9997.svc.cluster.local from pod dns-9997/dns-test-93153c20-1b0d-48c5-a075-d2c786626665: the server could not find the requested resource (get pods dns-test-93153c20-1b0d-48c5-a075-d2c786626665)
Dec 22 14:23:34.527: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-9997.svc.cluster.local from pod dns-9997/dns-test-93153c20-1b0d-48c5-a075-d2c786626665: the server could not find the requested resource (get pods dns-test-93153c20-1b0d-48c5-a075-d2c786626665)
Dec 22 14:23:34.540: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-9997.svc.cluster.local from pod dns-9997/dns-test-93153c20-1b0d-48c5-a075-d2c786626665: the server could not find the requested resource (get pods dns-test-93153c20-1b0d-48c5-a075-d2c786626665)
Dec 22 14:23:34.548: INFO: Unable to read wheezy_udp@PodARecord from pod dns-9997/dns-test-93153c20-1b0d-48c5-a075-d2c786626665: the server could not find the requested resource (get pods dns-test-93153c20-1b0d-48c5-a075-d2c786626665)
Dec 22 14:23:34.553: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-9997/dns-test-93153c20-1b0d-48c5-a075-d2c786626665: the server could not find the requested resource (get pods dns-test-93153c20-1b0d-48c5-a075-d2c786626665)
Dec 22 14:23:34.557: INFO: Unable to read 10.101.10.5_udp@PTR from pod dns-9997/dns-test-93153c20-1b0d-48c5-a075-d2c786626665: the server could not find the requested resource (get pods dns-test-93153c20-1b0d-48c5-a075-d2c786626665)
Dec 22 14:23:34.561: INFO: Unable to read 10.101.10.5_tcp@PTR from pod dns-9997/dns-test-93153c20-1b0d-48c5-a075-d2c786626665: the server could not find the requested resource (get pods dns-test-93153c20-1b0d-48c5-a075-d2c786626665)
Dec 22 14:23:34.565: INFO: Unable to read jessie_udp@dns-test-service.dns-9997.svc.cluster.local from pod dns-9997/dns-test-93153c20-1b0d-48c5-a075-d2c786626665: the server could not find the requested resource (get pods dns-test-93153c20-1b0d-48c5-a075-d2c786626665)
Dec 22 14:23:34.568: INFO: Unable to read jessie_tcp@dns-test-service.dns-9997.svc.cluster.local from pod dns-9997/dns-test-93153c20-1b0d-48c5-a075-d2c786626665: the server could not find the requested resource (get pods dns-test-93153c20-1b0d-48c5-a075-d2c786626665)
Dec 22 14:23:34.571: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9997.svc.cluster.local from pod dns-9997/dns-test-93153c20-1b0d-48c5-a075-d2c786626665: the server could not find the requested resource (get pods dns-test-93153c20-1b0d-48c5-a075-d2c786626665)
Dec 22 14:23:34.573: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9997.svc.cluster.local from pod dns-9997/dns-test-93153c20-1b0d-48c5-a075-d2c786626665: the server could not find the requested resource (get pods dns-test-93153c20-1b0d-48c5-a075-d2c786626665)
Dec 22 14:23:34.576: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-9997.svc.cluster.local from pod dns-9997/dns-test-93153c20-1b0d-48c5-a075-d2c786626665: the server could not find the requested resource (get pods dns-test-93153c20-1b0d-48c5-a075-d2c786626665)
Dec 22 14:23:34.579: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-9997.svc.cluster.local from pod dns-9997/dns-test-93153c20-1b0d-48c5-a075-d2c786626665: the server could not find the requested resource (get pods dns-test-93153c20-1b0d-48c5-a075-d2c786626665)
Dec 22 14:23:34.581: INFO: Unable to read jessie_udp@PodARecord from pod dns-9997/dns-test-93153c20-1b0d-48c5-a075-d2c786626665: the server could not find the requested resource (get pods dns-test-93153c20-1b0d-48c5-a075-d2c786626665)
Dec 22 14:23:34.585: INFO: Unable to read jessie_tcp@PodARecord from pod dns-9997/dns-test-93153c20-1b0d-48c5-a075-d2c786626665: the server could not find the requested resource (get pods dns-test-93153c20-1b0d-48c5-a075-d2c786626665)
Dec 22 14:23:34.587: INFO: Unable to read 10.101.10.5_udp@PTR from pod dns-9997/dns-test-93153c20-1b0d-48c5-a075-d2c786626665: the server could not find the requested resource (get pods dns-test-93153c20-1b0d-48c5-a075-d2c786626665)
Dec 22 14:23:34.590: INFO: Unable to read 10.101.10.5_tcp@PTR from pod dns-9997/dns-test-93153c20-1b0d-48c5-a075-d2c786626665: the server could not find the requested resource (get pods dns-test-93153c20-1b0d-48c5-a075-d2c786626665)
Dec 22 14:23:34.590: INFO: Lookups using dns-9997/dns-test-93153c20-1b0d-48c5-a075-d2c786626665 failed for: [wheezy_udp@dns-test-service.dns-9997.svc.cluster.local wheezy_tcp@dns-test-service.dns-9997.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9997.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9997.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-9997.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-9997.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.101.10.5_udp@PTR 10.101.10.5_tcp@PTR jessie_udp@dns-test-service.dns-9997.svc.cluster.local jessie_tcp@dns-test-service.dns-9997.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9997.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9997.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-9997.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-9997.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.101.10.5_udp@PTR 10.101.10.5_tcp@PTR]

Dec 22 14:23:39.758: INFO: DNS probes using dns-9997/dns-test-93153c20-1b0d-48c5-a075-d2c786626665 succeeded
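The "server could not find the requested resource" errors above are not DNS failures: the framework reads each /results marker through the probe pod's proxy subresource, and those reads fail until the pod's webserver is ready, which is why the identical lookups succeed five seconds later. Fetching one marker by hand would look roughly like this (path assumed from the test's /results layout):

    kubectl get --raw "/api/v1/namespaces/dns-9997/pods/dns-test-93153c20-1b0d-48c5-a075-d2c786626665/proxy/results/wheezy_udp@dns-test-service.dns-9997.svc.cluster.local"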

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:23:40.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9997" for this suite.
Dec 22 14:23:46.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:23:46.316: INFO: namespace dns-9997 deletion completed in 6.108640022s

• [SLOW TEST:24.437 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:23:46.317: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
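The sequence above is reproducible by hand: create a namespace, run a pod in it, delete and recreate the namespace, then confirm no pods survived. A rough equivalent (names illustrative):

    kubectl create namespace nsdelete-demo
    kubectl run demo --image=nginx:1.14-alpine --restart=Never -n nsdelete-demo
    kubectl wait --for=condition=Ready pod/demo -n nsdelete-demo
    kubectl delete namespace nsdelete-demo --wait=true
    kubectl create namespace nsdelete-demo
    kubectl get pods -n nsdelete-demo   # expect: No resources found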
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:24:19.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5882" for this suite.
Dec 22 14:24:25.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:24:25.880: INFO: namespace namespaces-5882 deletion completed in 6.188521275s
STEP: Destroying namespace "nsdeletetest-2492" for this suite.
Dec 22 14:24:25.883: INFO: Namespace nsdeletetest-2492 was already deleted
STEP: Destroying namespace "nsdeletetest-4632" for this suite.
Dec 22 14:24:31.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:24:32.062: INFO: namespace nsdeletetest-4632 deletion completed in 6.178440799s

• [SLOW TEST:45.745 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:24:32.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-wxzpr in namespace proxy-4337
I1222 14:24:32.217644       8 runners.go:180] Created replication controller with name: proxy-service-wxzpr, namespace: proxy-4337, replica count: 1
I1222 14:24:33.268430       8 runners.go:180] proxy-service-wxzpr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1222 14:24:34.269090       8 runners.go:180] proxy-service-wxzpr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1222 14:24:35.269590       8 runners.go:180] proxy-service-wxzpr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1222 14:24:36.270026       8 runners.go:180] proxy-service-wxzpr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1222 14:24:37.270225       8 runners.go:180] proxy-service-wxzpr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1222 14:24:38.270473       8 runners.go:180] proxy-service-wxzpr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1222 14:24:39.270878       8 runners.go:180] proxy-service-wxzpr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1222 14:24:40.271159       8 runners.go:180] proxy-service-wxzpr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1222 14:24:41.271558       8 runners.go:180] proxy-service-wxzpr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1222 14:24:42.271830       8 runners.go:180] proxy-service-wxzpr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1222 14:24:43.272109       8 runners.go:180] proxy-service-wxzpr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1222 14:24:44.273901       8 runners.go:180] proxy-service-wxzpr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1222 14:24:45.276807       8 runners.go:180] proxy-service-wxzpr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1222 14:24:46.277145       8 runners.go:180] proxy-service-wxzpr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1222 14:24:47.277466       8 runners.go:180] proxy-service-wxzpr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1222 14:24:48.277780       8 runners.go:180] proxy-service-wxzpr Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 22 14:24:48.305: INFO: setup took 16.165195601s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
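The 16 cases are every combination the apiserver proxy accepts: pods vs. services, a bare name vs. an http:/https: scheme prefix, and numeric vs. named ports, each in the form .../pods/<name>:<port>/proxy/<path> or .../services/<name>:<portname>/proxy/<path>. Two of the requests below, issued by hand against the same endpoints:

    kubectl get --raw /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:160/proxy/
    kubectl get --raw /api/v1/namespaces/proxy-4337/services/proxy-service-wxzpr:portname1/proxy/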
Dec 22 14:24:48.332: INFO: (0) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:160/proxy/: foo (200; 26.511969ms)
Dec 22 14:24:48.332: INFO: (0) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:1080/proxy/: test<... (200; 26.620865ms)
Dec 22 14:24:48.332: INFO: (0) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:162/proxy/: bar (200; 26.475464ms)
Dec 22 14:24:48.332: INFO: (0) /api/v1/namespaces/proxy-4337/services/proxy-service-wxzpr:portname2/proxy/: bar (200; 26.577429ms)
Dec 22 14:24:48.332: INFO: (0) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:1080/proxy/: ... (200; 26.18505ms)
Dec 22 14:24:48.334: INFO: (0) /api/v1/namespaces/proxy-4337/services/proxy-service-wxzpr:portname1/proxy/: foo (200; 27.894525ms)
Dec 22 14:24:48.334: INFO: (0) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:160/proxy/: foo (200; 27.697559ms)
Dec 22 14:24:48.334: INFO: (0) /api/v1/namespaces/proxy-4337/services/http:proxy-service-wxzpr:portname2/proxy/: bar (200; 28.25277ms)
Dec 22 14:24:48.336: INFO: (0) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:162/proxy/: bar (200; 29.640507ms)
Dec 22 14:24:48.343: INFO: (0) /api/v1/namespaces/proxy-4337/services/http:proxy-service-wxzpr:portname1/proxy/: foo (200; 36.333813ms)
Dec 22 14:24:48.345: INFO: (0) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t/proxy/: test (200; 38.894202ms)
Dec 22 14:24:48.364: INFO: (0) /api/v1/namespaces/proxy-4337/services/https:proxy-service-wxzpr:tlsportname2/proxy/: tls qux (200; 56.939128ms)
Dec 22 14:24:48.364: INFO: (0) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:460/proxy/: tls baz (200; 56.783433ms)
Dec 22 14:24:48.364: INFO: (0) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:443/proxy/: ... (200; 12.963ms)
Dec 22 14:24:48.380: INFO: (1) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:462/proxy/: tls qux (200; 13.592981ms)
Dec 22 14:24:48.380: INFO: (1) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:1080/proxy/: test<... (200; 13.444903ms)
Dec 22 14:24:48.380: INFO: (1) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:443/proxy/: test (200; 18.371682ms)
Dec 22 14:24:48.385: INFO: (1) /api/v1/namespaces/proxy-4337/services/https:proxy-service-wxzpr:tlsportname1/proxy/: tls baz (200; 18.596301ms)
Dec 22 14:24:48.385: INFO: (1) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:160/proxy/: foo (200; 18.907478ms)
Dec 22 14:24:48.386: INFO: (1) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:162/proxy/: bar (200; 20.002621ms)
Dec 22 14:24:48.386: INFO: (1) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:460/proxy/: tls baz (200; 19.963821ms)
Dec 22 14:24:48.386: INFO: (1) /api/v1/namespaces/proxy-4337/services/http:proxy-service-wxzpr:portname1/proxy/: foo (200; 20.105949ms)
Dec 22 14:24:48.397: INFO: (2) /api/v1/namespaces/proxy-4337/services/proxy-service-wxzpr:portname2/proxy/: bar (200; 10.466816ms)
Dec 22 14:24:48.397: INFO: (2) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:160/proxy/: foo (200; 10.500522ms)
Dec 22 14:24:48.397: INFO: (2) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:160/proxy/: foo (200; 10.514169ms)
Dec 22 14:24:48.397: INFO: (2) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:1080/proxy/: ... (200; 10.641868ms)
Dec 22 14:24:48.398: INFO: (2) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t/proxy/: test (200; 11.069645ms)
Dec 22 14:24:48.398: INFO: (2) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:462/proxy/: tls qux (200; 11.137058ms)
Dec 22 14:24:48.399: INFO: (2) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:460/proxy/: tls baz (200; 12.345751ms)
Dec 22 14:24:48.400: INFO: (2) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:1080/proxy/: test<... (200; 13.323811ms)
Dec 22 14:24:48.400: INFO: (2) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:443/proxy/: test (200; 28.647331ms)
Dec 22 14:24:48.435: INFO: (3) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:162/proxy/: bar (200; 28.621473ms)
Dec 22 14:24:48.435: INFO: (3) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:443/proxy/: ... (200; 29.004056ms)
Dec 22 14:24:48.435: INFO: (3) /api/v1/namespaces/proxy-4337/services/https:proxy-service-wxzpr:tlsportname2/proxy/: tls qux (200; 28.764827ms)
Dec 22 14:24:48.435: INFO: (3) /api/v1/namespaces/proxy-4337/services/http:proxy-service-wxzpr:portname2/proxy/: bar (200; 29.032091ms)
Dec 22 14:24:48.435: INFO: (3) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:160/proxy/: foo (200; 29.115ms)
Dec 22 14:24:48.435: INFO: (3) /api/v1/namespaces/proxy-4337/services/proxy-service-wxzpr:portname1/proxy/: foo (200; 29.256264ms)
Dec 22 14:24:48.435: INFO: (3) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:160/proxy/: foo (200; 29.718218ms)
Dec 22 14:24:48.435: INFO: (3) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:1080/proxy/: test<... (200; 29.554781ms)
Dec 22 14:24:48.436: INFO: (3) /api/v1/namespaces/proxy-4337/services/proxy-service-wxzpr:portname2/proxy/: bar (200; 29.926156ms)
Dec 22 14:24:48.437: INFO: (3) /api/v1/namespaces/proxy-4337/services/https:proxy-service-wxzpr:tlsportname1/proxy/: tls baz (200; 30.511791ms)
Dec 22 14:24:48.438: INFO: (3) /api/v1/namespaces/proxy-4337/services/http:proxy-service-wxzpr:portname1/proxy/: foo (200; 31.766434ms)
Dec 22 14:24:48.445: INFO: (4) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:1080/proxy/: ... (200; 7.010275ms)
Dec 22 14:24:48.445: INFO: (4) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:460/proxy/: tls baz (200; 7.344091ms)
Dec 22 14:24:48.445: INFO: (4) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:162/proxy/: bar (200; 7.174817ms)
Dec 22 14:24:48.448: INFO: (4) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:443/proxy/: test (200; 11.273575ms)
Dec 22 14:24:48.450: INFO: (4) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:1080/proxy/: test<... (200; 11.814302ms)
Dec 22 14:24:48.450: INFO: (4) /api/v1/namespaces/proxy-4337/services/proxy-service-wxzpr:portname1/proxy/: foo (200; 11.947643ms)
Dec 22 14:24:48.450: INFO: (4) /api/v1/namespaces/proxy-4337/services/https:proxy-service-wxzpr:tlsportname2/proxy/: tls qux (200; 11.733842ms)
Dec 22 14:24:48.450: INFO: (4) /api/v1/namespaces/proxy-4337/services/https:proxy-service-wxzpr:tlsportname1/proxy/: tls baz (200; 11.829855ms)
Dec 22 14:24:48.450: INFO: (4) /api/v1/namespaces/proxy-4337/services/http:proxy-service-wxzpr:portname2/proxy/: bar (200; 11.913026ms)
Dec 22 14:24:48.450: INFO: (4) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:160/proxy/: foo (200; 11.616815ms)
Dec 22 14:24:48.450: INFO: (4) /api/v1/namespaces/proxy-4337/services/http:proxy-service-wxzpr:portname1/proxy/: foo (200; 12.258905ms)
Dec 22 14:24:48.450: INFO: (4) /api/v1/namespaces/proxy-4337/services/proxy-service-wxzpr:portname2/proxy/: bar (200; 11.934802ms)
Dec 22 14:24:48.472: INFO: (5) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:1080/proxy/: test<... (200; 21.457943ms)
Dec 22 14:24:48.472: INFO: (5) /api/v1/namespaces/proxy-4337/services/http:proxy-service-wxzpr:portname2/proxy/: bar (200; 21.588308ms)
Dec 22 14:24:48.472: INFO: (5) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:160/proxy/: foo (200; 21.971529ms)
Dec 22 14:24:48.472: INFO: (5) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:160/proxy/: foo (200; 21.936374ms)
Dec 22 14:24:48.472: INFO: (5) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:460/proxy/: tls baz (200; 21.663434ms)
Dec 22 14:24:48.472: INFO: (5) /api/v1/namespaces/proxy-4337/services/proxy-service-wxzpr:portname1/proxy/: foo (200; 21.791982ms)
Dec 22 14:24:48.472: INFO: (5) /api/v1/namespaces/proxy-4337/services/http:proxy-service-wxzpr:portname1/proxy/: foo (200; 21.917493ms)
Dec 22 14:24:48.472: INFO: (5) /api/v1/namespaces/proxy-4337/services/https:proxy-service-wxzpr:tlsportname1/proxy/: tls baz (200; 22.145993ms)
Dec 22 14:24:48.472: INFO: (5) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:1080/proxy/: ... (200; 21.908365ms)
Dec 22 14:24:48.473: INFO: (5) /api/v1/namespaces/proxy-4337/services/https:proxy-service-wxzpr:tlsportname2/proxy/: tls qux (200; 22.509051ms)
Dec 22 14:24:48.473: INFO: (5) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:162/proxy/: bar (200; 22.498672ms)
Dec 22 14:24:48.473: INFO: (5) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:462/proxy/: tls qux (200; 22.743569ms)
Dec 22 14:24:48.473: INFO: (5) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t/proxy/: test (200; 23.194113ms)
Dec 22 14:24:48.474: INFO: (5) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:443/proxy/: test (200; 11.393047ms)
Dec 22 14:24:48.486: INFO: (6) /api/v1/namespaces/proxy-4337/services/https:proxy-service-wxzpr:tlsportname1/proxy/: tls baz (200; 12.407614ms)
Dec 22 14:24:48.486: INFO: (6) /api/v1/namespaces/proxy-4337/services/http:proxy-service-wxzpr:portname2/proxy/: bar (200; 12.208256ms)
Dec 22 14:24:48.488: INFO: (6) /api/v1/namespaces/proxy-4337/services/http:proxy-service-wxzpr:portname1/proxy/: foo (200; 13.533856ms)
Dec 22 14:24:48.488: INFO: (6) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:1080/proxy/: test<... (200; 14.118881ms)
Dec 22 14:24:48.488: INFO: (6) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:160/proxy/: foo (200; 14.20902ms)
Dec 22 14:24:48.488: INFO: (6) /api/v1/namespaces/proxy-4337/services/proxy-service-wxzpr:portname2/proxy/: bar (200; 14.147963ms)
Dec 22 14:24:48.488: INFO: (6) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:1080/proxy/: ... (200; 14.362592ms)
Dec 22 14:24:48.489: INFO: (6) /api/v1/namespaces/proxy-4337/services/https:proxy-service-wxzpr:tlsportname2/proxy/: tls qux (200; 14.662306ms)
Dec 22 14:24:48.489: INFO: (6) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:162/proxy/: bar (200; 14.913722ms)
Dec 22 14:24:48.492: INFO: (6) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:160/proxy/: foo (200; 17.754626ms)
Dec 22 14:24:48.492: INFO: (6) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:460/proxy/: tls baz (200; 18.29388ms)
Dec 22 14:24:48.493: INFO: (6) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:462/proxy/: tls qux (200; 18.671064ms)
Dec 22 14:24:48.493: INFO: (6) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:443/proxy/: test (200; 16.602354ms)
Dec 22 14:24:48.516: INFO: (7) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:1080/proxy/: test<... (200; 16.230877ms)
Dec 22 14:24:48.517: INFO: (7) /api/v1/namespaces/proxy-4337/services/https:proxy-service-wxzpr:tlsportname2/proxy/: tls qux (200; 17.332725ms)
Dec 22 14:24:48.517: INFO: (7) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:162/proxy/: bar (200; 17.660447ms)
Dec 22 14:24:48.517: INFO: (7) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:460/proxy/: tls baz (200; 17.362427ms)
Dec 22 14:24:48.517: INFO: (7) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:443/proxy/: ... (200; 17.676373ms)
Dec 22 14:24:48.517: INFO: (7) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:162/proxy/: bar (200; 17.853605ms)
Dec 22 14:24:48.517: INFO: (7) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:160/proxy/: foo (200; 17.6084ms)
Dec 22 14:24:48.521: INFO: (7) /api/v1/namespaces/proxy-4337/services/proxy-service-wxzpr:portname1/proxy/: foo (200; 21.14434ms)
Dec 22 14:24:48.521: INFO: (7) /api/v1/namespaces/proxy-4337/services/proxy-service-wxzpr:portname2/proxy/: bar (200; 21.357944ms)
Dec 22 14:24:48.521: INFO: (7) /api/v1/namespaces/proxy-4337/services/http:proxy-service-wxzpr:portname2/proxy/: bar (200; 21.33636ms)
Dec 22 14:24:48.523: INFO: (7) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:160/proxy/: foo (200; 23.479644ms)
Dec 22 14:24:48.538: INFO: (8) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:160/proxy/: foo (200; 14.21369ms)
Dec 22 14:24:48.539: INFO: (8) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:460/proxy/: tls baz (200; 15.863495ms)
Dec 22 14:24:48.539: INFO: (8) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:162/proxy/: bar (200; 15.812397ms)
Dec 22 14:24:48.539: INFO: (8) /api/v1/namespaces/proxy-4337/services/http:proxy-service-wxzpr:portname2/proxy/: bar (200; 15.952446ms)
Dec 22 14:24:48.540: INFO: (8) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:462/proxy/: tls qux (200; 17.155937ms)
Dec 22 14:24:48.541: INFO: (8) /api/v1/namespaces/proxy-4337/services/http:proxy-service-wxzpr:portname1/proxy/: foo (200; 17.351203ms)
Dec 22 14:24:48.541: INFO: (8) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t/proxy/: test (200; 17.247637ms)
Dec 22 14:24:48.541: INFO: (8) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:443/proxy/: ... (200; 23.979619ms)
Dec 22 14:24:48.547: INFO: (8) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:162/proxy/: bar (200; 24.006548ms)
Dec 22 14:24:48.547: INFO: (8) /api/v1/namespaces/proxy-4337/services/proxy-service-wxzpr:portname1/proxy/: foo (200; 23.848133ms)
Dec 22 14:24:48.547: INFO: (8) /api/v1/namespaces/proxy-4337/services/https:proxy-service-wxzpr:tlsportname1/proxy/: tls baz (200; 23.902773ms)
Dec 22 14:24:48.547: INFO: (8) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:1080/proxy/: test<... (200; 23.954094ms)
Dec 22 14:24:48.549: INFO: (8) /api/v1/namespaces/proxy-4337/services/https:proxy-service-wxzpr:tlsportname2/proxy/: tls qux (200; 25.682727ms)
Dec 22 14:24:48.567: INFO: (9) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:1080/proxy/: test<... (200; 17.518871ms)
Dec 22 14:24:48.567: INFO: (9) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t/proxy/: test (200; 17.784515ms)
Dec 22 14:24:48.567: INFO: (9) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:1080/proxy/: ... (200; 17.417385ms)
Dec 22 14:24:48.568: INFO: (9) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:462/proxy/: tls qux (200; 18.871419ms)
Dec 22 14:24:48.568: INFO: (9) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:162/proxy/: bar (200; 18.527655ms)
Dec 22 14:24:48.568: INFO: (9) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:460/proxy/: tls baz (200; 18.648177ms)
Dec 22 14:24:48.569: INFO: (9) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:162/proxy/: bar (200; 19.620557ms)
Dec 22 14:24:48.570: INFO: (9) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:160/proxy/: foo (200; 19.991322ms)
Dec 22 14:24:48.570: INFO: (9) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:443/proxy/: test (200; 17.459094ms)
Dec 22 14:24:48.595: INFO: (10) /api/v1/namespaces/proxy-4337/services/http:proxy-service-wxzpr:portname1/proxy/: foo (200; 19.102965ms)
Dec 22 14:24:48.595: INFO: (10) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:460/proxy/: tls baz (200; 19.54283ms)
Dec 22 14:24:48.595: INFO: (10) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:1080/proxy/: ... (200; 19.104255ms)
Dec 22 14:24:48.595: INFO: (10) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:1080/proxy/: test<... (200; 20.19941ms)
Dec 22 14:24:48.595: INFO: (10) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:462/proxy/: tls qux (200; 19.596159ms)
Dec 22 14:24:48.595: INFO: (10) /api/v1/namespaces/proxy-4337/services/https:proxy-service-wxzpr:tlsportname2/proxy/: tls qux (200; 18.8603ms)
Dec 22 14:24:48.596: INFO: (10) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:162/proxy/: bar (200; 18.729303ms)
Dec 22 14:24:48.597: INFO: (10) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:160/proxy/: foo (200; 21.119185ms)
Dec 22 14:24:48.597: INFO: (10) /api/v1/namespaces/proxy-4337/services/proxy-service-wxzpr:portname1/proxy/: foo (200; 19.009299ms)
Dec 22 14:24:48.597: INFO: (10) /api/v1/namespaces/proxy-4337/services/http:proxy-service-wxzpr:portname2/proxy/: bar (200; 19.119391ms)
Dec 22 14:24:48.605: INFO: (11) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:162/proxy/: bar (200; 7.225548ms)
Dec 22 14:24:48.605: INFO: (11) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:160/proxy/: foo (200; 7.952075ms)
Dec 22 14:24:48.614: INFO: (11) /api/v1/namespaces/proxy-4337/services/proxy-service-wxzpr:portname2/proxy/: bar (200; 17.152728ms)
Dec 22 14:24:48.615: INFO: (11) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:160/proxy/: foo (200; 17.507272ms)
Dec 22 14:24:48.615: INFO: (11) /api/v1/namespaces/proxy-4337/services/https:proxy-service-wxzpr:tlsportname1/proxy/: tls baz (200; 18.039349ms)
Dec 22 14:24:48.615: INFO: (11) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:460/proxy/: tls baz (200; 17.656263ms)
Dec 22 14:24:48.616: INFO: (11) /api/v1/namespaces/proxy-4337/services/proxy-service-wxzpr:portname1/proxy/: foo (200; 18.09734ms)
Dec 22 14:24:48.617: INFO: (11) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:1080/proxy/: ... (200; 19.634883ms)
Dec 22 14:24:48.617: INFO: (11) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:162/proxy/: bar (200; 19.502623ms)
Dec 22 14:24:48.618: INFO: (11) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:462/proxy/: tls qux (200; 20.645552ms)
Dec 22 14:24:48.618: INFO: (11) /api/v1/namespaces/proxy-4337/services/https:proxy-service-wxzpr:tlsportname2/proxy/: tls qux (200; 20.352999ms)
Dec 22 14:24:48.618: INFO: (11) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:1080/proxy/: test<... (200; 20.36677ms)
Dec 22 14:24:48.618: INFO: (11) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t/proxy/: test (200; 20.519508ms)
Dec 22 14:24:48.618: INFO: (11) /api/v1/namespaces/proxy-4337/services/http:proxy-service-wxzpr:portname1/proxy/: foo (200; 20.917945ms)
Dec 22 14:24:48.618: INFO: (11) /api/v1/namespaces/proxy-4337/services/http:proxy-service-wxzpr:portname2/proxy/: bar (200; 21.113197ms)
Dec 22 14:24:48.619: INFO: (11) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:443/proxy/: test<... (200; 23.998234ms)
Dec 22 14:24:48.643: INFO: (12) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:162/proxy/: bar (200; 23.947216ms)
Dec 22 14:24:48.643: INFO: (12) /api/v1/namespaces/proxy-4337/services/proxy-service-wxzpr:portname2/proxy/: bar (200; 24.342945ms)
Dec 22 14:24:48.645: INFO: (12) /api/v1/namespaces/proxy-4337/services/http:proxy-service-wxzpr:portname1/proxy/: foo (200; 26.270108ms)
Dec 22 14:24:48.645: INFO: (12) /api/v1/namespaces/proxy-4337/services/http:proxy-service-wxzpr:portname2/proxy/: bar (200; 26.529299ms)
Dec 22 14:24:48.646: INFO: (12) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:1080/proxy/: ... (200; 27.727748ms)
Dec 22 14:24:48.648: INFO: (12) /api/v1/namespaces/proxy-4337/services/https:proxy-service-wxzpr:tlsportname1/proxy/: tls baz (200; 28.766486ms)
Dec 22 14:24:48.648: INFO: (12) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:162/proxy/: bar (200; 28.885683ms)
Dec 22 14:24:48.648: INFO: (12) /api/v1/namespaces/proxy-4337/services/proxy-service-wxzpr:portname1/proxy/: foo (200; 29.049341ms)
Dec 22 14:24:48.648: INFO: (12) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:462/proxy/: tls qux (200; 29.045063ms)
Dec 22 14:24:48.648: INFO: (12) /api/v1/namespaces/proxy-4337/services/https:proxy-service-wxzpr:tlsportname2/proxy/: tls qux (200; 29.404288ms)
Dec 22 14:24:48.648: INFO: (12) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t/proxy/: test (200; 29.619069ms)
Dec 22 14:24:48.649: INFO: (12) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:460/proxy/: tls baz (200; 30.450802ms)
Dec 22 14:24:48.650: INFO: (12) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:160/proxy/: foo (200; 30.795817ms)
Dec 22 14:24:48.662: INFO: (13) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t/proxy/: test (200; 11.790798ms)
Dec 22 14:24:48.663: INFO: (13) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:160/proxy/: foo (200; 12.381307ms)
Dec 22 14:24:48.663: INFO: (13) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:1080/proxy/: ... (200; 12.579821ms)
Dec 22 14:24:48.663: INFO: (13) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:460/proxy/: tls baz (200; 12.889595ms)
Dec 22 14:24:48.663: INFO: (13) /api/v1/namespaces/proxy-4337/services/https:proxy-service-wxzpr:tlsportname2/proxy/: tls qux (200; 13.549196ms)
Dec 22 14:24:48.663: INFO: (13) /api/v1/namespaces/proxy-4337/services/proxy-service-wxzpr:portname1/proxy/: foo (200; 13.32846ms)
Dec 22 14:24:48.663: INFO: (13) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:462/proxy/: tls qux (200; 13.59514ms)
Dec 22 14:24:48.663: INFO: (13) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:162/proxy/: bar (200; 13.377605ms)
Dec 22 14:24:48.664: INFO: (13) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:162/proxy/: bar (200; 13.687019ms)
Dec 22 14:24:48.664: INFO: (13) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:443/proxy/: test<... (200; 14.316314ms)
Dec 22 14:24:48.668: INFO: (13) /api/v1/namespaces/proxy-4337/services/http:proxy-service-wxzpr:portname2/proxy/: bar (200; 17.808279ms)
Dec 22 14:24:48.668: INFO: (13) /api/v1/namespaces/proxy-4337/services/http:proxy-service-wxzpr:portname1/proxy/: foo (200; 17.590102ms)
Dec 22 14:24:48.668: INFO: (13) /api/v1/namespaces/proxy-4337/services/proxy-service-wxzpr:portname2/proxy/: bar (200; 17.786651ms)
Dec 22 14:24:48.669: INFO: (13) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:160/proxy/: foo (200; 18.133543ms)
Dec 22 14:24:48.680: INFO: (14) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:162/proxy/: bar (200; 11.16016ms)
Dec 22 14:24:48.681: INFO: (14) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:443/proxy/: ... (200; 11.254932ms)
Dec 22 14:24:48.681: INFO: (14) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:462/proxy/: tls qux (200; 12.308025ms)
Dec 22 14:24:48.683: INFO: (14) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:162/proxy/: bar (200; 14.041781ms)
Dec 22 14:24:48.683: INFO: (14) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t/proxy/: test (200; 13.821741ms)
Dec 22 14:24:48.684: INFO: (14) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:160/proxy/: foo (200; 14.201851ms)
Dec 22 14:24:48.684: INFO: (14) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:460/proxy/: tls baz (200; 14.585869ms)
Dec 22 14:24:48.684: INFO: (14) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:1080/proxy/: test<... (200; 14.779095ms)
Dec 22 14:24:48.684: INFO: (14) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:160/proxy/: foo (200; 14.513765ms)
Dec 22 14:24:48.684: INFO: (14) /api/v1/namespaces/proxy-4337/services/http:proxy-service-wxzpr:portname2/proxy/: bar (200; 15.095201ms)
Dec 22 14:24:48.686: INFO: (14) /api/v1/namespaces/proxy-4337/services/https:proxy-service-wxzpr:tlsportname1/proxy/: tls baz (200; 16.069801ms)
Dec 22 14:24:48.686: INFO: (14) /api/v1/namespaces/proxy-4337/services/https:proxy-service-wxzpr:tlsportname2/proxy/: tls qux (200; 16.010703ms)
Dec 22 14:24:48.686: INFO: (14) /api/v1/namespaces/proxy-4337/services/proxy-service-wxzpr:portname2/proxy/: bar (200; 16.913149ms)
Dec 22 14:24:48.686: INFO: (14) /api/v1/namespaces/proxy-4337/services/http:proxy-service-wxzpr:portname1/proxy/: foo (200; 16.492319ms)
Dec 22 14:24:48.686: INFO: (14) /api/v1/namespaces/proxy-4337/services/proxy-service-wxzpr:portname1/proxy/: foo (200; 17.062945ms)
Dec 22 14:24:48.698: INFO: (15) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:460/proxy/: tls baz (200; 11.332305ms)
Dec 22 14:24:48.698: INFO: (15) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:162/proxy/: bar (200; 11.249145ms)
Dec 22 14:24:48.698: INFO: (15) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:443/proxy/: test (200; 12.087583ms)
Dec 22 14:24:48.699: INFO: (15) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:160/proxy/: foo (200; 12.415414ms)
Dec 22 14:24:48.699: INFO: (15) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:160/proxy/: foo (200; 12.364568ms)
Dec 22 14:24:48.699: INFO: (15) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:462/proxy/: tls qux (200; 12.374128ms)
Dec 22 14:24:48.699: INFO: (15) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:1080/proxy/: ... (200; 12.838021ms)
Dec 22 14:24:48.700: INFO: (15) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:162/proxy/: bar (200; 12.81681ms)
Dec 22 14:24:48.701: INFO: (15) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:1080/proxy/: test<... (200; 14.61316ms)
Dec 22 14:24:48.705: INFO: (15) /api/v1/namespaces/proxy-4337/services/proxy-service-wxzpr:portname1/proxy/: foo (200; 18.950844ms)
Dec 22 14:24:48.707: INFO: (15) /api/v1/namespaces/proxy-4337/services/http:proxy-service-wxzpr:portname2/proxy/: bar (200; 20.321204ms)
Dec 22 14:24:48.707: INFO: (15) /api/v1/namespaces/proxy-4337/services/proxy-service-wxzpr:portname2/proxy/: bar (200; 20.47255ms)
Dec 22 14:24:48.707: INFO: (15) /api/v1/namespaces/proxy-4337/services/http:proxy-service-wxzpr:portname1/proxy/: foo (200; 20.558998ms)
Dec 22 14:24:48.707: INFO: (15) /api/v1/namespaces/proxy-4337/services/https:proxy-service-wxzpr:tlsportname1/proxy/: tls baz (200; 20.443295ms)
Dec 22 14:24:48.708: INFO: (15) /api/v1/namespaces/proxy-4337/services/https:proxy-service-wxzpr:tlsportname2/proxy/: tls qux (200; 21.709415ms)
Dec 22 14:24:48.742: INFO: (16) /api/v1/namespaces/proxy-4337/services/http:proxy-service-wxzpr:portname1/proxy/: foo (200; 33.911334ms)
Dec 22 14:24:48.742: INFO: (16) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:1080/proxy/: ... (200; 33.887257ms)
Dec 22 14:24:48.744: INFO: (16) /api/v1/namespaces/proxy-4337/services/proxy-service-wxzpr:portname2/proxy/: bar (200; 34.962621ms)
Dec 22 14:24:48.745: INFO: (16) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:1080/proxy/: test<... (200; 36.159455ms)
Dec 22 14:24:48.745: INFO: (16) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:460/proxy/: tls baz (200; 36.167873ms)
Dec 22 14:24:48.745: INFO: (16) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:162/proxy/: bar (200; 36.334518ms)
Dec 22 14:24:48.745: INFO: (16) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:160/proxy/: foo (200; 36.106899ms)
Dec 22 14:24:48.745: INFO: (16) /api/v1/namespaces/proxy-4337/services/https:proxy-service-wxzpr:tlsportname1/proxy/: tls baz (200; 36.107503ms)
Dec 22 14:24:48.745: INFO: (16) /api/v1/namespaces/proxy-4337/services/http:proxy-service-wxzpr:portname2/proxy/: bar (200; 36.15916ms)
Dec 22 14:24:48.745: INFO: (16) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:462/proxy/: tls qux (200; 36.348868ms)
Dec 22 14:24:48.745: INFO: (16) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:443/proxy/: test (200; 36.468948ms)
Dec 22 14:24:48.745: INFO: (16) /api/v1/namespaces/proxy-4337/services/https:proxy-service-wxzpr:tlsportname2/proxy/: tls qux (200; 36.389514ms)
Dec 22 14:24:48.745: INFO: (16) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:160/proxy/: foo (200; 36.588781ms)
Dec 22 14:24:48.745: INFO: (16) /api/v1/namespaces/proxy-4337/services/proxy-service-wxzpr:portname1/proxy/: foo (200; 36.447159ms)
Dec 22 14:24:48.745: INFO: (16) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:162/proxy/: bar (200; 36.689409ms)
Dec 22 14:24:48.755: INFO: (17) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:462/proxy/: tls qux (200; 9.32425ms)
Dec 22 14:24:48.756: INFO: (17) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:1080/proxy/: test<... (200; 10.247506ms)
Dec 22 14:24:48.757: INFO: (17) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:160/proxy/: foo (200; 11.556621ms)
Dec 22 14:24:48.758: INFO: (17) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:162/proxy/: bar (200; 11.791044ms)
Dec 22 14:24:48.758: INFO: (17) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:1080/proxy/: ... (200; 12.218291ms)
Dec 22 14:24:48.758: INFO: (17) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t/proxy/: test (200; 12.866462ms)
Dec 22 14:24:48.759: INFO: (17) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:162/proxy/: bar (200; 13.240924ms)
Dec 22 14:24:48.759: INFO: (17) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:443/proxy/: test<... (200; 9.094852ms)
Dec 22 14:24:48.776: INFO: (18) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:160/proxy/: foo (200; 9.821219ms)
Dec 22 14:24:48.777: INFO: (18) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:443/proxy/: ... (200; 11.289802ms)
Dec 22 14:24:48.778: INFO: (18) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t/proxy/: test (200; 11.486233ms)
Dec 22 14:24:48.778: INFO: (18) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:160/proxy/: foo (200; 12.18053ms)
Dec 22 14:24:48.778: INFO: (18) /api/v1/namespaces/proxy-4337/services/https:proxy-service-wxzpr:tlsportname2/proxy/: tls qux (200; 12.504507ms)
Dec 22 14:24:48.781: INFO: (18) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:462/proxy/: tls qux (200; 14.875925ms)
Dec 22 14:24:48.781: INFO: (18) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:162/proxy/: bar (200; 14.659907ms)
Dec 22 14:24:48.781: INFO: (18) /api/v1/namespaces/proxy-4337/services/proxy-service-wxzpr:portname1/proxy/: foo (200; 14.931763ms)
Dec 22 14:24:48.782: INFO: (18) /api/v1/namespaces/proxy-4337/services/http:proxy-service-wxzpr:portname1/proxy/: foo (200; 16.116903ms)
Dec 22 14:24:48.782: INFO: (18) /api/v1/namespaces/proxy-4337/services/http:proxy-service-wxzpr:portname2/proxy/: bar (200; 16.17517ms)
Dec 22 14:24:48.782: INFO: (18) /api/v1/namespaces/proxy-4337/services/https:proxy-service-wxzpr:tlsportname1/proxy/: tls baz (200; 16.243296ms)
Dec 22 14:24:48.782: INFO: (18) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:162/proxy/: bar (200; 16.119951ms)
Dec 22 14:24:48.782: INFO: (18) /api/v1/namespaces/proxy-4337/services/proxy-service-wxzpr:portname2/proxy/: bar (200; 16.121831ms)
Dec 22 14:24:48.788: INFO: (19) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:443/proxy/: ... (200; 5.661822ms)
Dec 22 14:24:48.788: INFO: (19) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:460/proxy/: tls baz (200; 5.667675ms)
Dec 22 14:24:48.791: INFO: (19) /api/v1/namespaces/proxy-4337/services/proxy-service-wxzpr:portname1/proxy/: foo (200; 8.43059ms)
Dec 22 14:24:48.791: INFO: (19) /api/v1/namespaces/proxy-4337/pods/https:proxy-service-wxzpr-pmq8t:462/proxy/: tls qux (200; 8.693371ms)
Dec 22 14:24:48.792: INFO: (19) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t/proxy/: test (200; 9.526757ms)
Dec 22 14:24:48.792: INFO: (19) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:162/proxy/: bar (200; 9.530441ms)
Dec 22 14:24:48.792: INFO: (19) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:160/proxy/: foo (200; 9.585261ms)
Dec 22 14:24:48.792: INFO: (19) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:160/proxy/: foo (200; 9.721097ms)
Dec 22 14:24:48.792: INFO: (19) /api/v1/namespaces/proxy-4337/pods/http:proxy-service-wxzpr-pmq8t:162/proxy/: bar (200; 9.913788ms)
Dec 22 14:24:48.793: INFO: (19) /api/v1/namespaces/proxy-4337/services/proxy-service-wxzpr:portname2/proxy/: bar (200; 10.407631ms)
Dec 22 14:24:48.793: INFO: (19) /api/v1/namespaces/proxy-4337/pods/proxy-service-wxzpr-pmq8t:1080/proxy/: test<... (200; 10.44738ms)
Dec 22 14:24:48.795: INFO: (19) /api/v1/namespaces/proxy-4337/services/http:proxy-service-wxzpr:portname1/proxy/: foo (200; 12.72885ms)
Dec 22 14:24:48.795: INFO: (19) /api/v1/namespaces/proxy-4337/services/https:proxy-service-wxzpr:tlsportname2/proxy/: tls qux (200; 12.842288ms)
Dec 22 14:24:48.796: INFO: (19) /api/v1/namespaces/proxy-4337/services/http:proxy-service-wxzpr:portname2/proxy/: bar (200; 13.084971ms)
Dec 22 14:24:48.796: INFO: (19) /api/v1/namespaces/proxy-4337/services/https:proxy-service-wxzpr:tlsportname1/proxy/: tls baz (200; 13.342605ms)
STEP: deleting ReplicationController proxy-service-wxzpr in namespace proxy-4337, will wait for the garbage collector to delete the pods
Dec 22 14:24:48.870: INFO: Deleting ReplicationController proxy-service-wxzpr took: 21.633408ms
Dec 22 14:24:49.170: INFO: Terminating ReplicationController proxy-service-wxzpr pods took: 300.36932ms
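"Will wait for the garbage collector" means the controller object is deleted with background propagation: the ReplicationController disappears immediately and its pods are reaped asynchronously by the GC, hence the separate ~300ms pod-termination step. With a recent kubectl (the explicit propagation values postdate the v1.15 client shown here) the equivalent would be:

    kubectl delete rc proxy-service-wxzpr -n proxy-4337 --cascade=background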
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:24:55.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4337" for this suite.
Dec 22 14:25:01.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:25:01.564: INFO: namespace proxy-4337 deletion completed in 6.182129003s

• [SLOW TEST:29.502 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:25:01.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 22 14:25:01.656: INFO: Creating deployment "nginx-deployment"
Dec 22 14:25:01.661: INFO: Waiting for observed generation 1
Dec 22 14:25:04.297: INFO: Waiting for all required pods to come up
Dec 22 14:25:05.209: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Dec 22 14:25:33.866: INFO: Waiting for deployment "nginx-deployment" to complete
Dec 22 14:25:33.875: INFO: Updating deployment "nginx-deployment" with a non-existent image
Dec 22 14:25:33.888: INFO: Updating deployment nginx-deployment
Dec 22 14:25:33.888: INFO: Waiting for observed generation 2
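Per the ReplicaSet dump later in this test, the "non-existent image" is literally nginx:404, so the update amounts to:

    kubectl set image deployment/nginx-deployment nginx=nginx:404 -n deployment-6117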
Dec 22 14:25:43.806: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Dec 22 14:25:45.805: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Dec 22 14:25:46.092: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 22 14:25:48.587: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Dec 22 14:25:48.587: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Dec 22 14:25:49.330: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 22 14:25:50.210: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Dec 22 14:25:50.210: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Dec 22 14:25:50.223: INFO: Updating deployment nginx-deployment
Dec 22 14:25:50.223: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Dec 22 14:25:50.915: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Dec 22 14:25:51.804: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
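The two expected sizes fall out of proportional scaling arithmetic. Scaling from 10 to 30 with maxSurge=3 caps the deployment at 30 + 3 = 33 pods; the two ReplicaSets currently hold 8 + 5 = 13, leaving 20 extra replicas to distribute in proportion to current size (largest-remainder rounding):

    old RS: 8 + (20 * 8/13 ~ 12) = 20
    new RS: 5 + (20 * 5/13 ~  8) = 13

which is exactly what the two verifications above check. The scale-up itself is equivalent to:

    kubectl scale deployment/nginx-deployment --replicas=30 -n deployment-6117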
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 22 14:26:03.784: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-6117,SelfLink:/apis/apps/v1/namespaces/deployment-6117/deployments/nginx-deployment,UID:55b57ddd-4c4d-482e-b3f0-6ee9a862dcec,ResourceVersion:17649971,Generation:3,CreationTimestamp:2019-12-22 14:25:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:21,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2019-12-22 14:25:50 +0000 UTC 2019-12-22 14:25:50 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-22 14:25:57 +0000 UTC 2019-12-22 14:25:01 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Dec 22 14:26:08.166: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-6117,SelfLink:/apis/apps/v1/namespaces/deployment-6117/replicasets/nginx-deployment-55fb7cb77f,UID:94fde7a4-3ec3-4213-b7a9-10dad7758d2c,ResourceVersion:17649966,Generation:3,CreationTimestamp:2019-12-22 14:25:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 55b57ddd-4c4d-482e-b3f0-6ee9a862dcec 0xc00288cf37 0xc00288cf38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 22 14:26:08.166: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Dec 22 14:26:08.166: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-6117,SelfLink:/apis/apps/v1/namespaces/deployment-6117/replicasets/nginx-deployment-7b8c6f4498,UID:7c4c3aae-0773-4e29-9655-4418c7050e64,ResourceVersion:17649985,Generation:3,CreationTimestamp:2019-12-22 14:25:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 55b57ddd-4c4d-482e-b3f0-6ee9a862dcec 0xc00288d017 0xc00288d018}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
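The old ReplicaSet (revision 1, docker.io/library/nginx:1.14-alpine, scaled to 20 with 8 ready) and the new one (revision 2, nginx:404, scaled to 13 with 0 ready) both match name=nginx, so what keeps their pods apart is the pod-template-hash label the Deployment controller injects into each ReplicaSet's selector and template. A small illustrative sketch using the apimachinery labels package, with the hash values copied from the dumps:

```go
// Demonstrates why revision-1 pods never match the revision-2 ReplicaSet's
// selector even though both carry name=nginx.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
)

func main() {
	// Selector of the old ReplicaSet, as dumped above.
	oldSelector := labels.SelectorFromSet(labels.Set{
		"name":              "nginx",
		"pod-template-hash": "7b8c6f4498",
	})

	newPodLabels := labels.Set{"name": "nginx", "pod-template-hash": "55fb7cb77f"}
	oldPodLabels := labels.Set{"name": "nginx", "pod-template-hash": "7b8c6f4498"}

	fmt.Println(oldSelector.Matches(newPodLabels)) // false: different revision
	fmt.Println(oldSelector.Matches(oldPodLabels)) // true
}
```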
Dec 22 14:26:10.880: INFO: Pod "nginx-deployment-55fb7cb77f-5qzf9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5qzf9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6117,SelfLink:/api/v1/namespaces/deployment-6117/pods/nginx-deployment-55fb7cb77f-5qzf9,UID:4f4513d6-1542-40af-90ac-3ee417d98e16,ResourceVersion:17649867,Generation:0,CreationTimestamp:2019-12-22 14:25:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 94fde7a4-3ec3-4213-b7a9-10dad7758d2c 0xc000e6e567 0xc000e6e568}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bs5qk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bs5qk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-bs5qk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e6e5d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e6e5f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:40 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-22 14:25:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
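Every "is not available" dump that follows tells the same story as this one: PodScheduled and Initialized are True, but Ready is False with reason ContainersNotReady, and the single container sits in ContainerStateWaiting (ContainerCreating) because the nginx:404 tag can never be pulled, so the pod stays in phase Pending. A minimal sketch of the Ready-condition check using the k8s.io/api types these dumps are printed from; isPodReady is an illustrative helper, not the framework's own function:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's Ready condition is True -- the
// condition the "is not available" log lines above are keyed on.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Reconstructed from the dump above: scheduled, but containers not ready.
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Phase: corev1.PodPending,
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodScheduled, Status: corev1.ConditionTrue},
				{Type: corev1.PodReady, Status: corev1.ConditionFalse,
					Reason: "ContainersNotReady"},
			},
		},
	}
	fmt.Println("ready:", isPodReady(pod)) // false
}
```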
Dec 22 14:26:10.880: INFO: Pod "nginx-deployment-55fb7cb77f-9m685" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9m685,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6117,SelfLink:/api/v1/namespaces/deployment-6117/pods/nginx-deployment-55fb7cb77f-9m685,UID:1c74fbac-18de-485b-a7b3-02e78d13a90a,ResourceVersion:17649952,Generation:0,CreationTimestamp:2019-12-22 14:25:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 94fde7a4-3ec3-4213-b7a9-10dad7758d2c 0xc000e6e6d7 0xc000e6e6d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bs5qk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bs5qk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-bs5qk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e6e740} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e6e760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 22 14:26:10.881: INFO: Pod "nginx-deployment-55fb7cb77f-9xf22" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9xf22,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6117,SelfLink:/api/v1/namespaces/deployment-6117/pods/nginx-deployment-55fb7cb77f-9xf22,UID:8beee4b6-65b3-4fd1-a40e-e5118c54ca05,ResourceVersion:17649963,Generation:0,CreationTimestamp:2019-12-22 14:25:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 94fde7a4-3ec3-4213-b7a9-10dad7758d2c 0xc000e6e7e7 0xc000e6e7e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bs5qk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bs5qk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-bs5qk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e6e860} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e6e880}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 22 14:26:10.881: INFO: Pod "nginx-deployment-55fb7cb77f-bksg4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bksg4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6117,SelfLink:/api/v1/namespaces/deployment-6117/pods/nginx-deployment-55fb7cb77f-bksg4,UID:5ed41c7d-32ab-4c7c-bd3c-48bf5f397e8f,ResourceVersion:17649948,Generation:0,CreationTimestamp:2019-12-22 14:25:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 94fde7a4-3ec3-4213-b7a9-10dad7758d2c 0xc000e6e907 0xc000e6e908}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bs5qk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bs5qk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-bs5qk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e6e980} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e6e9a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 22 14:26:10.881: INFO: Pod "nginx-deployment-55fb7cb77f-dkj5s" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-dkj5s,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6117,SelfLink:/api/v1/namespaces/deployment-6117/pods/nginx-deployment-55fb7cb77f-dkj5s,UID:a75dc33d-9469-4001-ac1e-be37bfbb4e4c,ResourceVersion:17649889,Generation:0,CreationTimestamp:2019-12-22 14:25:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 94fde7a4-3ec3-4213-b7a9-10dad7758d2c 0xc000e6ea27 0xc000e6ea28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bs5qk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bs5qk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-bs5qk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e6eaa0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e6eac0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-22 14:25:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 22 14:26:10.882: INFO: Pod "nginx-deployment-55fb7cb77f-gkp2p" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gkp2p,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6117,SelfLink:/api/v1/namespaces/deployment-6117/pods/nginx-deployment-55fb7cb77f-gkp2p,UID:561e3f10-d0d4-49f7-b149-47fefe3fd149,ResourceVersion:17649931,Generation:0,CreationTimestamp:2019-12-22 14:25:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 94fde7a4-3ec3-4213-b7a9-10dad7758d2c 0xc000e6eb97 0xc000e6eb98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bs5qk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bs5qk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-bs5qk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e6ec00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e6ec20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:50 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 22 14:26:10.882: INFO: Pod "nginx-deployment-55fb7cb77f-lf5vr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lf5vr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6117,SelfLink:/api/v1/namespaces/deployment-6117/pods/nginx-deployment-55fb7cb77f-lf5vr,UID:be4f2d31-8de0-4db4-9703-46efd08566b9,ResourceVersion:17649898,Generation:0,CreationTimestamp:2019-12-22 14:25:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 94fde7a4-3ec3-4213-b7a9-10dad7758d2c 0xc000e6eca7 0xc000e6eca8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bs5qk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bs5qk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-bs5qk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e6ed20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e6ed40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-22 14:25:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 22 14:26:10.882: INFO: Pod "nginx-deployment-55fb7cb77f-n7dsk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-n7dsk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6117,SelfLink:/api/v1/namespaces/deployment-6117/pods/nginx-deployment-55fb7cb77f-n7dsk,UID:738d52a4-c2d4-4e49-bb71-2d4f80ebe196,ResourceVersion:17649929,Generation:0,CreationTimestamp:2019-12-22 14:25:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 94fde7a4-3ec3-4213-b7a9-10dad7758d2c 0xc000e6ee27 0xc000e6ee28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bs5qk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bs5qk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-bs5qk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e6eea0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e6eec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:50 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 22 14:26:10.882: INFO: Pod "nginx-deployment-55fb7cb77f-nwtqd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nwtqd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6117,SelfLink:/api/v1/namespaces/deployment-6117/pods/nginx-deployment-55fb7cb77f-nwtqd,UID:4d77f2bc-5526-4305-b5a5-41e9a775e7a9,ResourceVersion:17649980,Generation:0,CreationTimestamp:2019-12-22 14:25:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 94fde7a4-3ec3-4213-b7a9-10dad7758d2c 0xc000e6ef47 0xc000e6ef48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bs5qk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bs5qk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-bs5qk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e6efb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e6efd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-22 14:25:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 22 14:26:10.883: INFO: Pod "nginx-deployment-55fb7cb77f-qfnpj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-qfnpj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6117,SelfLink:/api/v1/namespaces/deployment-6117/pods/nginx-deployment-55fb7cb77f-qfnpj,UID:06d19fe2-070a-4671-9550-939068c65626,ResourceVersion:17649895,Generation:0,CreationTimestamp:2019-12-22 14:25:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 94fde7a4-3ec3-4213-b7a9-10dad7758d2c 0xc000e6f0c7 0xc000e6f0c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bs5qk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bs5qk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-bs5qk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e6f160} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e6f180}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-22 14:25:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 22 14:26:10.883: INFO: Pod "nginx-deployment-55fb7cb77f-vcnmn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vcnmn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6117,SelfLink:/api/v1/namespaces/deployment-6117/pods/nginx-deployment-55fb7cb77f-vcnmn,UID:a700fb81-a2a1-4093-adc7-aeb334c2e323,ResourceVersion:17649955,Generation:0,CreationTimestamp:2019-12-22 14:25:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 94fde7a4-3ec3-4213-b7a9-10dad7758d2c 0xc000e6f257 0xc000e6f258}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bs5qk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bs5qk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-bs5qk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e6f2d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e6f2f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 22 14:26:10.883: INFO: Pod "nginx-deployment-55fb7cb77f-xnsfj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xnsfj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6117,SelfLink:/api/v1/namespaces/deployment-6117/pods/nginx-deployment-55fb7cb77f-xnsfj,UID:db87c677-caa0-4279-923f-d16af2800b40,ResourceVersion:17649904,Generation:0,CreationTimestamp:2019-12-22 14:25:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 94fde7a4-3ec3-4213-b7a9-10dad7758d2c 0xc000e6f387 0xc000e6f388}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bs5qk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bs5qk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-bs5qk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e6f430} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e6f450}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-22 14:25:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 22 14:26:10.883: INFO: Pod "nginx-deployment-55fb7cb77f-zwdhf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zwdhf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6117,SelfLink:/api/v1/namespaces/deployment-6117/pods/nginx-deployment-55fb7cb77f-zwdhf,UID:f06e9334-1772-4af0-9058-b48f94e6f143,ResourceVersion:17649961,Generation:0,CreationTimestamp:2019-12-22 14:25:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 94fde7a4-3ec3-4213-b7a9-10dad7758d2c 0xc000e6f527 0xc000e6f528}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bs5qk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bs5qk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-bs5qk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e6f5a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e6f5c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 22 14:26:10.884: INFO: Pod "nginx-deployment-7b8c6f4498-2lnth" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2lnth,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6117,SelfLink:/api/v1/namespaces/deployment-6117/pods/nginx-deployment-7b8c6f4498-2lnth,UID:f3d4782f-204b-4def-acc2-0f72ccf7e21f,ResourceVersion:17649964,Generation:0,CreationTimestamp:2019-12-22 14:25:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7c4c3aae-0773-4e29-9655-4418c7050e64 0xc000e6f647 0xc000e6f648}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bs5qk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bs5qk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bs5qk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e6f6f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e6f710}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-22 14:25:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 22 14:26:10.884: INFO: Pod "nginx-deployment-7b8c6f4498-6pdsd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6pdsd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6117,SelfLink:/api/v1/namespaces/deployment-6117/pods/nginx-deployment-7b8c6f4498-6pdsd,UID:ee35b52f-a472-48a6-89b0-c64d1a0730f2,ResourceVersion:17649992,Generation:0,CreationTimestamp:2019-12-22 14:25:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7c4c3aae-0773-4e29-9655-4418c7050e64 0xc000e6f7e7 0xc000e6f7e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bs5qk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bs5qk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bs5qk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e6f860} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e6f880}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-22 14:25:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 22 14:26:10.884: INFO: Pod "nginx-deployment-7b8c6f4498-6plv6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6plv6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6117,SelfLink:/api/v1/namespaces/deployment-6117/pods/nginx-deployment-7b8c6f4498-6plv6,UID:35acbb2c-d2ba-4528-a948-769c6e5af5c7,ResourceVersion:17649954,Generation:0,CreationTimestamp:2019-12-22 14:25:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7c4c3aae-0773-4e29-9655-4418c7050e64 0xc000e6f947 0xc000e6f948}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bs5qk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bs5qk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bs5qk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e6f9b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e6f9d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 22 14:26:10.885: INFO: Pod "nginx-deployment-7b8c6f4498-7jncn" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7jncn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6117,SelfLink:/api/v1/namespaces/deployment-6117/pods/nginx-deployment-7b8c6f4498-7jncn,UID:bf2737a6-b27e-4565-8cc7-59ab3430116f,ResourceVersion:17649793,Generation:0,CreationTimestamp:2019-12-22 14:25:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7c4c3aae-0773-4e29-9655-4418c7050e64 0xc000e6fa67 0xc000e6fa68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bs5qk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bs5qk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bs5qk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e6fad0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e6faf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:01 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:26 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:26 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:01 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2019-12-22 14:25:01 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-22 14:25:23 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://33d0ea0a2fad68bd63fa12f59af6be7ca9b687dc32c9821cfadc8d0b0a94f2f7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
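This is the first dump labelled "is available", and it shows what the stalled pods lack: phase Running, a PodIP (10.32.0.4), Ready=True since 14:25:26, and a container in ContainerStateRunning whose image resolved to a real digest (nginx@sha256:485b...). For a Deployment, "available" means Ready for at least spec.minReadySeconds, which is 0 here per the Deployment dump at the top, so Ready alone suffices. The sketch below is an illustrative simplification of that rule (isPodAvailable is not the controller's actual function):

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// isPodAvailable approximates the deployment controller's availability rule:
// the pod must be Ready, and must have been Ready for at least minReadySeconds.
func isPodAvailable(pod *corev1.Pod, minReadySeconds int32, now time.Time) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type != corev1.PodReady || c.Status != corev1.ConditionTrue {
			continue
		}
		readyFor := now.Sub(c.LastTransitionTime.Time)
		return readyFor >= time.Duration(minReadySeconds)*time.Second
	}
	return false
}

func main() {
	// A pod that, like the one dumped above, became Ready some time ago.
	readySince := metav1.NewTime(time.Now().Add(-45 * time.Second))
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Phase: corev1.PodRunning,
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionTrue,
					LastTransitionTime: readySince},
			},
		},
	}
	fmt.Println(isPodAvailable(pod, 0, time.Now())) // true: minReadySeconds is 0
}
```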
Dec 22 14:26:10.885: INFO: Pod "nginx-deployment-7b8c6f4498-cl2vc" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cl2vc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6117,SelfLink:/api/v1/namespaces/deployment-6117/pods/nginx-deployment-7b8c6f4498-cl2vc,UID:a12bc328-422e-46fd-a32a-32e7eaf490ff,ResourceVersion:17649825,Generation:0,CreationTimestamp:2019-12-22 14:25:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7c4c3aae-0773-4e29-9655-4418c7050e64 0xc000e6fbc7 0xc000e6fbc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bs5qk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bs5qk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bs5qk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e6fc40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e6fc60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:02 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:31 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:01 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2019-12-22 14:25:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-22 14:25:31 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f185887789bb12452729bc50d536903b940b42967a32bd1e374b83d0380c0430}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 22 14:26:10.885: INFO: Pod "nginx-deployment-7b8c6f4498-dkw9m" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dkw9m,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6117,SelfLink:/api/v1/namespaces/deployment-6117/pods/nginx-deployment-7b8c6f4498-dkw9m,UID:ecc40eed-a1f8-4265-9999-180d0695106a,ResourceVersion:17649951,Generation:0,CreationTimestamp:2019-12-22 14:25:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7c4c3aae-0773-4e29-9655-4418c7050e64 0xc000e6fd37 0xc000e6fd38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bs5qk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bs5qk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bs5qk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e6fdb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e6fdd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 22 14:26:10.886: INFO: Pod "nginx-deployment-7b8c6f4498-dt448" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dt448,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6117,SelfLink:/api/v1/namespaces/deployment-6117/pods/nginx-deployment-7b8c6f4498-dt448,UID:fc1a0a75-fd67-4a68-b3e2-261c62c1d3b0,ResourceVersion:17649930,Generation:0,CreationTimestamp:2019-12-22 14:25:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7c4c3aae-0773-4e29-9655-4418c7050e64 0xc000e6fe57 0xc000e6fe58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bs5qk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bs5qk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bs5qk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e6fec0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e6fee0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:50 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 22 14:26:10.886: INFO: Pod "nginx-deployment-7b8c6f4498-f4jq7" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-f4jq7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6117,SelfLink:/api/v1/namespaces/deployment-6117/pods/nginx-deployment-7b8c6f4498-f4jq7,UID:1cfd4549-d0f9-4056-8f8e-7ea90b4c83c5,ResourceVersion:17649819,Generation:0,CreationTimestamp:2019-12-22 14:25:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7c4c3aae-0773-4e29-9655-4418c7050e64 0xc000e6ff67 0xc000e6ff68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bs5qk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bs5qk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bs5qk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e6fff0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027dc010}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:02 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:31 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:01 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2019-12-22 14:25:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-22 14:25:31 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://397f56df06bb89dba71be8428aec26329d240f5b5e167ce0d1abed5056cbee0d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 22 14:26:10.886: INFO: Pod "nginx-deployment-7b8c6f4498-gnqvn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gnqvn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6117,SelfLink:/api/v1/namespaces/deployment-6117/pods/nginx-deployment-7b8c6f4498-gnqvn,UID:e135f519-0f97-4767-be38-c97c0b181a82,ResourceVersion:17649976,Generation:0,CreationTimestamp:2019-12-22 14:25:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7c4c3aae-0773-4e29-9655-4418c7050e64 0xc0027dc0e7 0xc0027dc0e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bs5qk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bs5qk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bs5qk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027dc160} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027dc180}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-22 14:25:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 22 14:26:10.886: INFO: Pod "nginx-deployment-7b8c6f4498-k8q7j" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-k8q7j,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6117,SelfLink:/api/v1/namespaces/deployment-6117/pods/nginx-deployment-7b8c6f4498-k8q7j,UID:566123f1-eef0-49a5-9706-b9e787bb73a4,ResourceVersion:17649962,Generation:0,CreationTimestamp:2019-12-22 14:25:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7c4c3aae-0773-4e29-9655-4418c7050e64 0xc0027dc247 0xc0027dc248}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bs5qk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bs5qk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bs5qk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027dc2b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027dc2d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 22 14:26:10.887: INFO: Pod "nginx-deployment-7b8c6f4498-kv82k" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kv82k,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6117,SelfLink:/api/v1/namespaces/deployment-6117/pods/nginx-deployment-7b8c6f4498-kv82k,UID:7d4e7625-a719-4bc9-bc66-40c5cea3b8eb,ResourceVersion:17649804,Generation:0,CreationTimestamp:2019-12-22 14:25:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7c4c3aae-0773-4e29-9655-4418c7050e64 0xc0027dc357 0xc0027dc358}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bs5qk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bs5qk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bs5qk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027dc3c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027dc3e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:02 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:28 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:28 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:01 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2019-12-22 14:25:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-22 14:25:26 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://08f191e56fd5301510d48e73a8dd3170a67aeb8979e6b48aff9721c9ff0121cc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 22 14:26:10.887: INFO: Pod "nginx-deployment-7b8c6f4498-mwpqz" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mwpqz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6117,SelfLink:/api/v1/namespaces/deployment-6117/pods/nginx-deployment-7b8c6f4498-mwpqz,UID:26462a95-960c-4005-a757-cf2876bc9c44,ResourceVersion:17649798,Generation:0,CreationTimestamp:2019-12-22 14:25:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7c4c3aae-0773-4e29-9655-4418c7050e64 0xc0027dc4b7 0xc0027dc4b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bs5qk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bs5qk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bs5qk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027dc520} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027dc540}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:02 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:28 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:28 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:01 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2019-12-22 14:25:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-22 14:25:26 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0188d56e16776f291cece2d8430cc2ae4a37b67065be78412e25ab5622cca941}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 22 14:26:10.887: INFO: Pod "nginx-deployment-7b8c6f4498-n8cxc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-n8cxc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6117,SelfLink:/api/v1/namespaces/deployment-6117/pods/nginx-deployment-7b8c6f4498-n8cxc,UID:2c589312-9241-4ecd-a412-80fb3ed26a4a,ResourceVersion:17649947,Generation:0,CreationTimestamp:2019-12-22 14:25:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7c4c3aae-0773-4e29-9655-4418c7050e64 0xc0027dc617 0xc0027dc618}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bs5qk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bs5qk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bs5qk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027dc690} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027dc6b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 22 14:26:10.887: INFO: Pod "nginx-deployment-7b8c6f4498-ncdgk" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ncdgk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6117,SelfLink:/api/v1/namespaces/deployment-6117/pods/nginx-deployment-7b8c6f4498-ncdgk,UID:603f13da-2f34-4ce7-8140-1139b9e91152,ResourceVersion:17649828,Generation:0,CreationTimestamp:2019-12-22 14:25:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7c4c3aae-0773-4e29-9655-4418c7050e64 0xc0027dc737 0xc0027dc738}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bs5qk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bs5qk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bs5qk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027dc7b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027dc7d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:01 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:31 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:01 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2019-12-22 14:25:01 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-22 14:25:31 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9186c0dd8d3ab69894a9adeb295a7bb9b6e035437c993daa81d2c6271deebd87}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 22 14:26:10.888: INFO: Pod "nginx-deployment-7b8c6f4498-r7d5g" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-r7d5g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6117,SelfLink:/api/v1/namespaces/deployment-6117/pods/nginx-deployment-7b8c6f4498-r7d5g,UID:decadadf-a7c3-4942-a149-a702ee4b25e0,ResourceVersion:17649932,Generation:0,CreationTimestamp:2019-12-22 14:25:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7c4c3aae-0773-4e29-9655-4418c7050e64 0xc0027dc8b7 0xc0027dc8b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bs5qk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bs5qk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bs5qk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027dc930} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027dc950}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:50 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 22 14:26:10.888: INFO: Pod "nginx-deployment-7b8c6f4498-t845j" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-t845j,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6117,SelfLink:/api/v1/namespaces/deployment-6117/pods/nginx-deployment-7b8c6f4498-t845j,UID:626973d9-a057-43f8-a7a6-102ce61484fc,ResourceVersion:17649801,Generation:0,CreationTimestamp:2019-12-22 14:25:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7c4c3aae-0773-4e29-9655-4418c7050e64 0xc0027dc9d7 0xc0027dc9d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bs5qk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bs5qk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bs5qk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027dca40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027dca80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:02 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:28 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:28 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:01 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2019-12-22 14:25:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-22 14:25:26 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2b5d8aa5cb6ca891edcf699cb9cbb3fb3a62a5cd75519a37a236301bfd6b010d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 22 14:26:10.888: INFO: Pod "nginx-deployment-7b8c6f4498-thsnh" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-thsnh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6117,SelfLink:/api/v1/namespaces/deployment-6117/pods/nginx-deployment-7b8c6f4498-thsnh,UID:a6f79509-a2cb-4217-ae9d-3a1e87031a3d,ResourceVersion:17649822,Generation:0,CreationTimestamp:2019-12-22 14:25:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7c4c3aae-0773-4e29-9655-4418c7050e64 0xc0027dcb57 0xc0027dcb58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bs5qk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bs5qk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bs5qk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027dcbd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027dcc00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:02 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:31 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:01 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-22 14:25:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-22 14:25:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d1847f0c0609f8631e1a30b0df42565830bc633f560b5b164322ef3e0501d772}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 22 14:26:10.889: INFO: Pod "nginx-deployment-7b8c6f4498-wbpxd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wbpxd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6117,SelfLink:/api/v1/namespaces/deployment-6117/pods/nginx-deployment-7b8c6f4498-wbpxd,UID:ccd0df09-5805-40c0-8ad7-588efd9b687f,ResourceVersion:17649933,Generation:0,CreationTimestamp:2019-12-22 14:25:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7c4c3aae-0773-4e29-9655-4418c7050e64 0xc0027dccd7 0xc0027dccd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bs5qk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bs5qk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bs5qk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027dcd50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027dcd70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:50 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 22 14:26:10.889: INFO: Pod "nginx-deployment-7b8c6f4498-ws78m" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ws78m,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6117,SelfLink:/api/v1/namespaces/deployment-6117/pods/nginx-deployment-7b8c6f4498-ws78m,UID:5377f0af-7dbe-45f2-94a8-c96fa93ffcdc,ResourceVersion:17649945,Generation:0,CreationTimestamp:2019-12-22 14:25:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7c4c3aae-0773-4e29-9655-4418c7050e64 0xc0027dcdf7 0xc0027dcdf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bs5qk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bs5qk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bs5qk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027dce60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027dce80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:50 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 22 14:26:10.889: INFO: Pod "nginx-deployment-7b8c6f4498-z6mrg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-z6mrg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6117,SelfLink:/api/v1/namespaces/deployment-6117/pods/nginx-deployment-7b8c6f4498-z6mrg,UID:20d5ac95-429b-44ad-83a1-d9b6fe9727e4,ResourceVersion:17649949,Generation:0,CreationTimestamp:2019-12-22 14:25:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7c4c3aae-0773-4e29-9655-4418c7050e64 0xc0027dcf07 0xc0027dcf08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bs5qk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bs5qk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bs5qk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027dcf80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027dcfa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 14:25:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
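Note on the dumps above: "available" means the pod's Ready condition is True (for at least minReadySeconds, which defaults to 0); pods still Pending or ContainerCreating are reported as not available. A quick way to see the same availability split outside the test harness, as a minimal sketch reusing this run's namespace and pod label (adjust both for your own cluster):

  kubectl get pods -n deployment-6117 -l name=nginx -o wide
  kubectl get deployment nginx-deployment -n deployment-6117 \
    -o jsonpath='{.status.availableReplicas}/{.status.replicas}'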
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:26:10.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6117" for this suite.
Dec 22 14:27:35.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:27:35.700: INFO: namespace deployment-6117 deletion completed in 1m22.719627337s

• [SLOW TEST:154.135 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
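The proportional-scaling behavior verified above (scaling a Deployment mid-rollout splits the added replicas across the old and new ReplicaSets in proportion to their current sizes) can be reproduced by hand. A minimal sketch, assuming a recent kubectl that supports --replicas on create; names and counts are illustrative, not taken from this run:

  kubectl create deployment nginx-deployment --image=docker.io/library/nginx:1.14-alpine --replicas=10
  kubectl set image deployment/nginx-deployment nginx=docker.io/library/nginx:1.15-alpine  # start a rolling update
  kubectl scale deployment/nginx-deployment --replicas=30                                  # scale while the rollout is in flight
  kubectl get rs -l app=nginx-deployment                                                   # both ReplicaSets grow proportionally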
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:27:35.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Dec 22 14:27:35.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5361'
Dec 22 14:27:39.798: INFO: stderr: ""
Dec 22 14:27:39.798: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Dec 22 14:27:41.732: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:27:41.732: INFO: Found 0 / 1
Dec 22 14:27:41.984: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:27:41.984: INFO: Found 0 / 1
Dec 22 14:27:42.806: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:27:42.806: INFO: Found 0 / 1
Dec 22 14:27:43.862: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:27:43.862: INFO: Found 0 / 1
Dec 22 14:27:44.806: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:27:44.806: INFO: Found 0 / 1
Dec 22 14:27:45.815: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:27:45.815: INFO: Found 0 / 1
Dec 22 14:27:46.805: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:27:46.805: INFO: Found 0 / 1
Dec 22 14:27:47.811: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:27:47.811: INFO: Found 0 / 1
Dec 22 14:27:48.805: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:27:48.805: INFO: Found 0 / 1
Dec 22 14:27:49.808: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:27:49.808: INFO: Found 0 / 1
Dec 22 14:27:50.811: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:27:50.811: INFO: Found 0 / 1
Dec 22 14:27:51.809: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:27:51.809: INFO: Found 0 / 1
Dec 22 14:27:52.807: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:27:52.807: INFO: Found 0 / 1
Dec 22 14:27:53.857: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:27:53.857: INFO: Found 0 / 1
Dec 22 14:27:54.807: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:27:54.807: INFO: Found 0 / 1
Dec 22 14:27:55.903: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:27:55.903: INFO: Found 1 / 1
Dec 22 14:27:55.903: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 22 14:27:55.907: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 14:27:55.907: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Dec 22 14:27:55.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-k9fjm redis-master --namespace=kubectl-5361'
Dec 22 14:27:56.151: INFO: stderr: ""
Dec 22 14:27:56.151: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 22 Dec 14:27:54.535 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 Dec 14:27:54.543 # Server started, Redis version 3.2.12\n1:M 22 Dec 14:27:54.544 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 Dec 14:27:54.544 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Dec 22 14:27:56.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-k9fjm redis-master --namespace=kubectl-5361 --tail=1'
Dec 22 14:27:56.269: INFO: stderr: ""
Dec 22 14:27:56.269: INFO: stdout: "1:M 22 Dec 14:27:54.544 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Dec 22 14:27:56.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-k9fjm redis-master --namespace=kubectl-5361 --limit-bytes=1'
Dec 22 14:27:56.896: INFO: stderr: ""
Dec 22 14:27:56.896: INFO: stdout: " "
STEP: exposing timestamps
Dec 22 14:27:56.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-k9fjm redis-master --namespace=kubectl-5361 --tail=1 --timestamps'
Dec 22 14:27:57.268: INFO: stderr: ""
Dec 22 14:27:57.268: INFO: stdout: "2019-12-22T14:27:54.544877114Z 1:M 22 Dec 14:27:54.544 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Dec 22 14:27:59.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-k9fjm redis-master --namespace=kubectl-5361 --since=1s'
Dec 22 14:28:00.009: INFO: stderr: ""
Dec 22 14:28:00.009: INFO: stdout: ""
Dec 22 14:28:00.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-k9fjm redis-master --namespace=kubectl-5361 --since=24h'
Dec 22 14:28:00.806: INFO: stderr: ""
Dec 22 14:28:00.806: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 22 Dec 14:27:54.535 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 Dec 14:27:54.543 # Server started, Redis version 3.2.12\n1:M 22 Dec 14:27:54.544 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 Dec 14:27:54.544 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Dec 22 14:28:00.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5361'
Dec 22 14:28:00.950: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 22 14:28:00.950: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Dec 22 14:28:00.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-5361'
Dec 22 14:28:01.070: INFO: stderr: "No resources found.\n"
Dec 22 14:28:01.070: INFO: stdout: ""
Dec 22 14:28:01.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-5361 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 22 14:28:01.205: INFO: stderr: ""
Dec 22 14:28:01.205: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:28:01.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5361" for this suite.
Dec 22 14:28:25.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:28:25.333: INFO: namespace kubectl-5361 deletion completed in 24.123523738s

• [SLOW TEST:49.633 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
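For reference, the filtering steps above map one-to-one onto kubectl logs flags; the commands below are a condensed sketch of what the test ran (the pod and container names are from this run and will differ elsewhere):

  kubectl logs redis-master-k9fjm redis-master --tail=1               # last line only
  kubectl logs redis-master-k9fjm redis-master --limit-bytes=1        # truncate output to one byte
  kubectl logs redis-master-k9fjm redis-master --tail=1 --timestamps  # prefix lines with RFC3339 timestamps
  kubectl logs redis-master-k9fjm redis-master --since=1s             # only entries newer than one second
  kubectl logs redis-master-k9fjm redis-master --since=24h            # only entries newer than 24 hours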
SSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:28:25.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods changes
Dec 22 14:28:25.737: INFO: Pod name pod-release: Found 0 pods out of 1
Dec 22 14:28:30.744: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:28:30.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2846" for this suite.
Dec 22 14:28:39.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:28:39.444: INFO: namespace replication-controller-2846 deletion completed in 8.505274047s

• [SLOW TEST:14.111 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
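The "release" asserted above is orphaning: once a pod's labels stop matching the ReplicationController's selector, the controller drops its ownerReference and creates a replacement to restore its replica count. A minimal sketch of triggering the same thing by hand; the pod-name suffix and the label key/value are placeholders, since this run's RC spec is not shown in the log:

  kubectl label pod pod-release-abcde name=released --overwrite                 # make the pod stop matching the selector
  kubectl get pod pod-release-abcde -o jsonpath='{.metadata.ownerReferences}'  # now empty: the pod was released
  kubectl get rc pod-release                                                    # DESIRED/CURRENT show a replacement was created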
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:28:39.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 22 14:28:39.848: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Dec 22 14:28:40.087: INFO: Number of nodes with available pods: 0
Dec 22 14:28:40.087: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:28:44.132: INFO: Number of nodes with available pods: 0
Dec 22 14:28:44.132: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:28:45.114: INFO: Number of nodes with available pods: 0
Dec 22 14:28:45.114: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:28:46.099: INFO: Number of nodes with available pods: 0
Dec 22 14:28:46.099: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:28:47.583: INFO: Number of nodes with available pods: 0
Dec 22 14:28:47.583: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:28:48.098: INFO: Number of nodes with available pods: 0
Dec 22 14:28:48.098: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:28:49.255: INFO: Number of nodes with available pods: 0
Dec 22 14:28:49.255: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:28:50.099: INFO: Number of nodes with available pods: 0
Dec 22 14:28:50.099: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:28:55.506: INFO: Number of nodes with available pods: 0
Dec 22 14:28:55.506: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:28:57.981: INFO: Number of nodes with available pods: 0
Dec 22 14:28:57.981: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:28:58.415: INFO: Number of nodes with available pods: 0
Dec 22 14:28:58.415: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:28:59.113: INFO: Number of nodes with available pods: 0
Dec 22 14:28:59.113: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:29:00.261: INFO: Number of nodes with available pods: 1
Dec 22 14:29:00.261: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:29:01.122: INFO: Number of nodes with available pods: 1
Dec 22 14:29:01.122: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:29:02.140: INFO: Number of nodes with available pods: 1
Dec 22 14:29:02.140: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:29:03.104: INFO: Number of nodes with available pods: 1
Dec 22 14:29:03.104: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:29:04.141: INFO: Number of nodes with available pods: 1
Dec 22 14:29:04.141: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:29:05.104: INFO: Number of nodes with available pods: 2
Dec 22 14:29:05.104: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Dec 22 14:29:05.169: INFO: Wrong image for pod: daemon-set-6m44f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:05.169: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:07.478: INFO: Wrong image for pod: daemon-set-6m44f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:07.478: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:08.341: INFO: Wrong image for pod: daemon-set-6m44f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:08.341: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:09.334: INFO: Wrong image for pod: daemon-set-6m44f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:09.335: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:11.163: INFO: Wrong image for pod: daemon-set-6m44f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:11.163: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:11.742: INFO: Wrong image for pod: daemon-set-6m44f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:11.742: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:12.339: INFO: Wrong image for pod: daemon-set-6m44f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:12.339: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:13.368: INFO: Wrong image for pod: daemon-set-6m44f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:13.368: INFO: Pod daemon-set-6m44f is not available
Dec 22 14:29:13.368: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:14.347: INFO: Pod daemon-set-2lkmv is not available
Dec 22 14:29:14.347: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:16.036: INFO: Pod daemon-set-2lkmv is not available
Dec 22 14:29:16.036: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:16.727: INFO: Pod daemon-set-2lkmv is not available
Dec 22 14:29:16.727: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:18.748: INFO: Pod daemon-set-2lkmv is not available
Dec 22 14:29:18.748: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:19.379: INFO: Pod daemon-set-2lkmv is not available
Dec 22 14:29:19.379: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:20.338: INFO: Pod daemon-set-2lkmv is not available
Dec 22 14:29:20.338: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:21.487: INFO: Pod daemon-set-2lkmv is not available
Dec 22 14:29:21.487: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:23.752: INFO: Pod daemon-set-2lkmv is not available
Dec 22 14:29:23.752: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:25.745: INFO: Pod daemon-set-2lkmv is not available
Dec 22 14:29:25.745: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:26.340: INFO: Pod daemon-set-2lkmv is not available
Dec 22 14:29:26.340: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:27.630: INFO: Pod daemon-set-2lkmv is not available
Dec 22 14:29:27.630: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:28.340: INFO: Pod daemon-set-2lkmv is not available
Dec 22 14:29:28.340: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:29.380: INFO: Pod daemon-set-2lkmv is not available
Dec 22 14:29:29.380: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:30.339: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:31.340: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:32.338: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:33.340: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:34.340: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:35.343: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:36.342: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:37.372: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:37.372: INFO: Pod daemon-set-wvss9 is not available
Dec 22 14:29:38.342: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:38.342: INFO: Pod daemon-set-wvss9 is not available
Dec 22 14:29:39.340: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:39.340: INFO: Pod daemon-set-wvss9 is not available
Dec 22 14:29:40.341: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:40.341: INFO: Pod daemon-set-wvss9 is not available
Dec 22 14:29:41.340: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:41.340: INFO: Pod daemon-set-wvss9 is not available
Dec 22 14:29:42.339: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:42.339: INFO: Pod daemon-set-wvss9 is not available
Dec 22 14:29:43.341: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:43.341: INFO: Pod daemon-set-wvss9 is not available
Dec 22 14:29:44.338: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:44.338: INFO: Pod daemon-set-wvss9 is not available
Dec 22 14:29:45.344: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:45.344: INFO: Pod daemon-set-wvss9 is not available
Dec 22 14:29:46.341: INFO: Wrong image for pod: daemon-set-wvss9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 22 14:29:46.341: INFO: Pod daemon-set-wvss9 is not available
Dec 22 14:29:47.336: INFO: Pod daemon-set-wzbv8 is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Dec 22 14:29:47.345: INFO: Number of nodes with available pods: 1
Dec 22 14:29:47.345: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:29:48.360: INFO: Number of nodes with available pods: 1
Dec 22 14:29:48.360: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:29:49.429: INFO: Number of nodes with available pods: 1
Dec 22 14:29:49.429: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:29:50.360: INFO: Number of nodes with available pods: 1
Dec 22 14:29:50.360: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:29:51.370: INFO: Number of nodes with available pods: 1
Dec 22 14:29:51.370: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:29:52.363: INFO: Number of nodes with available pods: 1
Dec 22 14:29:52.363: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:29:53.563: INFO: Number of nodes with available pods: 1
Dec 22 14:29:53.563: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:29:54.360: INFO: Number of nodes with available pods: 1
Dec 22 14:29:54.360: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:29:55.463: INFO: Number of nodes with available pods: 1
Dec 22 14:29:55.463: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:29:56.362: INFO: Number of nodes with available pods: 1
Dec 22 14:29:56.362: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:29:57.653: INFO: Number of nodes with available pods: 1
Dec 22 14:29:57.653: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:29:58.372: INFO: Number of nodes with available pods: 1
Dec 22 14:29:58.372: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:29:59.361: INFO: Number of nodes with available pods: 1
Dec 22 14:29:59.361: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:30:00.374: INFO: Number of nodes with available pods: 1
Dec 22 14:30:00.374: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:30:01.364: INFO: Number of nodes with available pods: 1
Dec 22 14:30:01.364: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:30:03.561: INFO: Number of nodes with available pods: 1
Dec 22 14:30:03.561: INFO: Node iruya-node is running more than one daemon pod
Dec 22 14:30:04.367: INFO: Number of nodes with available pods: 2
Dec 22 14:30:04.367: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5535, will wait for the garbage collector to delete the pods
Dec 22 14:30:04.450: INFO: Deleting DaemonSet.extensions daemon-set took: 9.761536ms
Dec 22 14:30:04.951: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.467952ms
Dec 22 14:30:16.660: INFO: Number of nodes with available pods: 0
Dec 22 14:30:16.660: INFO: Number of running nodes: 0, number of available pods: 0
Dec 22 14:30:16.663: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5535/daemonsets","resourceVersion":"17650607"},"items":null}

Dec 22 14:30:16.666: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5535/pods","resourceVersion":"17650607"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:30:16.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5535" for this suite.
Dec 22 14:30:24.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:30:24.838: INFO: namespace daemonsets-5535 deletion completed in 8.155528089s

• [SLOW TEST:105.392 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
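
The rolling-update spec above flips the DaemonSet's pod template image from docker.io/library/nginx:1.14-alpine to gcr.io/kubernetes-e2e-test-images/redis:1.0 and then waits while the controller replaces daemon pods one node at a time, which is the long run of "Wrong image for pod" lines. A sketch of the update that kicks this off; namespace and names mirror the log, and the client-go signatures are again the pre-1.18, context-free ones:

package main

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    ds, err := cs.AppsV1().DaemonSets("daemonsets-5535").Get("daemon-set", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    // With spec.updateStrategy.type=RollingUpdate (the default), changing the
    // pod template image makes the controller delete and recreate daemon pods
    // one node at a time, exactly the churn logged above.
    ds.Spec.Template.Spec.Containers[0].Image = "gcr.io/kubernetes-e2e-test-images/redis:1.0"
    if _, err := cs.AppsV1().DaemonSets(ds.Namespace).Update(ds); err != nil {
        panic(err)
    }
    fmt.Println("daemon set image updated; rolling update begins")
}
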
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:30:24.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:30:30.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9285" for this suite.
Dec 22 14:30:36.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:30:36.845: INFO: namespace watch-9285 deletion completed in 6.282005479s

• [SLOW TEST:12.006 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
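
The watch spec produces events from a background goroutine, then opens several watches at the same resourceVersion and asserts they replay an identical sequence. A rough sketch of that check, assuming something else is concurrently mutating ConfigMaps in the namespace (as the test's goroutine does); the namespace and event count are placeholders:

package main

import (
    "fmt"
    "reflect"

    "k8s.io/apimachinery/pkg/api/meta"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    ns := "default" // hypothetical; the spec uses its own throwaway namespace

    list, err := cs.CoreV1().ConfigMaps(ns).List(metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    start := list.ResourceVersion

    // Collect the resourceVersions of the next n events seen by one watch
    // opened at the recorded version. Blocks unless events are being produced.
    collect := func(n int) []string {
        w, err := cs.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{ResourceVersion: start})
        if err != nil {
            panic(err)
        }
        defer w.Stop()
        var seen []string
        for ev := range w.ResultChan() {
            if o, err := meta.Accessor(ev.Object); err == nil {
                seen = append(seen, o.GetResourceVersion())
            }
            if len(seen) == n {
                break
            }
        }
        return seen
    }

    // Both watches replay history from the same version, so the sequences
    // must match element for element.
    a, b := collect(5), collect(5)
    fmt.Println("same order:", reflect.DeepEqual(a, b))
}
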
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:30:36.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-882c47e0-dbd0-47df-847b-e8179a332dc3
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:30:57.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9528" for this suite.
Dec 22 14:31:21.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:31:21.789: INFO: namespace configmap-9528 deletion completed in 24.153880029s

• [SLOW TEST:44.944 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
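
The spec above depends on a ConfigMap carrying both data (UTF-8 strings) and binaryData (arbitrary bytes), with each key surfacing as a file once the ConfigMap is mounted as a volume. A sketch of such an object; the name echoes the log, the payload is made up:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    cm := &corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd"},
        // data holds UTF-8 strings; binaryData holds raw bytes. A pod mounting
        // this ConfigMap as a volume sees one file per key from either map.
        Data:       map[string]string{"data-1": "value-1"},
        BinaryData: map[string][]byte{"dump": {0xde, 0xca, 0xfe, 0x00, 0xff}},
    }
    if _, err := cs.CoreV1().ConfigMaps("default").Create(cm); err != nil {
        panic(err)
    }
    fmt.Println("configmap with text and binary keys created")
}
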
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:31:21.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 22 14:31:22.048: INFO: Waiting up to 5m0s for pod "pod-5924f491-624b-4db6-914e-f04e2acaae57" in namespace "emptydir-8239" to be "success or failure"
Dec 22 14:31:22.054: INFO: Pod "pod-5924f491-624b-4db6-914e-f04e2acaae57": Phase="Pending", Reason="", readiness=false. Elapsed: 6.169783ms
Dec 22 14:31:24.066: INFO: Pod "pod-5924f491-624b-4db6-914e-f04e2acaae57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017906249s
Dec 22 14:31:26.152: INFO: Pod "pod-5924f491-624b-4db6-914e-f04e2acaae57": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10377685s
Dec 22 14:31:28.158: INFO: Pod "pod-5924f491-624b-4db6-914e-f04e2acaae57": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11040662s
Dec 22 14:31:30.170: INFO: Pod "pod-5924f491-624b-4db6-914e-f04e2acaae57": Phase="Pending", Reason="", readiness=false. Elapsed: 8.121970147s
Dec 22 14:31:32.176: INFO: Pod "pod-5924f491-624b-4db6-914e-f04e2acaae57": Phase="Pending", Reason="", readiness=false. Elapsed: 10.12811938s
Dec 22 14:31:34.195: INFO: Pod "pod-5924f491-624b-4db6-914e-f04e2acaae57": Phase="Pending", Reason="", readiness=false. Elapsed: 12.147179461s
Dec 22 14:31:36.203: INFO: Pod "pod-5924f491-624b-4db6-914e-f04e2acaae57": Phase="Pending", Reason="", readiness=false. Elapsed: 14.154629078s
Dec 22 14:31:38.230: INFO: Pod "pod-5924f491-624b-4db6-914e-f04e2acaae57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.182196212s
STEP: Saw pod success
Dec 22 14:31:38.230: INFO: Pod "pod-5924f491-624b-4db6-914e-f04e2acaae57" satisfied condition "success or failure"
Dec 22 14:31:38.233: INFO: Trying to get logs from node iruya-node pod pod-5924f491-624b-4db6-914e-f04e2acaae57 container test-container: 
STEP: delete the pod
Dec 22 14:31:38.386: INFO: Waiting for pod pod-5924f491-624b-4db6-914e-f04e2acaae57 to disappear
Dec 22 14:31:38.393: INFO: Pod pod-5924f491-624b-4db6-914e-f04e2acaae57 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:31:38.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8239" for this suite.
Dec 22 14:31:44.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:31:44.580: INFO: namespace emptydir-8239 deletion completed in 6.174767738s

• [SLOW TEST:22.790 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
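
The emptyDir spec runs a short-lived pod as a non-root user, writes a 0644 file into an emptyDir volume on the default (node-disk) medium, and treats a clean exit as the "success or failure" condition. A sketch of an equivalent pod; the suite actually uses its mounttest image, so busybox, the UID, and the shell command here are stand-ins:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    nonRoot := int64(1001) // any non-zero UID
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox",
                Command: []string{"sh", "-c",
                    "touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"},
                SecurityContext: &corev1.SecurityContext{RunAsUser: &nonRoot},
                VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                // default medium = node disk; Medium: "Memory" would use tmpfs
                VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
        panic(err)
    }
    fmt.Println("pod created; success means it exits 0 after printing the file mode")
}
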
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:31:44.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 22 14:31:59.485: INFO: Successfully updated pod "annotationupdate187ac940-c6cf-4601-94bb-ac5eadfeb02c"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:32:01.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6786" for this suite.
Dec 22 14:32:41.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:32:42.077: INFO: namespace projected-6786 deletion completed in 40.49679572s

• [SLOW TEST:57.497 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
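
The downwardAPI spec mounts a projected volume exposing metadata.annotations as a file and then updates the pod's annotations; the kubelet rewrites the file atomically, which is what the "Successfully updated pod" line reflects. A sketch of such a pod; names, image, and annotation values are made up:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:        "annotationupdate-demo",
            Annotations: map[string]string{"builder": "bar"},
        },
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:         "client-container",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            DownwardAPI: &corev1.DownwardAPIProjection{
                                Items: []corev1.DownwardAPIVolumeFile{{
                                    Path:     "annotations",
                                    FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
                                }},
                            },
                        }},
                    },
                },
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
        panic(err)
    }
    // Updating metadata.annotations on the live pod later causes the kubelet's
    // atomic writer to refresh the projected file, as the spec verifies.
    fmt.Println("pod created; edit its annotations and watch the file change")
}
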
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:32:42.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Dec 22 14:32:42.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Dec 22 14:32:42.596: INFO: stderr: ""
Dec 22 14:32:42.596: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:32:42.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4114" for this suite.
Dec 22 14:32:48.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:32:48.793: INFO: namespace kubectl-4114 deletion completed in 6.177028449s

• [SLOW TEST:6.715 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
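
The kubectl api-versions output above is just group/version discovery; the same check (is "v1" served?) can be made directly against the discovery endpoint. A small sketch, using the context-free client-go signatures matching this cluster:

package main

import (
    "fmt"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    // ServerGroups is the discovery call behind `kubectl api-versions`.
    groups, err := cs.Discovery().ServerGroups()
    if err != nil {
        panic(err)
    }
    found := false
    for _, g := range groups.Groups {
        for _, v := range g.Versions {
            if v.GroupVersion == "v1" { // the core/legacy group
                found = true
            }
        }
    }
    fmt.Println("v1 available:", found)
}
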
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:32:48.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-26bc71d0-55a8-40d6-8a74-786b0648d343
STEP: Creating a pod to test consume configMaps
Dec 22 14:32:49.070: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e31b2f72-38cf-4e9b-a2da-fe53dfda2c7f" in namespace "projected-6601" to be "success or failure"
Dec 22 14:32:49.091: INFO: Pod "pod-projected-configmaps-e31b2f72-38cf-4e9b-a2da-fe53dfda2c7f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.842269ms
Dec 22 14:32:51.103: INFO: Pod "pod-projected-configmaps-e31b2f72-38cf-4e9b-a2da-fe53dfda2c7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033331365s
Dec 22 14:32:53.110: INFO: Pod "pod-projected-configmaps-e31b2f72-38cf-4e9b-a2da-fe53dfda2c7f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039796831s
Dec 22 14:32:55.117: INFO: Pod "pod-projected-configmaps-e31b2f72-38cf-4e9b-a2da-fe53dfda2c7f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046905394s
Dec 22 14:32:57.129: INFO: Pod "pod-projected-configmaps-e31b2f72-38cf-4e9b-a2da-fe53dfda2c7f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059690306s
Dec 22 14:32:59.138: INFO: Pod "pod-projected-configmaps-e31b2f72-38cf-4e9b-a2da-fe53dfda2c7f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.068433628s
Dec 22 14:33:01.670: INFO: Pod "pod-projected-configmaps-e31b2f72-38cf-4e9b-a2da-fe53dfda2c7f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.600408987s
Dec 22 14:33:03.685: INFO: Pod "pod-projected-configmaps-e31b2f72-38cf-4e9b-a2da-fe53dfda2c7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.614893251s
STEP: Saw pod success
Dec 22 14:33:03.685: INFO: Pod "pod-projected-configmaps-e31b2f72-38cf-4e9b-a2da-fe53dfda2c7f" satisfied condition "success or failure"
Dec 22 14:33:03.689: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-e31b2f72-38cf-4e9b-a2da-fe53dfda2c7f container projected-configmap-volume-test: 
STEP: delete the pod
Dec 22 14:33:03.829: INFO: Waiting for pod pod-projected-configmaps-e31b2f72-38cf-4e9b-a2da-fe53dfda2c7f to disappear
Dec 22 14:33:03.846: INFO: Pod pod-projected-configmaps-e31b2f72-38cf-4e9b-a2da-fe53dfda2c7f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:33:03.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6601" for this suite.
Dec 22 14:33:09.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:33:10.118: INFO: namespace projected-6601 deletion completed in 6.258221744s

• [SLOW TEST:21.325 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
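
The "mappings and Item mode" spec projects a single ConfigMap key to a renamed path with its own file mode. A sketch of the volume shape it builds; the ConfigMap name, key, path, mode, and image are illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    mode := int32(0400)
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:         "projected-configmap-volume-test",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "ls -lR /etc/cm && cat /etc/cm/path/to/data-2"},
                VolumeMounts: []corev1.VolumeMount{{Name: "cm-volume", MountPath: "/etc/cm"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "cm-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
                                // "mappings and Item mode": each listed key is
                                // renamed via path and gets its own file mode;
                                // unlisted keys are not projected at all.
                                Items: []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2", Mode: &mode}},
                            },
                        }},
                    },
                },
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
        panic(err)
    }
    fmt.Println("pod created; only data-2 appears, at path/to/data-2 with mode 0400")
}
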
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:33:10.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Dec 22 14:33:10.535: INFO: Waiting up to 5m0s for pod "client-containers-c45091fc-5d72-449d-a8f2-a024f5138769" in namespace "containers-3997" to be "success or failure"
Dec 22 14:33:10.552: INFO: Pod "client-containers-c45091fc-5d72-449d-a8f2-a024f5138769": Phase="Pending", Reason="", readiness=false. Elapsed: 16.766087ms
Dec 22 14:33:12.566: INFO: Pod "client-containers-c45091fc-5d72-449d-a8f2-a024f5138769": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030708194s
Dec 22 14:33:14.578: INFO: Pod "client-containers-c45091fc-5d72-449d-a8f2-a024f5138769": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042878012s
Dec 22 14:33:16.592: INFO: Pod "client-containers-c45091fc-5d72-449d-a8f2-a024f5138769": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057617656s
Dec 22 14:33:18.608: INFO: Pod "client-containers-c45091fc-5d72-449d-a8f2-a024f5138769": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073626205s
Dec 22 14:33:20.618: INFO: Pod "client-containers-c45091fc-5d72-449d-a8f2-a024f5138769": Phase="Pending", Reason="", readiness=false. Elapsed: 10.082971866s
Dec 22 14:33:22.627: INFO: Pod "client-containers-c45091fc-5d72-449d-a8f2-a024f5138769": Phase="Pending", Reason="", readiness=false. Elapsed: 12.092562418s
Dec 22 14:33:24.636: INFO: Pod "client-containers-c45091fc-5d72-449d-a8f2-a024f5138769": Phase="Pending", Reason="", readiness=false. Elapsed: 14.101355844s
Dec 22 14:33:26.653: INFO: Pod "client-containers-c45091fc-5d72-449d-a8f2-a024f5138769": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.118675968s
STEP: Saw pod success
Dec 22 14:33:26.654: INFO: Pod "client-containers-c45091fc-5d72-449d-a8f2-a024f5138769" satisfied condition "success or failure"
Dec 22 14:33:26.662: INFO: Trying to get logs from node iruya-node pod client-containers-c45091fc-5d72-449d-a8f2-a024f5138769 container test-container: 
STEP: delete the pod
Dec 22 14:33:26.894: INFO: Waiting for pod client-containers-c45091fc-5d72-449d-a8f2-a024f5138769 to disappear
Dec 22 14:33:26.918: INFO: Pod client-containers-c45091fc-5d72-449d-a8f2-a024f5138769 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:33:26.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3997" for this suite.
Dec 22 14:33:32.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:33:33.038: INFO: namespace containers-3997 deletion completed in 6.114542089s

• [SLOW TEST:22.920 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
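
The Docker Containers spec relies on the mapping between pod fields and image metadata: container command overrides the image ENTRYPOINT, args overrides its CMD, and this test sets only args. A sketch; the suite uses an entrypoint-tester image, so busybox and the argument list are stand-ins:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox",
                // args replaces the image's CMD while leaving its ENTRYPOINT
                // alone; command (unset here) would replace the ENTRYPOINT.
                Args: []string{"echo", "overridden", "arguments"},
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
        panic(err)
    }
    fmt.Println("pod created; its logs should show the overridden arguments")
}
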
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:33:33.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Dec 22 14:33:33.226: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:33:33.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2908" for this suite.
Dec 22 14:33:39.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:33:39.498: INFO: namespace kubectl-2908 deletion completed in 6.159485601s

• [SLOW TEST:6.459 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
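
With -p 0, kubectl proxy binds an ephemeral port and announces the bound address on stdout, so the test parses that line before curling /api/. A sketch of the same dance from Go, assuming kubectl is on PATH and prints its usual "Starting to serve on 127.0.0.1:<port>" line:

package main

import (
    "bufio"
    "fmt"
    "io/ioutil"
    "net/http"
    "os/exec"
    "regexp"
)

func main() {
    // --disable-filter and the kubeconfig path match the spec's invocation.
    cmd := exec.Command("kubectl", "--kubeconfig=/root/.kube/config",
        "proxy", "-p", "0", "--disable-filter")
    stdout, err := cmd.StdoutPipe()
    if err != nil {
        panic(err)
    }
    if err := cmd.Start(); err != nil {
        panic(err)
    }
    defer cmd.Process.Kill()

    // First stdout line carries the ephemeral port the kernel picked.
    line, err := bufio.NewReader(stdout).ReadString('\n')
    if err != nil {
        panic(err)
    }
    m := regexp.MustCompile(`:(\d+)`).FindStringSubmatch(line)
    if m == nil {
        panic("could not parse proxy port from: " + line)
    }

    resp, err := http.Get("http://127.0.0.1:" + m[1] + "/api/")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    body, _ := ioutil.ReadAll(resp.Body)
    fmt.Printf("%s\n", body)
}
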
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:33:39.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-dr4s
STEP: Creating a pod to test atomic-volume-subpath
Dec 22 14:33:39.902: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-dr4s" in namespace "subpath-6494" to be "success or failure"
Dec 22 14:33:39.935: INFO: Pod "pod-subpath-test-projected-dr4s": Phase="Pending", Reason="", readiness=false. Elapsed: 32.458821ms
Dec 22 14:33:41.977: INFO: Pod "pod-subpath-test-projected-dr4s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073939589s
Dec 22 14:33:43.994: INFO: Pod "pod-subpath-test-projected-dr4s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09150173s
Dec 22 14:33:46.003: INFO: Pod "pod-subpath-test-projected-dr4s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100395873s
Dec 22 14:33:48.008: INFO: Pod "pod-subpath-test-projected-dr4s": Phase="Pending", Reason="", readiness=false. Elapsed: 8.105692066s
Dec 22 14:33:50.017: INFO: Pod "pod-subpath-test-projected-dr4s": Phase="Pending", Reason="", readiness=false. Elapsed: 10.114379858s
Dec 22 14:33:52.030: INFO: Pod "pod-subpath-test-projected-dr4s": Phase="Pending", Reason="", readiness=false. Elapsed: 12.127483383s
Dec 22 14:33:54.036: INFO: Pod "pod-subpath-test-projected-dr4s": Phase="Pending", Reason="", readiness=false. Elapsed: 14.133027191s
Dec 22 14:33:56.074: INFO: Pod "pod-subpath-test-projected-dr4s": Phase="Running", Reason="", readiness=true. Elapsed: 16.171014765s
Dec 22 14:33:58.119: INFO: Pod "pod-subpath-test-projected-dr4s": Phase="Running", Reason="", readiness=true. Elapsed: 18.216018405s
Dec 22 14:34:00.130: INFO: Pod "pod-subpath-test-projected-dr4s": Phase="Running", Reason="", readiness=true. Elapsed: 20.227127711s
Dec 22 14:34:02.139: INFO: Pod "pod-subpath-test-projected-dr4s": Phase="Running", Reason="", readiness=true. Elapsed: 22.23628904s
Dec 22 14:34:04.146: INFO: Pod "pod-subpath-test-projected-dr4s": Phase="Running", Reason="", readiness=true. Elapsed: 24.243685833s
Dec 22 14:34:06.153: INFO: Pod "pod-subpath-test-projected-dr4s": Phase="Running", Reason="", readiness=true. Elapsed: 26.250526941s
Dec 22 14:34:08.160: INFO: Pod "pod-subpath-test-projected-dr4s": Phase="Running", Reason="", readiness=true. Elapsed: 28.256977314s
Dec 22 14:34:10.167: INFO: Pod "pod-subpath-test-projected-dr4s": Phase="Running", Reason="", readiness=true. Elapsed: 30.264123603s
Dec 22 14:34:12.171: INFO: Pod "pod-subpath-test-projected-dr4s": Phase="Running", Reason="", readiness=true. Elapsed: 32.268698226s
Dec 22 14:34:14.175: INFO: Pod "pod-subpath-test-projected-dr4s": Phase="Running", Reason="", readiness=true. Elapsed: 34.272730206s
Dec 22 14:34:16.195: INFO: Pod "pod-subpath-test-projected-dr4s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.292247312s
STEP: Saw pod success
Dec 22 14:34:16.195: INFO: Pod "pod-subpath-test-projected-dr4s" satisfied condition "success or failure"
Dec 22 14:34:16.199: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-dr4s container test-container-subpath-projected-dr4s: 
STEP: delete the pod
Dec 22 14:34:16.401: INFO: Waiting for pod pod-subpath-test-projected-dr4s to disappear
Dec 22 14:34:16.491: INFO: Pod pod-subpath-test-projected-dr4s no longer exists
STEP: Deleting pod pod-subpath-test-projected-dr4s
Dec 22 14:34:16.491: INFO: Deleting pod "pod-subpath-test-projected-dr4s" in namespace "subpath-6494"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:34:16.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6494" for this suite.
Dec 22 14:34:22.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:34:22.775: INFO: namespace subpath-6494 deletion completed in 6.192395051s

• [SLOW TEST:43.278 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
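
The subpath specs mount one entry of an atomic-writer volume via volumeMounts[].subPath (a projected volume here; the next spec uses a plain configMap volume and differs only in the volume source), then verify the container keeps seeing the file while the kubelet's symlink-swap updates happen underneath. A sketch of the pod shape; the ConfigMap name, key, and image are hypothetical:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-projected-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "test-container-subpath",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /test-volume/data && sleep 30"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "test-volume",
                    MountPath: "/test-volume/data",
                    // subPath mounts a single entry of the volume rather than
                    // the whole directory; "my-key" must exist as a key in the
                    // referenced ConfigMap.
                    SubPath: "my-key",
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
                            },
                        }},
                    },
                },
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
        panic(err)
    }
    fmt.Println("subpath pod created")
}
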
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:34:22.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-tl7g
STEP: Creating a pod to test atomic-volume-subpath
Dec 22 14:34:23.163: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-tl7g" in namespace "subpath-9757" to be "success or failure"
Dec 22 14:34:23.284: INFO: Pod "pod-subpath-test-configmap-tl7g": Phase="Pending", Reason="", readiness=false. Elapsed: 120.28404ms
Dec 22 14:34:25.291: INFO: Pod "pod-subpath-test-configmap-tl7g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127533087s
Dec 22 14:34:28.015: INFO: Pod "pod-subpath-test-configmap-tl7g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.851257752s
Dec 22 14:34:30.022: INFO: Pod "pod-subpath-test-configmap-tl7g": Phase="Pending", Reason="", readiness=false. Elapsed: 6.858374529s
Dec 22 14:34:32.028: INFO: Pod "pod-subpath-test-configmap-tl7g": Phase="Pending", Reason="", readiness=false. Elapsed: 8.86433055s
Dec 22 14:34:34.038: INFO: Pod "pod-subpath-test-configmap-tl7g": Phase="Pending", Reason="", readiness=false. Elapsed: 10.874586892s
Dec 22 14:34:36.048: INFO: Pod "pod-subpath-test-configmap-tl7g": Phase="Pending", Reason="", readiness=false. Elapsed: 12.884244047s
Dec 22 14:34:38.090: INFO: Pod "pod-subpath-test-configmap-tl7g": Phase="Running", Reason="", readiness=true. Elapsed: 14.926346989s
Dec 22 14:34:40.099: INFO: Pod "pod-subpath-test-configmap-tl7g": Phase="Running", Reason="", readiness=true. Elapsed: 16.935403279s
Dec 22 14:34:42.133: INFO: Pod "pod-subpath-test-configmap-tl7g": Phase="Running", Reason="", readiness=true. Elapsed: 18.969330823s
Dec 22 14:34:44.145: INFO: Pod "pod-subpath-test-configmap-tl7g": Phase="Running", Reason="", readiness=true. Elapsed: 20.981604772s
Dec 22 14:34:46.154: INFO: Pod "pod-subpath-test-configmap-tl7g": Phase="Running", Reason="", readiness=true. Elapsed: 22.990595507s
Dec 22 14:34:48.160: INFO: Pod "pod-subpath-test-configmap-tl7g": Phase="Running", Reason="", readiness=true. Elapsed: 24.996761513s
Dec 22 14:34:50.168: INFO: Pod "pod-subpath-test-configmap-tl7g": Phase="Running", Reason="", readiness=true. Elapsed: 27.00458671s
Dec 22 14:34:52.179: INFO: Pod "pod-subpath-test-configmap-tl7g": Phase="Running", Reason="", readiness=true. Elapsed: 29.015735013s
Dec 22 14:34:54.184: INFO: Pod "pod-subpath-test-configmap-tl7g": Phase="Running", Reason="", readiness=true. Elapsed: 31.020871252s
Dec 22 14:34:56.192: INFO: Pod "pod-subpath-test-configmap-tl7g": Phase="Running", Reason="", readiness=true. Elapsed: 33.028964871s
Dec 22 14:34:59.571: INFO: Pod "pod-subpath-test-configmap-tl7g": Phase="Running", Reason="", readiness=true. Elapsed: 36.40776452s
Dec 22 14:35:01.585: INFO: Pod "pod-subpath-test-configmap-tl7g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.421752401s
STEP: Saw pod success
Dec 22 14:35:01.585: INFO: Pod "pod-subpath-test-configmap-tl7g" satisfied condition "success or failure"
Dec 22 14:35:01.606: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-tl7g container test-container-subpath-configmap-tl7g: 
STEP: delete the pod
Dec 22 14:35:02.078: INFO: Waiting for pod pod-subpath-test-configmap-tl7g to disappear
Dec 22 14:35:02.099: INFO: Pod pod-subpath-test-configmap-tl7g no longer exists
STEP: Deleting pod pod-subpath-test-configmap-tl7g
Dec 22 14:35:02.099: INFO: Deleting pod "pod-subpath-test-configmap-tl7g" in namespace "subpath-9757"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:35:02.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9757" for this suite.
Dec 22 14:35:08.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:35:08.303: INFO: namespace subpath-9757 deletion completed in 6.19421438s

• [SLOW TEST:45.528 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:35:08.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-c9ab5969-108d-4db7-8d44-02b1f52828a5 in namespace container-probe-4749
Dec 22 14:35:24.627: INFO: Started pod liveness-c9ab5969-108d-4db7-8d44-02b1f52828a5 in namespace container-probe-4749
STEP: checking the pod's current state and verifying that restartCount is present
Dec 22 14:35:24.642: INFO: Initial restart count of pod liveness-c9ab5969-108d-4db7-8d44-02b1f52828a5 is 0
Dec 22 14:35:44.767: INFO: Restart count of pod container-probe-4749/liveness-c9ab5969-108d-4db7-8d44-02b1f52828a5 is now 1 (20.12439522s elapsed)
Dec 22 14:36:04.298: INFO: Restart count of pod container-probe-4749/liveness-c9ab5969-108d-4db7-8d44-02b1f52828a5 is now 2 (39.655608064s elapsed)
Dec 22 14:36:24.477: INFO: Restart count of pod container-probe-4749/liveness-c9ab5969-108d-4db7-8d44-02b1f52828a5 is now 3 (59.835114441s elapsed)
Dec 22 14:36:44.763: INFO: Restart count of pod container-probe-4749/liveness-c9ab5969-108d-4db7-8d44-02b1f52828a5 is now 4 (1m20.120518567s elapsed)
Dec 22 14:37:02.883: INFO: Restart count of pod container-probe-4749/liveness-c9ab5969-108d-4db7-8d44-02b1f52828a5 is now 5 (1m38.24028291s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:37:02.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4749" for this suite.
Dec 22 14:37:09.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:37:09.115: INFO: namespace container-probe-4749 deletion completed in 6.124674685s

• [SLOW TEST:120.812 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
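
The probe spec creates a pod whose liveness probe keeps failing and asserts that status.containerStatuses[].restartCount only ever grows, roughly every 20 s above (probe period plus restart backoff). A sketch of that polling check against an existing pod; the namespace and pod name are placeholders:

package main

import (
    "fmt"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    ns, name := "default", "liveness-demo" // hypothetical; the spec creates its own
    last := int32(-1)
    for i := 0; i < 10; i++ {
        pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        rc := pod.Status.ContainerStatuses[0].RestartCount
        // The invariant under test: kubelet only ever increments restartCount.
        if rc < last {
            panic(fmt.Sprintf("restart count went backwards: %d -> %d", last, rc))
        }
        if rc > last {
            fmt.Printf("restart count is now %d\n", rc)
        }
        last = rc
        time.Sleep(10 * time.Second)
    }
}
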
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:37:09.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 22 14:37:35.443: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 22 14:37:35.565: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 22 14:37:37.565: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 22 14:37:37.572: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 22 14:37:39.565: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 22 14:37:39.578: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 22 14:37:41.565: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 22 14:37:41.577: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 22 14:37:43.565: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 22 14:37:43.573: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 22 14:37:45.565: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 22 14:37:45.582: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 22 14:37:47.565: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 22 14:37:47.575: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 22 14:37:49.565: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 22 14:37:49.573: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 22 14:37:51.565: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 22 14:37:51.572: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 22 14:37:53.565: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 22 14:37:53.626: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 22 14:37:55.565: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 22 14:37:55.959: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 22 14:37:57.565: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 22 14:37:57.571: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 22 14:37:59.565: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 22 14:37:59.573: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 22 14:38:01.565: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 22 14:38:01.572: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:38:01.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7904" for this suite.
Dec 22 14:38:41.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:38:41.896: INFO: namespace container-lifecycle-hook-7904 deletion completed in 40.295400253s

• [SLOW TEST:92.779 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
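
The lifecycle spec gives a container an exec preStop hook, deletes the pod, and then polls until deletion completes; the kubelet runs the hook to completion before sending SIGTERM, which accounts for the ~26 s of "still exists" lines above. A sketch of such a pod; on the v1.15 API vintage of this log the hook type is corev1.Handler (renamed LifecycleHandler from 1.23 on), and the helper URL the hook hits is hypothetical, standing in for the spec's handler pod:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "pod-with-prestop-exec-hook",
                Image:   "busybox",
                Command: []string{"sh", "-c", "sleep 600"},
                Lifecycle: &corev1.Lifecycle{
                    PreStop: &corev1.Handler{
                        Exec: &corev1.ExecAction{
                            // Runs inside the container on deletion, before
                            // SIGTERM; the target service is made up here.
                            Command: []string{"wget", "-qO-", "http://handler-svc:8080/echo?msg=prestop"},
                        },
                    },
                },
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
        panic(err)
    }
    fmt.Println("pod created; delete it to fire the prestop hook")
}
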
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:38:41.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-52e52f05-b6e8-4861-9bed-4a356bbfd0b1
STEP: Creating a pod to test consume secrets
Dec 22 14:38:42.139: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9f8a7be3-03e7-4c06-8320-5db32186d7ba" in namespace "projected-8994" to be "success or failure"
Dec 22 14:38:42.458: INFO: Pod "pod-projected-secrets-9f8a7be3-03e7-4c06-8320-5db32186d7ba": Phase="Pending", Reason="", readiness=false. Elapsed: 319.312627ms
Dec 22 14:38:44.479: INFO: Pod "pod-projected-secrets-9f8a7be3-03e7-4c06-8320-5db32186d7ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.340128481s
Dec 22 14:38:46.490: INFO: Pod "pod-projected-secrets-9f8a7be3-03e7-4c06-8320-5db32186d7ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.35097242s
Dec 22 14:38:48.503: INFO: Pod "pod-projected-secrets-9f8a7be3-03e7-4c06-8320-5db32186d7ba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.364374858s
Dec 22 14:38:50.518: INFO: Pod "pod-projected-secrets-9f8a7be3-03e7-4c06-8320-5db32186d7ba": Phase="Pending", Reason="", readiness=false. Elapsed: 8.379489503s
Dec 22 14:38:52.580: INFO: Pod "pod-projected-secrets-9f8a7be3-03e7-4c06-8320-5db32186d7ba": Phase="Pending", Reason="", readiness=false. Elapsed: 10.441152652s
Dec 22 14:38:54.591: INFO: Pod "pod-projected-secrets-9f8a7be3-03e7-4c06-8320-5db32186d7ba": Phase="Pending", Reason="", readiness=false. Elapsed: 12.452337678s
Dec 22 14:38:56.604: INFO: Pod "pod-projected-secrets-9f8a7be3-03e7-4c06-8320-5db32186d7ba": Phase="Pending", Reason="", readiness=false. Elapsed: 14.464714707s
Dec 22 14:39:01.454: INFO: Pod "pod-projected-secrets-9f8a7be3-03e7-4c06-8320-5db32186d7ba": Phase="Pending", Reason="", readiness=false. Elapsed: 19.315018061s
Dec 22 14:39:03.461: INFO: Pod "pod-projected-secrets-9f8a7be3-03e7-4c06-8320-5db32186d7ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.321955342s
STEP: Saw pod success
Dec 22 14:39:03.461: INFO: Pod "pod-projected-secrets-9f8a7be3-03e7-4c06-8320-5db32186d7ba" satisfied condition "success or failure"
Dec 22 14:39:03.466: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-9f8a7be3-03e7-4c06-8320-5db32186d7ba container projected-secret-volume-test: 
STEP: delete the pod
Dec 22 14:39:03.725: INFO: Waiting for pod pod-projected-secrets-9f8a7be3-03e7-4c06-8320-5db32186d7ba to disappear
Dec 22 14:39:03.763: INFO: Pod pod-projected-secrets-9f8a7be3-03e7-4c06-8320-5db32186d7ba no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:39:03.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8994" for this suite.
Dec 22 14:39:10.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:39:10.713: INFO: namespace projected-8994 deletion completed in 6.932605138s

• [SLOW TEST:28.817 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
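The test above consumes a Secret through a projected volume as a non-root user with defaultMode and fsGroup set. A minimal sketch of that shape, assuming a Secret named projected-secret-test with a key data-1 already exists (names and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  securityContext:
    runAsUser: 1000      # non-root
    fsGroup: 2000        # projected files are group-owned by this GID
  containers:
  - name: projected-secret-volume-test
    image: busybox       # illustrative
    command: ["sh", "-c", "ls -ln /etc/projected && cat /etc/projected/data-1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: secret-vol
    projected:
      defaultMode: 0440  # applied to every projected file
      sources:
      - secret:
          name: projected-secret-test

defaultMode: 0440 keeps the files readable by the fsGroup, which is what lets the runAsUser: 1000 container consume them.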
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:39:10.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 22 14:39:10.815: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:39:34.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5211" for this suite.
Dec 22 14:39:40.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:39:40.479: INFO: namespace init-container-5211 deletion completed in 6.216525537s

• [SLOW TEST:29.765 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
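What this test asserts: with restartPolicy: Never, a failing init container is terminal. A minimal sketch (names and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: init-fail-example
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: busybox                      # illustrative
    command: ["sh", "-c", "exit 1"]     # init container exits non-zero
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo app ran"]   # never starts

The failed init container is not retried, the pod ends in phase Failed, and the app container never runs.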
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:39:40.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Dec 22 14:39:40.654: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Dec 22 14:39:40.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6494'
Dec 22 14:39:43.598: INFO: stderr: ""
Dec 22 14:39:43.598: INFO: stdout: "service/redis-slave created\n"
Dec 22 14:39:43.599: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Dec 22 14:39:43.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6494'
Dec 22 14:39:44.618: INFO: stderr: ""
Dec 22 14:39:44.618: INFO: stdout: "service/redis-master created\n"
Dec 22 14:39:44.619: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Dec 22 14:39:44.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6494'
Dec 22 14:39:44.974: INFO: stderr: ""
Dec 22 14:39:44.974: INFO: stdout: "service/frontend created\n"
Dec 22 14:39:44.975: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Dec 22 14:39:44.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6494'
Dec 22 14:39:45.567: INFO: stderr: ""
Dec 22 14:39:45.567: INFO: stdout: "deployment.apps/frontend created\n"
Dec 22 14:39:45.567: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Dec 22 14:39:45.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6494'
Dec 22 14:39:46.231: INFO: stderr: ""
Dec 22 14:39:46.231: INFO: stdout: "deployment.apps/redis-master created\n"
Dec 22 14:39:46.232: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Dec 22 14:39:46.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6494'
Dec 22 14:39:48.226: INFO: stderr: ""
Dec 22 14:39:48.226: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Dec 22 14:39:48.226: INFO: Waiting for all frontend pods to be Running.
Dec 22 14:40:28.278: INFO: Waiting for frontend to serve content.
Dec 22 14:40:31.791: INFO: Trying to add a new entry to the guestbook.
Dec 22 14:40:31.950: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Dec 22 14:40:31.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6494'
Dec 22 14:40:32.392: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 22 14:40:32.393: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Dec 22 14:40:32.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6494'
Dec 22 14:40:32.820: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 22 14:40:32.821: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 22 14:40:32.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6494'
Dec 22 14:40:33.240: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 22 14:40:33.240: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 22 14:40:33.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6494'
Dec 22 14:40:33.388: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 22 14:40:33.388: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 22 14:40:33.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6494'
Dec 22 14:40:33.552: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 22 14:40:33.552: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 22 14:40:33.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6494'
Dec 22 14:40:34.101: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 22 14:40:34.101: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:40:34.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6494" for this suite.
Dec 22 14:41:20.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:41:20.457: INFO: namespace kubectl-6494 deletion completed in 46.2546442s

• [SLOW TEST:99.978 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:41:20.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-c70b28b5-e80e-4f4e-9d4f-f646438b6b1d
STEP: Creating configMap with name cm-test-opt-upd-f3a9918b-d8bc-4665-832f-dd250f8a8064
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-c70b28b5-e80e-4f4e-9d4f-f646438b6b1d
STEP: Updating configmap cm-test-opt-upd-f3a9918b-d8bc-4665-832f-dd250f8a8064
STEP: Creating configMap with name cm-test-opt-create-f499cea9-74da-4411-8f20-30c53cc2f942
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:41:47.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8464" for this suite.
Dec 22 14:42:11.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:42:11.686: INFO: namespace configmap-8464 deletion completed in 24.20999405s

• [SLOW TEST:51.229 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
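The test above deletes one optional ConfigMap, updates a second, and creates a third while a pod watches the mounted volumes. A minimal sketch of an optional configMap volume (names and image are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: cm-optional-example
spec:
  containers:
  - name: cm-volume-test
    image: busybox     # illustrative
    command: ["sh", "-c", "while true; do cat /etc/cm-create/data 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: cm-create
      mountPath: /etc/cm-create
  volumes:
  - name: cm-create
    configMap:
      name: cm-test-opt-create   # may not exist yet
      optional: true             # the pod starts anyway

The kubelet periodically resyncs configMap volumes, so the update and the late creation both become visible inside the running pod without a restart, which is what "waiting to observe update in volume" polls for.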
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:42:11.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-1272
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-1272
STEP: Deleting pre-stop pod
Dec 22 14:42:47.021: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:42:47.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-1272" for this suite.
Dec 22 14:43:27.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:43:27.375: INFO: namespace prestop-1272 deletion completed in 40.237325866s

• [SLOW TEST:75.689 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
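The Saw: payload above is the server pod reporting that the tester's preStop hook called back exactly once ("prestop": 1) before the tester was killed. Besides exec handlers, the lifecycle API also accepts httpGet, the simplest way to express "notify a peer on shutdown"; a sketch with a hypothetical peer address (the conformance test uses its own tester image and callback path):

apiVersion: v1
kind: Pod
metadata:
  name: tester-example
spec:
  containers:
  - name: tester
    image: busybox       # illustrative
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        httpGet:
          host: 10.32.0.10   # hypothetical server pod IP
          port: 8080
          path: /write       # hypothetical endpoint that records the call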
SSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:43:27.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 22 14:43:27.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:43:44.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4091" for this suite.
Dec 22 14:44:48.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:44:48.874: INFO: namespace pods-4091 deletion completed in 1m4.466628584s

• [SLOW TEST:81.498 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:44:48.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 22 14:44:49.400: INFO: Waiting up to 5m0s for pod "pod-aacfade2-dbdc-4bfd-a6c1-f50180c05507" in namespace "emptydir-3711" to be "success or failure"
Dec 22 14:44:49.431: INFO: Pod "pod-aacfade2-dbdc-4bfd-a6c1-f50180c05507": Phase="Pending", Reason="", readiness=false. Elapsed: 31.754949ms
Dec 22 14:44:51.437: INFO: Pod "pod-aacfade2-dbdc-4bfd-a6c1-f50180c05507": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036871828s
Dec 22 14:44:53.446: INFO: Pod "pod-aacfade2-dbdc-4bfd-a6c1-f50180c05507": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046308863s
Dec 22 14:44:55.450: INFO: Pod "pod-aacfade2-dbdc-4bfd-a6c1-f50180c05507": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050164448s
Dec 22 14:44:57.457: INFO: Pod "pod-aacfade2-dbdc-4bfd-a6c1-f50180c05507": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057561641s
Dec 22 14:44:59.464: INFO: Pod "pod-aacfade2-dbdc-4bfd-a6c1-f50180c05507": Phase="Pending", Reason="", readiness=false. Elapsed: 10.064164578s
Dec 22 14:45:01.473: INFO: Pod "pod-aacfade2-dbdc-4bfd-a6c1-f50180c05507": Phase="Pending", Reason="", readiness=false. Elapsed: 12.073102564s
Dec 22 14:45:03.484: INFO: Pod "pod-aacfade2-dbdc-4bfd-a6c1-f50180c05507": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.084408066s
STEP: Saw pod success
Dec 22 14:45:03.484: INFO: Pod "pod-aacfade2-dbdc-4bfd-a6c1-f50180c05507" satisfied condition "success or failure"
Dec 22 14:45:03.489: INFO: Trying to get logs from node iruya-node pod pod-aacfade2-dbdc-4bfd-a6c1-f50180c05507 container test-container: 
STEP: delete the pod
Dec 22 14:45:03.619: INFO: Waiting for pod pod-aacfade2-dbdc-4bfd-a6c1-f50180c05507 to disappear
Dec 22 14:45:03.646: INFO: Pod pod-aacfade2-dbdc-4bfd-a6c1-f50180c05507 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:45:03.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3711" for this suite.
Dec 22 14:45:09.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:45:09.923: INFO: namespace emptydir-3711 deletion completed in 6.264877449s

• [SLOW TEST:21.047 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
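The (root,0666,tmpfs) variant above writes a mode-0666 file as root into a memory-backed emptyDir and verifies the permissions. A minimal sketch (image and command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox     # illustrative
    command: ["sh", "-c", "echo hi > /mnt/test/f && chmod 0666 /mnt/test/f && ls -l /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory   # tmpfs-backed, matching the tmpfs part of the test name

The pod runs to completion, which satisfies the "success or failure" wait in the log.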
SSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:45:09.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 22 14:45:10.333: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 22 14:45:10.354: INFO: Waiting for terminating namespaces to be deleted...
Dec 22 14:45:10.358: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Dec 22 14:45:10.379: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Dec 22 14:45:10.379: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 22 14:45:10.379: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 22 14:45:10.379: INFO: 	Container weave ready: true, restart count 0
Dec 22 14:45:10.379: INFO: 	Container weave-npc ready: true, restart count 0
Dec 22 14:45:10.379: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Dec 22 14:45:10.394: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Dec 22 14:45:10.394: INFO: 	Container kube-scheduler ready: true, restart count 7
Dec 22 14:45:10.394: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 22 14:45:10.394: INFO: 	Container coredns ready: true, restart count 0
Dec 22 14:45:10.394: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 22 14:45:10.394: INFO: 	Container coredns ready: true, restart count 0
Dec 22 14:45:10.394: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Dec 22 14:45:10.394: INFO: 	Container etcd ready: true, restart count 0
Dec 22 14:45:10.394: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 22 14:45:10.394: INFO: 	Container weave ready: true, restart count 0
Dec 22 14:45:10.394: INFO: 	Container weave-npc ready: true, restart count 0
Dec 22 14:45:10.394: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Dec 22 14:45:10.394: INFO: 	Container kube-controller-manager ready: true, restart count 10
Dec 22 14:45:10.394: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Dec 22 14:45:10.394: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 22 14:45:10.394: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Dec 22 14:45:10.394: INFO: 	Container kube-apiserver ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e2b8b5c86e920f], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:45:11.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9316" for this suite.
Dec 22 14:45:17.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:45:17.872: INFO: namespace sched-pred-9316 deletion completed in 6.42711188s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.949 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
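The FailedScheduling event above is exactly what a pod with an unsatisfiable nodeSelector produces. A minimal sketch (label and image are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod-example
spec:
  nodeSelector:
    example-label: no-node-has-this   # matches no node, so the pod stays Pending
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1       # illustrative image

kubectl describe pod restricted-pod-example would then show a FailedScheduling warning like the "0/2 nodes are available: 2 node(s) didn't match node selector." event logged above.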
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:45:17.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-7274
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-7274
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-7274
Dec 22 14:45:18.171: INFO: Found 0 stateful pods, waiting for 1
Dec 22 14:45:28.182: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Dec 22 14:45:38.184: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with an unhealthy stateful pod
Dec 22 14:45:38.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7274 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 22 14:45:38.961: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 22 14:45:38.961: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 22 14:45:38.961: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 22 14:45:38.973: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 22 14:45:50.353: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 22 14:45:50.353: INFO: Waiting for statefulset status.replicas updated to 0
Dec 22 14:45:50.592: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999067s
Dec 22 14:45:51.607: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.819777482s
Dec 22 14:45:52.632: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.804719057s
Dec 22 14:45:53.650: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.780115632s
Dec 22 14:45:54.657: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.762761288s
Dec 22 14:45:55.666: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.755334342s
Dec 22 14:45:56.673: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.746025523s
Dec 22 14:45:57.681: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.739236165s
Dec 22 14:45:58.689: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.731648459s
Dec 22 14:45:59.698: INFO: Verifying statefulset ss doesn't scale past 1 for another 723.552903ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7274
Dec 22 14:46:00.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7274 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 14:46:02.300: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 22 14:46:02.300: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 22 14:46:02.300: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 22 14:46:02.313: INFO: Found 1 stateful pods, waiting for 3
Dec 22 14:46:13.474: INFO: Found 2 stateful pods, waiting for 3
Dec 22 14:46:22.357: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 22 14:46:22.357: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 22 14:46:22.357: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 22 14:46:32.383: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 22 14:46:32.383: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 22 14:46:32.383: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 22 14:46:42.339: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 22 14:46:42.339: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 22 14:46:42.339: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with an unhealthy stateful pod
Dec 22 14:46:42.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7274 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 22 14:46:42.978: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 22 14:46:42.978: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 22 14:46:42.978: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 22 14:46:42.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7274 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 22 14:46:43.662: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 22 14:46:43.662: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 22 14:46:43.662: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 22 14:46:43.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7274 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 22 14:46:44.657: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 22 14:46:44.657: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 22 14:46:44.657: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 22 14:46:44.657: INFO: Waiting for statefulset status.replicas updated to 0
Dec 22 14:46:44.662: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Dec 22 14:46:54.674: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 22 14:46:54.674: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 22 14:46:54.674: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 22 14:46:54.721: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999403s
Dec 22 14:46:55.732: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.968711096s
Dec 22 14:46:56.756: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.957266367s
Dec 22 14:46:57.779: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.932514869s
Dec 22 14:46:58.787: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.910253429s
Dec 22 14:46:59.797: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.902063766s
Dec 22 14:47:00.806: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.892103711s
Dec 22 14:47:01.825: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.883523416s
Dec 22 14:47:02.836: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.864259848s
Dec 22 14:47:03.857: INFO: Verifying statefulset ss doesn't scale past 3 for another 853.47704ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-7274
Dec 22 14:47:04.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7274 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 14:47:05.457: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 22 14:47:05.457: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 22 14:47:05.457: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 22 14:47:05.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7274 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 14:47:05.994: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 22 14:47:05.994: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 22 14:47:05.994: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 22 14:47:05.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7274 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 14:47:06.443: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 22 14:47:06.443: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 22 14:47:06.443: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 22 14:47:06.443: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 22 14:47:56.476: INFO: Deleting all statefulset in ns statefulset-7274
Dec 22 14:47:56.484: INFO: Scaling statefulset ss to 0
Dec 22 14:47:56.501: INFO: Waiting for statefulset status.replicas updated to 0
Dec 22 14:47:56.505: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:47:56.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7274" for this suite.
Dec 22 14:48:04.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:48:04.772: INFO: namespace statefulset-7274 deletion completed in 8.222640556s

• [SLOW TEST:166.899 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
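The mv commands in this test flip pod readiness by moving index.html out of the web root. A sketch of a StatefulSet with the same name, service, and selector as in the log (image and probe details are illustrative assumptions; the suite uses its own nginx test image):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 3
  podManagementPolicy: OrderedReady   # the default: scale up 0->1->2, down 2->1->0
  selector:
    matchLabels:
      baz: blah
      foo: bar
  template:
    metadata:
      labels:
        baz: blah
        foo: bar
    spec:
      containers:
      - name: nginx
        image: nginx                  # illustrative
        readinessProbe:
          httpGet:
            path: /index.html         # moving the file away fails the probe
            port: 80

With OrderedReady management the controller creates and deletes replicas one at a time and pauses while any pod is unready, which is the "doesn't scale past N" behavior verified above.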
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:48:04.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-4b7c7671-55c5-4fbb-b5e3-4adf9d3e0c98
STEP: Creating a pod to test consume configMaps
Dec 22 14:48:05.035: INFO: Waiting up to 5m0s for pod "pod-configmaps-1e73f90c-9a88-4863-8d87-caeb8f8aa08a" in namespace "configmap-3720" to be "success or failure"
Dec 22 14:48:05.116: INFO: Pod "pod-configmaps-1e73f90c-9a88-4863-8d87-caeb8f8aa08a": Phase="Pending", Reason="", readiness=false. Elapsed: 80.851272ms
Dec 22 14:48:07.124: INFO: Pod "pod-configmaps-1e73f90c-9a88-4863-8d87-caeb8f8aa08a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088186781s
Dec 22 14:48:09.139: INFO: Pod "pod-configmaps-1e73f90c-9a88-4863-8d87-caeb8f8aa08a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103204406s
Dec 22 14:48:11.145: INFO: Pod "pod-configmaps-1e73f90c-9a88-4863-8d87-caeb8f8aa08a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10900464s
Dec 22 14:48:13.152: INFO: Pod "pod-configmaps-1e73f90c-9a88-4863-8d87-caeb8f8aa08a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116783309s
Dec 22 14:48:15.160: INFO: Pod "pod-configmaps-1e73f90c-9a88-4863-8d87-caeb8f8aa08a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.1241898s
Dec 22 14:48:17.167: INFO: Pod "pod-configmaps-1e73f90c-9a88-4863-8d87-caeb8f8aa08a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.131377613s
Dec 22 14:48:19.176: INFO: Pod "pod-configmaps-1e73f90c-9a88-4863-8d87-caeb8f8aa08a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.140650961s
Dec 22 14:48:21.186: INFO: Pod "pod-configmaps-1e73f90c-9a88-4863-8d87-caeb8f8aa08a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.150866067s
STEP: Saw pod success
Dec 22 14:48:21.186: INFO: Pod "pod-configmaps-1e73f90c-9a88-4863-8d87-caeb8f8aa08a" satisfied condition "success or failure"
Dec 22 14:48:21.190: INFO: Trying to get logs from node iruya-node pod pod-configmaps-1e73f90c-9a88-4863-8d87-caeb8f8aa08a container configmap-volume-test: 
STEP: delete the pod
Dec 22 14:48:21.352: INFO: Waiting for pod pod-configmaps-1e73f90c-9a88-4863-8d87-caeb8f8aa08a to disappear
Dec 22 14:48:21.360: INFO: Pod pod-configmaps-1e73f90c-9a88-4863-8d87-caeb8f8aa08a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:48:21.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3720" for this suite.
Dec 22 14:48:27.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:48:27.992: INFO: namespace configmap-3720 deletion completed in 6.626766918s

• [SLOW TEST:23.220 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
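The defaultMode variant above is the ConfigMap analogue of the projected-secret test earlier. A minimal sketch, assuming a ConfigMap named configmap-test-volume with a key data-1 exists (names and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox     # illustrative
    command: ["sh", "-c", "ls -l /etc/cm && cat /etc/cm/data-1"]
    volumeMounts:
    - name: cm-vol
      mountPath: /etc/cm
  volumes:
  - name: cm-vol
    configMap:
      name: configmap-test-volume
      defaultMode: 0400   # every projected file gets mode 0400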
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:48:27.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:48:34.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5793" for this suite.
Dec 22 14:48:40.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:48:41.056: INFO: namespace namespaces-5793 deletion completed in 6.101096755s
STEP: Destroying namespace "nsdeletetest-5488" for this suite.
Dec 22 14:48:41.058: INFO: Namespace nsdeletetest-5488 was already deleted
STEP: Destroying namespace "nsdeletetest-1264" for this suite.
Dec 22 14:48:47.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:48:47.198: INFO: namespace nsdeletetest-1264 deletion completed in 6.140221522s

• [SLOW TEST:19.205 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
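Namespace deletion cascades to the objects inside it, which is what this test verifies for Services. A minimal sketch (names are illustrative):

apiVersion: v1
kind: Namespace
metadata:
  name: nsdeletetest-example
---
apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: nsdeletetest-example
spec:
  ports:
  - port: 80

Deleting the namespace removes test-service along with it; a fresh namespace created afterwards contains no services, which is the final verification step above.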
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:48:47.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-9a17e338-c7de-4c12-a474-f89610c8e5ce
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:48:47.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5542" for this suite.
Dec 22 14:48:53.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:48:53.578: INFO: namespace secrets-5542 deletion completed in 6.216458512s

• [SLOW TEST:6.379 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
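This is a pure validation test: a Secret whose data map contains an empty key is rejected at creation time, so no pod is involved. A minimal sketch of the invalid object (name is illustrative):

apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-example
data:
  "": dmFsdWU=   # base64 "value" under an empty key; the API server rejects this

The create call fails validation (data keys must be non-empty and consist of alphanumerics, '-', '_' or '.'), so nothing is stored.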
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:48:53.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 22 14:48:54.920: INFO: Waiting up to 5m0s for pod "downward-api-6346a4d5-5c23-4bf2-b5c8-d02e2cfabfc2" in namespace "downward-api-1590" to be "success or failure"
Dec 22 14:48:54.980: INFO: Pod "downward-api-6346a4d5-5c23-4bf2-b5c8-d02e2cfabfc2": Phase="Pending", Reason="", readiness=false. Elapsed: 60.670272ms
Dec 22 14:48:57.021: INFO: Pod "downward-api-6346a4d5-5c23-4bf2-b5c8-d02e2cfabfc2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101617402s
Dec 22 14:48:59.030: INFO: Pod "downward-api-6346a4d5-5c23-4bf2-b5c8-d02e2cfabfc2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110294049s
Dec 22 14:49:01.036: INFO: Pod "downward-api-6346a4d5-5c23-4bf2-b5c8-d02e2cfabfc2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116241477s
Dec 22 14:49:03.041: INFO: Pod "downward-api-6346a4d5-5c23-4bf2-b5c8-d02e2cfabfc2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.121312363s
Dec 22 14:49:05.064: INFO: Pod "downward-api-6346a4d5-5c23-4bf2-b5c8-d02e2cfabfc2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.144342001s
Dec 22 14:49:07.071: INFO: Pod "downward-api-6346a4d5-5c23-4bf2-b5c8-d02e2cfabfc2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.151437189s
Dec 22 14:49:09.082: INFO: Pod "downward-api-6346a4d5-5c23-4bf2-b5c8-d02e2cfabfc2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.16252199s
Dec 22 14:49:11.089: INFO: Pod "downward-api-6346a4d5-5c23-4bf2-b5c8-d02e2cfabfc2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.169048991s
STEP: Saw pod success
Dec 22 14:49:11.089: INFO: Pod "downward-api-6346a4d5-5c23-4bf2-b5c8-d02e2cfabfc2" satisfied condition "success or failure"
Dec 22 14:49:11.092: INFO: Trying to get logs from node iruya-node pod downward-api-6346a4d5-5c23-4bf2-b5c8-d02e2cfabfc2 container dapi-container: 
STEP: delete the pod
Dec 22 14:49:11.267: INFO: Waiting for pod downward-api-6346a4d5-5c23-4bf2-b5c8-d02e2cfabfc2 to disappear
Dec 22 14:49:11.418: INFO: Pod downward-api-6346a4d5-5c23-4bf2-b5c8-d02e2cfabfc2 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:49:11.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1590" for this suite.
Dec 22 14:49:17.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:49:17.711: INFO: namespace downward-api-1590 deletion completed in 6.277083021s

• [SLOW TEST:24.133 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
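When a container declares no resource limits, downward-API env vars that reference limits.cpu or limits.memory fall back to the node's allocatable values, which is the behavior asserted above. A minimal sketch (names and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: dapi-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox     # illustrative; note: no resources.limits set
    command: ["sh", "-c", "echo cpu=$CPU_LIMIT mem=$MEMORY_LIMIT"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu      # defaults to node allocatable CPU
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory   # defaults to node allocatable memory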
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:49:17.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 22 14:49:17.920: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8832e1f9-ba47-4c68-ae99-cdba57d7c83f" in namespace "downward-api-625" to be "success or failure"
Dec 22 14:49:18.083: INFO: Pod "downwardapi-volume-8832e1f9-ba47-4c68-ae99-cdba57d7c83f": Phase="Pending", Reason="", readiness=false. Elapsed: 162.63866ms
Dec 22 14:49:20.092: INFO: Pod "downwardapi-volume-8832e1f9-ba47-4c68-ae99-cdba57d7c83f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172094192s
Dec 22 14:49:22.101: INFO: Pod "downwardapi-volume-8832e1f9-ba47-4c68-ae99-cdba57d7c83f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.181378633s
Dec 22 14:49:24.110: INFO: Pod "downwardapi-volume-8832e1f9-ba47-4c68-ae99-cdba57d7c83f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.190158829s
Dec 22 14:49:26.120: INFO: Pod "downwardapi-volume-8832e1f9-ba47-4c68-ae99-cdba57d7c83f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.199629986s
Dec 22 14:49:28.137: INFO: Pod "downwardapi-volume-8832e1f9-ba47-4c68-ae99-cdba57d7c83f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.217294014s
Dec 22 14:49:30.168: INFO: Pod "downwardapi-volume-8832e1f9-ba47-4c68-ae99-cdba57d7c83f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.248208216s
Dec 22 14:49:32.657: INFO: Pod "downwardapi-volume-8832e1f9-ba47-4c68-ae99-cdba57d7c83f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.73671307s
Dec 22 14:49:34.664: INFO: Pod "downwardapi-volume-8832e1f9-ba47-4c68-ae99-cdba57d7c83f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.744215628s
Dec 22 14:49:36.692: INFO: Pod "downwardapi-volume-8832e1f9-ba47-4c68-ae99-cdba57d7c83f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.771878935s
STEP: Saw pod success
Dec 22 14:49:36.692: INFO: Pod "downwardapi-volume-8832e1f9-ba47-4c68-ae99-cdba57d7c83f" satisfied condition "success or failure"
Dec 22 14:49:36.701: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8832e1f9-ba47-4c68-ae99-cdba57d7c83f container client-container: 
STEP: delete the pod
Dec 22 14:49:36.816: INFO: Waiting for pod downwardapi-volume-8832e1f9-ba47-4c68-ae99-cdba57d7c83f to disappear
Dec 22 14:49:36.861: INFO: Pod downwardapi-volume-8832e1f9-ba47-4c68-ae99-cdba57d7c83f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:49:36.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-625" for this suite.
Dec 22 14:49:42.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:49:43.048: INFO: namespace downward-api-625 deletion completed in 6.131991492s

• [SLOW TEST:25.336 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
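
What this test pins down: a downward API volume can expose a container's memory limit as a file, and when no limit is set the kubelet substitutes the node's allocatable memory. A minimal sketch of such a pod, assuming a hypothetical name and a busybox image (the suite uses its own mounttest image):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                     # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # no resources.limits.memory here, so the file holds node allocatable memory
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
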
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:49:43.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 22 14:49:43.411: INFO: Waiting up to 5m0s for pod "pod-ea0c7587-5ae7-463b-8872-59fac5546805" in namespace "emptydir-118" to be "success or failure"
Dec 22 14:49:43.501: INFO: Pod "pod-ea0c7587-5ae7-463b-8872-59fac5546805": Phase="Pending", Reason="", readiness=false. Elapsed: 90.256769ms
Dec 22 14:49:45.509: INFO: Pod "pod-ea0c7587-5ae7-463b-8872-59fac5546805": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0979648s
Dec 22 14:49:47.563: INFO: Pod "pod-ea0c7587-5ae7-463b-8872-59fac5546805": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151448244s
Dec 22 14:49:49.568: INFO: Pod "pod-ea0c7587-5ae7-463b-8872-59fac5546805": Phase="Pending", Reason="", readiness=false. Elapsed: 6.156914197s
Dec 22 14:49:51.576: INFO: Pod "pod-ea0c7587-5ae7-463b-8872-59fac5546805": Phase="Pending", Reason="", readiness=false. Elapsed: 8.165273165s
Dec 22 14:49:53.587: INFO: Pod "pod-ea0c7587-5ae7-463b-8872-59fac5546805": Phase="Pending", Reason="", readiness=false. Elapsed: 10.175412323s
Dec 22 14:49:55.593: INFO: Pod "pod-ea0c7587-5ae7-463b-8872-59fac5546805": Phase="Pending", Reason="", readiness=false. Elapsed: 12.18182958s
Dec 22 14:49:57.599: INFO: Pod "pod-ea0c7587-5ae7-463b-8872-59fac5546805": Phase="Pending", Reason="", readiness=false. Elapsed: 14.188310837s
Dec 22 14:49:59.681: INFO: Pod "pod-ea0c7587-5ae7-463b-8872-59fac5546805": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.270261455s
STEP: Saw pod success
Dec 22 14:49:59.682: INFO: Pod "pod-ea0c7587-5ae7-463b-8872-59fac5546805" satisfied condition "success or failure"
Dec 22 14:49:59.687: INFO: Trying to get logs from node iruya-node pod pod-ea0c7587-5ae7-463b-8872-59fac5546805 container test-container: 
STEP: delete the pod
Dec 22 14:49:59.746: INFO: Waiting for pod pod-ea0c7587-5ae7-463b-8872-59fac5546805 to disappear
Dec 22 14:49:59.881: INFO: Pod pod-ea0c7587-5ae7-463b-8872-59fac5546805 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:49:59.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-118" for this suite.
Dec 22 14:50:05.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:50:06.094: INFO: namespace emptydir-118 deletion completed in 6.204165333s

• [SLOW TEST:23.046 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
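
The emptydir case above combines three knobs: a tmpfs-backed emptyDir (medium: Memory), a non-root security context, and a file created with mode 0666. A rough equivalent with hypothetical names and a busybox image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo            # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                    # any non-root UID
  containers:
  - name: test-container
    image: busybox                     # assumed image
    command: ["sh", "-c", "touch /mnt/volume/f && chmod 0666 /mnt/volume/f && ls -l /mnt/volume"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                   # tmpfs-backed
EOF
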
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:50:06.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 22 14:50:06.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-3114'
Dec 22 14:50:08.905: INFO: stderr: ""
Dec 22 14:50:08.905: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Dec 22 14:50:23.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-3114 -o json'
Dec 22 14:50:24.094: INFO: stderr: ""
Dec 22 14:50:24.094: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2019-12-22T14:50:08Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-3114\",\n        \"resourceVersion\": \"17653345\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-3114/pods/e2e-test-nginx-pod\",\n        \"uid\": \"135c14ac-3f6e-4d2c-9fc0-999b1009ee68\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-b7jvn\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-b7jvn\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-b7jvn\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-22T14:50:08Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-22T14:50:22Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-22T14:50:22Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-22T14:50:08Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://f789f6da142719c41cf4b94251cc8e12f145dab1ab610dd66a6b74c42882b538\",\n                
\"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2019-12-22T14:50:21Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2019-12-22T14:50:08Z\"\n    }\n}\n"
STEP: replace the image in the pod
Dec 22 14:50:24.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3114'
Dec 22 14:50:24.748: INFO: stderr: ""
Dec 22 14:50:24.748: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Dec 22 14:50:24.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-3114'
Dec 22 14:50:37.939: INFO: stderr: ""
Dec 22 14:50:37.939: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:50:37.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3114" for this suite.
Dec 22 14:50:44.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:50:44.194: INFO: namespace kubectl-3114 deletion completed in 6.20399523s

• [SLOW TEST:38.100 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
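
The replace flow above pipes an edited manifest into kubectl replace, which is how the image flips from nginx:1.14-alpine to busybox:1.29 in place. The same steps, assuming the manifest is dumped to a file first:

kubectl get pod e2e-test-nginx-pod -n kubectl-3114 -o yaml > pod.yaml
# edit spec.containers[0].image to docker.io/library/busybox:1.29, then:
kubectl replace -f pod.yaml -n kubectl-3114
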
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:50:44.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Dec 22 14:50:44.385: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3402,SelfLink:/api/v1/namespaces/watch-3402/configmaps/e2e-watch-test-watch-closed,UID:73400fb4-d179-44ec-8902-bfd99d9598fb,ResourceVersion:17653398,Generation:0,CreationTimestamp:2019-12-22 14:50:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 22 14:50:44.385: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3402,SelfLink:/api/v1/namespaces/watch-3402/configmaps/e2e-watch-test-watch-closed,UID:73400fb4-d179-44ec-8902-bfd99d9598fb,ResourceVersion:17653399,Generation:0,CreationTimestamp:2019-12-22 14:50:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Dec 22 14:50:44.420: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3402,SelfLink:/api/v1/namespaces/watch-3402/configmaps/e2e-watch-test-watch-closed,UID:73400fb4-d179-44ec-8902-bfd99d9598fb,ResourceVersion:17653400,Generation:0,CreationTimestamp:2019-12-22 14:50:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 22 14:50:44.420: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3402,SelfLink:/api/v1/namespaces/watch-3402/configmaps/e2e-watch-test-watch-closed,UID:73400fb4-d179-44ec-8902-bfd99d9598fb,ResourceVersion:17653401,Generation:0,CreationTimestamp:2019-12-22 14:50:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:50:44.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3402" for this suite.
Dec 22 14:50:50.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:50:50.698: INFO: namespace watch-3402 deletion completed in 6.172881019s

• [SLOW TEST:6.504 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
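
Resuming a watch is plain API behavior: pass the last observed resourceVersion as a query parameter and the server replays every event after it, which is why the second watch above immediately receives the mutation: 2 MODIFIED event and the DELETED event. A sketch against kubectl proxy, reusing the version from the log:

kubectl proxy --port=8001 &
# resume from the MODIFIED event at resourceVersion 17653399
curl "http://127.0.0.1:8001/api/v1/namespaces/watch-3402/configmaps?watch=1&resourceVersion=17653399"
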
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:50:50.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Dec 22 14:51:07.892: INFO: 10 pods remaining
Dec 22 14:51:07.893: INFO: 10 pods have nil DeletionTimestamp
Dec 22 14:51:07.893: INFO: 
Dec 22 14:51:08.893: INFO: 9 pods remaining
Dec 22 14:51:08.893: INFO: 0 pods have nil DeletionTimestamp
Dec 22 14:51:08.893: INFO: 
STEP: Gathering metrics
W1222 14:51:09.939037       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 22 14:51:09.939: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:51:09.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4529" for this suite.
Dec 22 14:51:32.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:51:32.248: INFO: namespace gc-4529 deletion completed in 22.302985419s

• [SLOW TEST:41.550 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
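
Keeping the rc around until its pods are gone is foreground cascading deletion: the deleteOptions carry propagationPolicy: Foreground, the rc gets a foregroundDeletion finalizer, and it is only removed once its dependents are deleted. Issued against the raw API (the rc name here is a placeholder):

kubectl proxy --port=8001 &
curl -X DELETE \
  -H "Content-Type: application/json" \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
  "http://127.0.0.1:8001/api/v1/namespaces/gc-4529/replicationcontrollers/simpletest.rc"
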
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:51:32.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-mmg9
STEP: Creating a pod to test atomic-volume-subpath
Dec 22 14:51:32.553: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-mmg9" in namespace "subpath-6509" to be "success or failure"
Dec 22 14:51:32.686: INFO: Pod "pod-subpath-test-secret-mmg9": Phase="Pending", Reason="", readiness=false. Elapsed: 132.484745ms
Dec 22 14:51:34.699: INFO: Pod "pod-subpath-test-secret-mmg9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145547118s
Dec 22 14:51:36.708: INFO: Pod "pod-subpath-test-secret-mmg9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154782429s
Dec 22 14:51:38.727: INFO: Pod "pod-subpath-test-secret-mmg9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.173894245s
Dec 22 14:51:40.733: INFO: Pod "pod-subpath-test-secret-mmg9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.179965457s
Dec 22 14:51:42.748: INFO: Pod "pod-subpath-test-secret-mmg9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.194993542s
Dec 22 14:51:44.758: INFO: Pod "pod-subpath-test-secret-mmg9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.204444052s
Dec 22 14:51:46.765: INFO: Pod "pod-subpath-test-secret-mmg9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.211943994s
Dec 22 14:51:48.774: INFO: Pod "pod-subpath-test-secret-mmg9": Phase="Running", Reason="", readiness=true. Elapsed: 16.220469024s
Dec 22 14:51:50.780: INFO: Pod "pod-subpath-test-secret-mmg9": Phase="Running", Reason="", readiness=true. Elapsed: 18.227136685s
Dec 22 14:51:52.791: INFO: Pod "pod-subpath-test-secret-mmg9": Phase="Running", Reason="", readiness=true. Elapsed: 20.237665731s
Dec 22 14:51:54.802: INFO: Pod "pod-subpath-test-secret-mmg9": Phase="Running", Reason="", readiness=true. Elapsed: 22.248780705s
Dec 22 14:51:56.814: INFO: Pod "pod-subpath-test-secret-mmg9": Phase="Running", Reason="", readiness=true. Elapsed: 24.261183513s
Dec 22 14:51:58.826: INFO: Pod "pod-subpath-test-secret-mmg9": Phase="Running", Reason="", readiness=true. Elapsed: 26.272705537s
Dec 22 14:52:00.833: INFO: Pod "pod-subpath-test-secret-mmg9": Phase="Running", Reason="", readiness=true. Elapsed: 28.280037183s
Dec 22 14:52:02.842: INFO: Pod "pod-subpath-test-secret-mmg9": Phase="Running", Reason="", readiness=true. Elapsed: 30.288980689s
Dec 22 14:52:04.852: INFO: Pod "pod-subpath-test-secret-mmg9": Phase="Running", Reason="", readiness=true. Elapsed: 32.29890448s
Dec 22 14:52:06.863: INFO: Pod "pod-subpath-test-secret-mmg9": Phase="Running", Reason="", readiness=true. Elapsed: 34.309838701s
Dec 22 14:52:08.886: INFO: Pod "pod-subpath-test-secret-mmg9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.332630988s
STEP: Saw pod success
Dec 22 14:52:08.886: INFO: Pod "pod-subpath-test-secret-mmg9" satisfied condition "success or failure"
Dec 22 14:52:08.895: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-mmg9 container test-container-subpath-secret-mmg9: 
STEP: delete the pod
Dec 22 14:52:09.154: INFO: Waiting for pod pod-subpath-test-secret-mmg9 to disappear
Dec 22 14:52:09.209: INFO: Pod pod-subpath-test-secret-mmg9 no longer exists
STEP: Deleting pod pod-subpath-test-secret-mmg9
Dec 22 14:52:09.209: INFO: Deleting pod "pod-subpath-test-secret-mmg9" in namespace "subpath-6509"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:52:09.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6509" for this suite.
Dec 22 14:52:15.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:52:15.493: INFO: namespace subpath-6509 deletion completed in 6.274177614s

• [SLOW TEST:43.244 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
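
The atomic-writer subpath case mounts a single key of a secret through subPath, so the container sees one file rather than the whole projected directory. A minimal sketch (secret, pod, and key names hypothetical):

kubectl create secret generic subpath-demo-secret --from-literal=secret-key=value
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-secret-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                     # assumed image
    command: ["cat", "/mnt/secret-key"]
    volumeMounts:
    - name: secret-vol
      mountPath: /mnt/secret-key
      subPath: secret-key              # mount one key out of the volume
  volumes:
  - name: secret-vol
    secret:
      secretName: subpath-demo-secret
EOF
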
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:52:15.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6998
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-6998
STEP: Creating statefulset with conflicting port in namespace statefulset-6998
STEP: Waiting until pod test-pod starts running in namespace statefulset-6998
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-6998
Dec 22 14:52:36.214: INFO: Observed stateful pod in namespace: statefulset-6998, name: ss-0, uid: 7afc330d-dc03-422a-8d1c-667722132062, status phase: Pending. Waiting for statefulset controller to delete it.
Dec 22 14:52:36.495: INFO: Observed stateful pod in namespace: statefulset-6998, name: ss-0, uid: 7afc330d-dc03-422a-8d1c-667722132062, status phase: Failed. Waiting for statefulset controller to delete it.
Dec 22 14:52:36.520: INFO: Observed stateful pod in namespace: statefulset-6998, name: ss-0, uid: 7afc330d-dc03-422a-8d1c-667722132062, status phase: Failed. Waiting for statefulset controller to delete it.
Dec 22 14:52:36.538: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6998
STEP: Removing pod with conflicting port in namespace statefulset-6998
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-6998 and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 22 14:52:58.898: INFO: Deleting all statefulset in ns statefulset-6998
Dec 22 14:52:58.903: INFO: Scaling statefulset ss to 0
Dec 22 14:53:08.947: INFO: Waiting for statefulset status.replicas updated to 0
Dec 22 14:53:08.951: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:53:08.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6998" for this suite.
Dec 22 14:53:17.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:53:17.341: INFO: namespace statefulset-6998 deletion completed in 8.366699745s

• [SLOW TEST:61.848 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
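
The eviction scenario is driven by a host-port clash: a standalone test-pod pins a hostPort on the node, the StatefulSet pod that wants the same port lands in phase Failed, and the controller keeps deleting and recreating ss-0 until the conflicting pod is removed. The shape of the stateful side, with the port and most values illustrative (only serviceName test and name ss match the log):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels:
      app: ss-demo                     # illustrative labels
  template:
    metadata:
      labels:
        app: ss-demo
    spec:
      containers:
      - name: webserver
        image: nginx:1.14-alpine       # assumed image
        ports:
        - containerPort: 80
          hostPort: 21017              # illustrative; must equal the standalone pod's hostPort
EOF
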
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:53:17.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Dec 22 14:53:17.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6816'
Dec 22 14:53:18.098: INFO: stderr: ""
Dec 22 14:53:18.098: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 22 14:53:18.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6816'
Dec 22 14:53:18.274: INFO: stderr: ""
Dec 22 14:53:18.274: INFO: stdout: "update-demo-nautilus-b5vvv update-demo-nautilus-x4s6g "
Dec 22 14:53:18.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b5vvv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6816'
Dec 22 14:53:18.471: INFO: stderr: ""
Dec 22 14:53:18.471: INFO: stdout: ""
Dec 22 14:53:18.471: INFO: update-demo-nautilus-b5vvv is created but not running
Dec 22 14:53:23.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6816'
Dec 22 14:53:23.784: INFO: stderr: ""
Dec 22 14:53:23.784: INFO: stdout: "update-demo-nautilus-b5vvv update-demo-nautilus-x4s6g "
Dec 22 14:53:23.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b5vvv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6816'
Dec 22 14:53:24.065: INFO: stderr: ""
Dec 22 14:53:24.065: INFO: stdout: ""
Dec 22 14:53:24.065: INFO: update-demo-nautilus-b5vvv is created but not running
Dec 22 14:53:29.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6816'
Dec 22 14:53:30.398: INFO: stderr: ""
Dec 22 14:53:30.399: INFO: stdout: "update-demo-nautilus-b5vvv update-demo-nautilus-x4s6g "
Dec 22 14:53:30.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b5vvv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6816'
Dec 22 14:53:30.923: INFO: stderr: ""
Dec 22 14:53:30.923: INFO: stdout: ""
Dec 22 14:53:30.923: INFO: update-demo-nautilus-b5vvv is created but not running
Dec 22 14:53:35.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6816'
Dec 22 14:53:36.135: INFO: stderr: ""
Dec 22 14:53:36.135: INFO: stdout: "update-demo-nautilus-b5vvv update-demo-nautilus-x4s6g "
Dec 22 14:53:36.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b5vvv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6816'
Dec 22 14:53:36.339: INFO: stderr: ""
Dec 22 14:53:36.339: INFO: stdout: ""
Dec 22 14:53:36.339: INFO: update-demo-nautilus-b5vvv is created but not running
Dec 22 14:53:41.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6816'
Dec 22 14:53:41.470: INFO: stderr: ""
Dec 22 14:53:41.470: INFO: stdout: "update-demo-nautilus-b5vvv update-demo-nautilus-x4s6g "
Dec 22 14:53:41.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b5vvv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6816'
Dec 22 14:53:41.547: INFO: stderr: ""
Dec 22 14:53:41.547: INFO: stdout: "true"
Dec 22 14:53:41.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b5vvv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6816'
Dec 22 14:53:41.636: INFO: stderr: ""
Dec 22 14:53:41.636: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 22 14:53:41.637: INFO: validating pod update-demo-nautilus-b5vvv
Dec 22 14:53:41.715: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 22 14:53:41.715: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 22 14:53:41.715: INFO: update-demo-nautilus-b5vvv is verified up and running
Dec 22 14:53:41.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x4s6g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6816'
Dec 22 14:53:41.872: INFO: stderr: ""
Dec 22 14:53:41.872: INFO: stdout: "true"
Dec 22 14:53:41.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x4s6g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6816'
Dec 22 14:53:41.951: INFO: stderr: ""
Dec 22 14:53:41.951: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 22 14:53:41.951: INFO: validating pod update-demo-nautilus-x4s6g
Dec 22 14:53:41.981: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 22 14:53:41.981: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 22 14:53:41.981: INFO: update-demo-nautilus-x4s6g is verified up and running
STEP: using delete to clean up resources
Dec 22 14:53:41.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6816'
Dec 22 14:53:42.080: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 22 14:53:42.080: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 22 14:53:42.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6816'
Dec 22 14:53:42.247: INFO: stderr: "No resources found.\n"
Dec 22 14:53:42.247: INFO: stdout: ""
Dec 22 14:53:42.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6816 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 22 14:53:42.523: INFO: stderr: ""
Dec 22 14:53:42.524: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:53:42.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6816" for this suite.
Dec 22 14:54:06.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:54:06.757: INFO: namespace kubectl-6816 deletion completed in 24.156668584s

• [SLOW TEST:49.415 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
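
The manifest that kubectl create receives on stdin is the classic update-demo replication controller; this reconstruction uses the names, labels, and image visible in the log, with a replica count matching the two pods listed (other fields may differ from the real fixture):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80
EOF
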
SS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:54:06.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Dec 22 14:54:22.970: INFO: Pod pod-hostip-0f07ec71-14e0-4672-96f0-b8d9a319bd95 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:54:22.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4183" for this suite.
Dec 22 14:55:03.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:55:03.093: INFO: namespace pods-4183 deletion completed in 40.118340417s

• [SLOW TEST:56.336 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
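
Reading the host IP the test asserts on is a one-line jsonpath query:

kubectl get pod pod-hostip-0f07ec71-14e0-4672-96f0-b8d9a319bd95 -n pods-4183 \
  -o jsonpath='{.status.hostIP}'      # prints 10.96.3.65 on this cluster
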
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:55:03.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 22 14:55:17.988: INFO: Successfully updated pod "annotationupdatee828c8da-4bb4-4216-8bf2-8ad24805c8a5"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:55:20.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8603" for this suite.
Dec 22 14:56:00.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:56:00.347: INFO: namespace downward-api-8603 deletion completed in 40.179858237s

• [SLOW TEST:57.253 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
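
Annotations projected through a downward API volume are rewritten by the kubelet when pod metadata changes, which is what the successful update above exercises. A sketch of the moving parts (pod name, annotation, and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo          # hypothetical name
  annotations:
    build: one
spec:
  containers:
  - name: client-container
    image: busybox                     # assumed image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF
# change the annotation; the mounted file is updated in place
kubectl annotate pod annotationupdate-demo build=two --overwrite
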
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:56:00.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-6538/configmap-test-8437f4cd-a59d-446d-b18e-2c2f187d3fc9
STEP: Creating a pod to test consume configMaps
Dec 22 14:56:00.633: INFO: Waiting up to 5m0s for pod "pod-configmaps-1708c757-fdce-4811-942f-0ef307430d55" in namespace "configmap-6538" to be "success or failure"
Dec 22 14:56:00.646: INFO: Pod "pod-configmaps-1708c757-fdce-4811-942f-0ef307430d55": Phase="Pending", Reason="", readiness=false. Elapsed: 13.297427ms
Dec 22 14:56:02.653: INFO: Pod "pod-configmaps-1708c757-fdce-4811-942f-0ef307430d55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020031992s
Dec 22 14:56:04.665: INFO: Pod "pod-configmaps-1708c757-fdce-4811-942f-0ef307430d55": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032641999s
Dec 22 14:56:06.682: INFO: Pod "pod-configmaps-1708c757-fdce-4811-942f-0ef307430d55": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048812168s
Dec 22 14:56:08.689: INFO: Pod "pod-configmaps-1708c757-fdce-4811-942f-0ef307430d55": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056233351s
Dec 22 14:56:10.697: INFO: Pod "pod-configmaps-1708c757-fdce-4811-942f-0ef307430d55": Phase="Pending", Reason="", readiness=false. Elapsed: 10.063935455s
Dec 22 14:56:12.710: INFO: Pod "pod-configmaps-1708c757-fdce-4811-942f-0ef307430d55": Phase="Pending", Reason="", readiness=false. Elapsed: 12.077230087s
Dec 22 14:56:14.714: INFO: Pod "pod-configmaps-1708c757-fdce-4811-942f-0ef307430d55": Phase="Pending", Reason="", readiness=false. Elapsed: 14.081471726s
Dec 22 14:56:17.099: INFO: Pod "pod-configmaps-1708c757-fdce-4811-942f-0ef307430d55": Phase="Pending", Reason="", readiness=false. Elapsed: 16.465811261s
Dec 22 14:56:19.106: INFO: Pod "pod-configmaps-1708c757-fdce-4811-942f-0ef307430d55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.473779668s
STEP: Saw pod success
Dec 22 14:56:19.107: INFO: Pod "pod-configmaps-1708c757-fdce-4811-942f-0ef307430d55" satisfied condition "success or failure"
Dec 22 14:56:19.113: INFO: Trying to get logs from node iruya-node pod pod-configmaps-1708c757-fdce-4811-942f-0ef307430d55 container env-test: 
STEP: delete the pod
Dec 22 14:56:19.193: INFO: Waiting for pod pod-configmaps-1708c757-fdce-4811-942f-0ef307430d55 to disappear
Dec 22 14:56:19.289: INFO: Pod pod-configmaps-1708c757-fdce-4811-942f-0ef307430d55 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:56:19.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6538" for this suite.
Dec 22 14:56:25.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:56:25.490: INFO: namespace configmap-6538 deletion completed in 6.188870331s

• [SLOW TEST:25.142 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
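
Consuming a ConfigMap key as an environment variable, the way the env-test container does above (all names hypothetical):

kubectl create configmap configmap-env-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-env-demo         # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox                     # assumed image
    command: ["sh", "-c", "echo $CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-env-demo
          key: data-1
EOF
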
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:56:25.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 22 14:56:25.862: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"927ed2b9-9e2f-4ce1-b965-b52c88481522", Controller:(*bool)(0xc0030a5812), BlockOwnerDeletion:(*bool)(0xc0030a5813)}}
Dec 22 14:56:25.886: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"5f765f47-1663-4d99-aaa0-9e513798d3f9", Controller:(*bool)(0xc002ff0e3a), BlockOwnerDeletion:(*bool)(0xc002ff0e3b)}}
Dec 22 14:56:25.904: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"2420acc1-9acd-4278-a675-b0ff781ed35b", Controller:(*bool)(0xc0030a59fa), BlockOwnerDeletion:(*bool)(0xc0030a59fb)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 14:56:31.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5161" for this suite.
Dec 22 14:56:37.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 14:56:37.212: INFO: namespace gc-5161 deletion completed in 6.179972773s

• [SLOW TEST:11.722 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
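
The circle is built purely from ownerReferences: pod1 is owned by pod3, pod2 by pod1, pod3 by pod2, and the collector still converges because every member's owner is itself being deleted. The metadata shape on one member, with the UID copied from the log (owner UIDs must come from the live objects):

metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: 927ed2b9-9e2f-4ce1-b965-b52c88481522   # from the log above
    controller: true
    blockOwnerDeletion: true
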
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 14:56:37.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-82fc115d-195d-46f6-91df-1dd6ffcf1f3b in namespace container-probe-138
Dec 22 14:56:55.761: INFO: Started pod test-webserver-82fc115d-195d-46f6-91df-1dd6ffcf1f3b in namespace container-probe-138
STEP: checking the pod's current state and verifying that restartCount is present
Dec 22 14:56:55.765: INFO: Initial restart count of pod test-webserver-82fc115d-195d-46f6-91df-1dd6ffcf1f3b is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 15:00:57.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-138" for this suite.
Dec 22 15:01:03.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 15:01:03.673: INFO: namespace container-probe-138 deletion completed in 6.274231133s

• [SLOW TEST:266.461 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
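
A pod like the test-webserver above carries an HTTP liveness probe and is expected to keep restartCount at 0 for the whole observation window. A sketch with an assumed nginx image probed on / (the suite's own webserver image answers on /healthz):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver-demo            # hypothetical name
spec:
  containers:
  - name: test-webserver
    image: nginx:1.14-alpine           # assumed image; serves / on port 80
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15
      failureThreshold: 3
EOF
# a healthy probe never increments restartCount:
kubectl get pod test-webserver-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'
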
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 15:01:03.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-a8f67a3b-65ea-4c97-b1de-bba274cafcec
STEP: Creating a pod to test consume secrets
Dec 22 15:01:03.814: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0730edd8-5b96-45d5-a4cf-b2cd8095a282" in namespace "projected-1591" to be "success or failure"
Dec 22 15:01:03.819: INFO: Pod "pod-projected-secrets-0730edd8-5b96-45d5-a4cf-b2cd8095a282": Phase="Pending", Reason="", readiness=false. Elapsed: 4.71042ms
Dec 22 15:01:05.837: INFO: Pod "pod-projected-secrets-0730edd8-5b96-45d5-a4cf-b2cd8095a282": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022351326s
Dec 22 15:01:07.842: INFO: Pod "pod-projected-secrets-0730edd8-5b96-45d5-a4cf-b2cd8095a282": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027577716s
Dec 22 15:01:09.852: INFO: Pod "pod-projected-secrets-0730edd8-5b96-45d5-a4cf-b2cd8095a282": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037268316s
Dec 22 15:01:11.867: INFO: Pod "pod-projected-secrets-0730edd8-5b96-45d5-a4cf-b2cd8095a282": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053171066s
Dec 22 15:01:13.884: INFO: Pod "pod-projected-secrets-0730edd8-5b96-45d5-a4cf-b2cd8095a282": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069359163s
STEP: Saw pod success
Dec 22 15:01:13.884: INFO: Pod "pod-projected-secrets-0730edd8-5b96-45d5-a4cf-b2cd8095a282" satisfied condition "success or failure"
Dec 22 15:01:13.891: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-0730edd8-5b96-45d5-a4cf-b2cd8095a282 container projected-secret-volume-test: 
STEP: delete the pod
Dec 22 15:01:13.970: INFO: Waiting for pod pod-projected-secrets-0730edd8-5b96-45d5-a4cf-b2cd8095a282 to disappear
Dec 22 15:01:13.979: INFO: Pod pod-projected-secrets-0730edd8-5b96-45d5-a4cf-b2cd8095a282 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 15:01:13.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1591" for this suite.
Dec 22 15:01:20.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 15:01:20.148: INFO: namespace projected-1591 deletion completed in 6.161331955s

• [SLOW TEST:16.474 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
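
Projected secret volumes take their file permissions from defaultMode, which is the knob this test pins down. A sketch with illustrative names and mode:

kubectl create secret generic projected-demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox                     # assumed image
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400                # files appear with mode 0400
      sources:
      - secret:
          name: projected-demo-secret
EOF
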
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 15:01:20.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 22 15:01:23.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-1980'
Dec 22 15:01:25.439: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 22 15:01:25.439: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Dec 22 15:01:25.459: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-sdbqx]
Dec 22 15:01:25.459: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-sdbqx" in namespace "kubectl-1980" to be "running and ready"
Dec 22 15:01:25.461: INFO: Pod "e2e-test-nginx-rc-sdbqx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041837ms
Dec 22 15:01:27.470: INFO: Pod "e2e-test-nginx-rc-sdbqx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010471791s
Dec 22 15:01:29.483: INFO: Pod "e2e-test-nginx-rc-sdbqx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023340899s
Dec 22 15:01:31.489: INFO: Pod "e2e-test-nginx-rc-sdbqx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030115501s
Dec 22 15:01:33.495: INFO: Pod "e2e-test-nginx-rc-sdbqx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.035817417s
Dec 22 15:01:35.501: INFO: Pod "e2e-test-nginx-rc-sdbqx": Phase="Running", Reason="", readiness=true. Elapsed: 10.041890099s
Dec 22 15:01:35.501: INFO: Pod "e2e-test-nginx-rc-sdbqx" satisfied condition "running and ready"
Dec 22 15:01:35.501: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-sdbqx]
Dec 22 15:01:35.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-1980'
Dec 22 15:01:35.680: INFO: stderr: ""
Dec 22 15:01:35.680: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Dec 22 15:01:35.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-1980'
Dec 22 15:01:35.787: INFO: stderr: ""
Dec 22 15:01:35.787: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 15:01:35.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1980" for this suite.
Dec 22 15:01:57.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 15:01:57.967: INFO: namespace kubectl-1980 deletion completed in 22.173718925s

• [SLOW TEST:37.819 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
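
The deprecation warning captured above already names the replacements (--generator=run-pod/v1 or kubectl create). For reference, the flow this test drives can be replayed by hand roughly as follows; the run=e2e-test-nginx-rc selector is the label kubectl run attaches by default, and the rest is taken from the commands quoted in the log.

kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
kubectl get rc e2e-test-nginx-rc             # the replication controller itself
kubectl get pods -l run=e2e-test-nginx-rc    # the pod it controls
kubectl logs rc/e2e-test-nginx-rc            # logs addressed through the controller
kubectl delete rc e2e-test-nginx-rc          # cleanup, mirroring the AfterEach
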
SSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 15:01:57.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 22 15:01:58.033: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Dec 22 15:01:58.067: INFO: Pod name sample-pod: Found 0 pods out of 1
Dec 22 15:02:03.086: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 22 15:02:07.097: INFO: Creating deployment "test-rolling-update-deployment"
Dec 22 15:02:07.104: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Dec 22 15:02:07.119: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Dec 22 15:02:09.138: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Dec 22 15:02:09.141: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623727, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623727, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623727, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623727, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 15:02:11.145: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623727, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623727, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623727, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623727, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 15:02:13.149: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623727, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623727, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623727, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623727, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 15:02:15.152: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623727, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623727, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623727, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623727, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 15:02:17.149: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 22 15:02:17.165: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-1846,SelfLink:/apis/apps/v1/namespaces/deployment-1846/deployments/test-rolling-update-deployment,UID:7f9b8d52-505c-444b-8963-a0857a2fd502,ResourceVersion:17654902,Generation:1,CreationTimestamp:2019-12-22 15:02:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-22 15:02:07 +0000 UTC 2019-12-22 15:02:07 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-22 15:02:15 +0000 UTC 2019-12-22 15:02:07 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 22 15:02:17.172: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-1846,SelfLink:/apis/apps/v1/namespaces/deployment-1846/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:ac6e8968-2df3-4cd4-b34d-9c7cabdeb181,ResourceVersion:17654891,Generation:1,CreationTimestamp:2019-12-22 15:02:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 7f9b8d52-505c-444b-8963-a0857a2fd502 0xc002e50de7 0xc002e50de8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 22 15:02:17.172: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Dec 22 15:02:17.172: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-1846,SelfLink:/apis/apps/v1/namespaces/deployment-1846/replicasets/test-rolling-update-controller,UID:b7febea3-f46a-4c56-a595-1cdd24dc19f8,ResourceVersion:17654900,Generation:2,CreationTimestamp:2019-12-22 15:01:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 7f9b8d52-505c-444b-8963-a0857a2fd502 0xc002e50cf7 0xc002e50cf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 22 15:02:17.179: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-xx84g" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-xx84g,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-1846,SelfLink:/api/v1/namespaces/deployment-1846/pods/test-rolling-update-deployment-79f6b9d75c-xx84g,UID:2be079dc-9eb1-41b3-b25e-ef813bad7c3a,ResourceVersion:17654890,Generation:0,CreationTimestamp:2019-12-22 15:02:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c ac6e8968-2df3-4cd4-b34d-9c7cabdeb181 0xc00212f1d7 0xc00212f1d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tzbtc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tzbtc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-tzbtc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00212f290} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00212f2b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 15:02:07 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 15:02:15 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 15:02:15 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 15:02:07 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-22 15:02:07 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-22 15:02:14 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://49dc9f502a77555cb4c111ae1b6fef91e31c34caec97a5d8b80b8b9910418e28}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 15:02:17.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1846" for this suite.
Dec 22 15:02:23.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 15:02:23.321: INFO: namespace deployment-1846 deletion completed in 6.134517986s

• [SLOW TEST:25.353 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
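
The deployment dump above already contains the relevant knobs; boiled down to a manifest, the same shape looks roughly like this (selector, strategy values, and image copied from the dump, everything else kept minimal):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod          # same selector as the adopted replica set
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
kubectl rollout status deployment/test-rolling-update-deployment
kubectl get rs -l name=sample-pod   # old RS scaled to 0, new RS owns the running pod
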
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 15:02:23.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Dec 22 15:02:23.460: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Dec 22 15:02:24.180: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Dec 22 15:02:26.593: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623744, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623744, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623744, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623744, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 15:02:28.601: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623744, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623744, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623744, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623744, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 15:02:30.608: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623744, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623744, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623744, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623744, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 15:02:32.608: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623744, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623744, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623744, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623744, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 15:02:34.615: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623744, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623744, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623744, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712623744, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 15:02:42.248: INFO: Waited 5.626200269s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 15:02:43.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-7264" for this suite.
Dec 22 15:02:51.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 15:02:51.629: INFO: namespace aggregator-7264 deletion completed in 8.210088444s

• [SLOW TEST:28.308 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
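
Registering an aggregated API comes down to an APIService object pointing at an in-cluster Service that fronts the extension server. A minimal sketch follows; the wardle.k8s.io group and sample-api service name follow the upstream sample-apiserver and are assumptions, not values read from this run, and a real registration would supply caBundle rather than skip TLS verification.

kubectl apply -f - <<'EOF'
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.k8s.io
spec:
  group: wardle.k8s.io
  version: v1alpha1
  service:
    name: sample-api
    namespace: aggregator-7264
  insecureSkipTLSVerify: true   # illustration only; use caBundle in practice
  groupPriorityMinimum: 2000
  versionPriority: 200
EOF
kubectl get apiservice v1alpha1.wardle.k8s.io   # Available=True once the backend answers
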
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 15:02:51.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 22 15:02:51.772: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f700596d-85cf-42ec-8b9b-ca25b6832c25" in namespace "downward-api-9002" to be "success or failure"
Dec 22 15:02:51.780: INFO: Pod "downwardapi-volume-f700596d-85cf-42ec-8b9b-ca25b6832c25": Phase="Pending", Reason="", readiness=false. Elapsed: 8.002905ms
Dec 22 15:02:53.799: INFO: Pod "downwardapi-volume-f700596d-85cf-42ec-8b9b-ca25b6832c25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026908263s
Dec 22 15:02:55.811: INFO: Pod "downwardapi-volume-f700596d-85cf-42ec-8b9b-ca25b6832c25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039059061s
Dec 22 15:02:57.825: INFO: Pod "downwardapi-volume-f700596d-85cf-42ec-8b9b-ca25b6832c25": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052433914s
Dec 22 15:02:59.832: INFO: Pod "downwardapi-volume-f700596d-85cf-42ec-8b9b-ca25b6832c25": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060141626s
Dec 22 15:03:01.840: INFO: Pod "downwardapi-volume-f700596d-85cf-42ec-8b9b-ca25b6832c25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068014376s
STEP: Saw pod success
Dec 22 15:03:01.840: INFO: Pod "downwardapi-volume-f700596d-85cf-42ec-8b9b-ca25b6832c25" satisfied condition "success or failure"
Dec 22 15:03:01.845: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f700596d-85cf-42ec-8b9b-ca25b6832c25 container client-container: 
STEP: delete the pod
Dec 22 15:03:01.926: INFO: Waiting for pod downwardapi-volume-f700596d-85cf-42ec-8b9b-ca25b6832c25 to disappear
Dec 22 15:03:01.941: INFO: Pod downwardapi-volume-f700596d-85cf-42ec-8b9b-ca25b6832c25 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 15:03:01.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9002" for this suite.
Dec 22 15:03:08.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 15:03:08.126: INFO: namespace downward-api-9002 deletion completed in 6.177422181s

• [SLOW TEST:16.497 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
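
What this test asserts is that limits.cpu exposed through a downward API volume falls back to the node's allocatable CPU when the container declares no CPU limit. A minimal sketch; the busybox image and file path are illustrative, while the client-container name matches the log above.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-cpu
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]   # note: no resources.limits.cpu set
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
EOF
kubectl logs downwardapi-volume-cpu   # prints the node-allocatable CPU, not a pod limit
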
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 15:03:08.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 22 15:03:08.277: INFO: Waiting up to 5m0s for pod "pod-ea8df8fd-bbb5-4eec-bfe3-b26be524903a" in namespace "emptydir-6601" to be "success or failure"
Dec 22 15:03:08.315: INFO: Pod "pod-ea8df8fd-bbb5-4eec-bfe3-b26be524903a": Phase="Pending", Reason="", readiness=false. Elapsed: 37.489721ms
Dec 22 15:03:10.328: INFO: Pod "pod-ea8df8fd-bbb5-4eec-bfe3-b26be524903a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050194234s
Dec 22 15:03:12.333: INFO: Pod "pod-ea8df8fd-bbb5-4eec-bfe3-b26be524903a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055241527s
Dec 22 15:03:14.340: INFO: Pod "pod-ea8df8fd-bbb5-4eec-bfe3-b26be524903a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06301821s
Dec 22 15:03:16.346: INFO: Pod "pod-ea8df8fd-bbb5-4eec-bfe3-b26be524903a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068200449s
Dec 22 15:03:18.357: INFO: Pod "pod-ea8df8fd-bbb5-4eec-bfe3-b26be524903a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.079853391s
STEP: Saw pod success
Dec 22 15:03:18.357: INFO: Pod "pod-ea8df8fd-bbb5-4eec-bfe3-b26be524903a" satisfied condition "success or failure"
Dec 22 15:03:18.361: INFO: Trying to get logs from node iruya-node pod pod-ea8df8fd-bbb5-4eec-bfe3-b26be524903a container test-container: 
STEP: delete the pod
Dec 22 15:03:18.415: INFO: Waiting for pod pod-ea8df8fd-bbb5-4eec-bfe3-b26be524903a to disappear
Dec 22 15:03:18.438: INFO: Pod pod-ea8df8fd-bbb5-4eec-bfe3-b26be524903a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 15:03:18.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6601" for this suite.
Dec 22 15:03:24.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 15:03:24.693: INFO: namespace emptydir-6601 deletion completed in 6.247186322s

• [SLOW TEST:16.567 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
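
The (root,0644,default) triple in the test name decodes as: write as root, expect file mode 0644, on the default medium (node disk rather than tmpfs). A minimal equivalent pod, with illustrative image and paths:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0644
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "echo content > /test-volume/f && chmod 0644 /test-volume/f && stat -c '%U %a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}        # default medium; 'medium: Memory' would back it with tmpfs
EOF
kubectl logs pod-emptydir-0644   # expect: root 644
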
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 15:03:24.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 15:04:24.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-137" for this suite.
Dec 22 15:04:46.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 15:04:46.973: INFO: namespace container-probe-137 deletion completed in 22.156134154s

• [SLOW TEST:82.280 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
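
The point of this test is the asymmetry between probe types: a readinessProbe that keeps failing leaves the pod permanently NotReady (and out of any Service endpoints) but, unlike a livenessProbe, never causes a restart. A minimal sketch with an illustrative image and probe:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: never-ready
spec:
  containers:
  - name: never-ready
    image: busybox:1.29
    command: ["sh", "-c", "sleep 600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails, so the container never turns Ready
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
kubectl get pod never-ready   # READY stays 0/1 while RESTARTS stays 0
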
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 15:04:46.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-5755
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5755 to expose endpoints map[]
Dec 22 15:04:47.099: INFO: Get endpoints failed (3.711952ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Dec 22 15:04:48.125: INFO: successfully validated that service endpoint-test2 in namespace services-5755 exposes endpoints map[] (1.030261232s elapsed)
STEP: Creating pod pod1 in namespace services-5755
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5755 to expose endpoints map[pod1:[80]]
Dec 22 15:04:52.236: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.086132429s elapsed, will retry)
Dec 22 15:04:57.319: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (9.169649778s elapsed, will retry)
Dec 22 15:04:58.346: INFO: successfully validated that service endpoint-test2 in namespace services-5755 exposes endpoints map[pod1:[80]] (10.196629228s elapsed)
STEP: Creating pod pod2 in namespace services-5755
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5755 to expose endpoints map[pod1:[80] pod2:[80]]
Dec 22 15:05:03.541: INFO: Unexpected endpoints: found map[4ef53b98-82a4-4689-a761-8cfa144991ec:[80]], expected map[pod1:[80] pod2:[80]] (5.18351972s elapsed, will retry)
Dec 22 15:05:07.628: INFO: successfully validated that service endpoint-test2 in namespace services-5755 exposes endpoints map[pod1:[80] pod2:[80]] (9.270688362s elapsed)
STEP: Deleting pod pod1 in namespace services-5755
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5755 to expose endpoints map[pod2:[80]]
Dec 22 15:05:07.682: INFO: successfully validated that service endpoint-test2 in namespace services-5755 exposes endpoints map[pod2:[80]] (22.881552ms elapsed)
STEP: Deleting pod pod2 in namespace services-5755
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5755 to expose endpoints map[]
Dec 22 15:05:08.828: INFO: successfully validated that service endpoint-test2 in namespace services-5755 exposes endpoints map[] (1.133715716s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 15:05:09.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5755" for this suite.
Dec 22 15:05:32.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 15:05:32.689: INFO: namespace services-5755 deletion completed in 23.279687231s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:45.715 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
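
Behind the scenes this exercises the endpoints controller: the Endpoints object named after a Service lists exactly the ready pods matching the Service selector, updating as pods come and go. A minimal manual version; the selector label and image are illustrative, and the run-pod/v1 generator is the one the earlier deprecation warning recommends.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    name: endpoint-test2
  ports:
  - port: 80
    targetPort: 80
EOF
kubectl get endpoints endpoint-test2   # empty until a matching pod is Ready
kubectl run pod1 --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine \
  --labels=name=endpoint-test2 --port=80
kubectl get endpoints endpoint-test2   # now lists pod1's IP on port 80
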
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 15:05:32.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-244e9f97-d9a8-454b-8e11-3207e1762296
STEP: Creating a pod to test consume configMaps
Dec 22 15:05:32.777: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-291f89f7-dd83-4382-b7e4-31741bc242d9" in namespace "projected-4097" to be "success or failure"
Dec 22 15:05:32.788: INFO: Pod "pod-projected-configmaps-291f89f7-dd83-4382-b7e4-31741bc242d9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.636599ms
Dec 22 15:05:34.803: INFO: Pod "pod-projected-configmaps-291f89f7-dd83-4382-b7e4-31741bc242d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026173543s
Dec 22 15:05:36.810: INFO: Pod "pod-projected-configmaps-291f89f7-dd83-4382-b7e4-31741bc242d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033356278s
Dec 22 15:05:38.828: INFO: Pod "pod-projected-configmaps-291f89f7-dd83-4382-b7e4-31741bc242d9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050499437s
Dec 22 15:05:40.839: INFO: Pod "pod-projected-configmaps-291f89f7-dd83-4382-b7e4-31741bc242d9": Phase="Running", Reason="", readiness=true. Elapsed: 8.061995083s
Dec 22 15:05:42.878: INFO: Pod "pod-projected-configmaps-291f89f7-dd83-4382-b7e4-31741bc242d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.100564897s
STEP: Saw pod success
Dec 22 15:05:42.878: INFO: Pod "pod-projected-configmaps-291f89f7-dd83-4382-b7e4-31741bc242d9" satisfied condition "success or failure"
Dec 22 15:05:42.885: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-291f89f7-dd83-4382-b7e4-31741bc242d9 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 22 15:05:43.258: INFO: Waiting for pod pod-projected-configmaps-291f89f7-dd83-4382-b7e4-31741bc242d9 to disappear
Dec 22 15:05:43.285: INFO: Pod pod-projected-configmaps-291f89f7-dd83-4382-b7e4-31741bc242d9 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 15:05:43.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4097" for this suite.
Dec 22 15:05:49.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 15:05:49.442: INFO: namespace projected-4097 deletion completed in 6.148688839s

• [SLOW TEST:16.753 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
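
"With mappings" refers to the items list on the projected source: a chosen key is exposed at a chosen relative path instead of under its own name. A minimal sketch with illustrative names and paths; only the projected-configmap-volume-test container name matches the log above.

kubectl create configmap projected-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-cm
          items:
          - key: data-1
            path: path/to/data-1   # the mapping: key content shows up at this path
EOF
kubectl logs pod-projected-configmaps   # prints value-1
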
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 15:05:49.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6883
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Dec 22 15:05:49.654: INFO: Found 0 stateful pods, waiting for 3
Dec 22 15:06:00.284: INFO: Found 2 stateful pods, waiting for 3
Dec 22 15:06:09.667: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 22 15:06:09.667: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 22 15:06:09.667: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 22 15:06:19.672: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 22 15:06:19.672: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 22 15:06:19.672: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Dec 22 15:06:19.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6883 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 22 15:06:20.119: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 22 15:06:20.120: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 22 15:06:20.120: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 22 15:06:30.169: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Dec 22 15:06:40.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6883 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 15:06:40.804: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 22 15:06:40.805: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 22 15:06:40.805: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 22 15:06:50.858: INFO: Waiting for StatefulSet statefulset-6883/ss2 to complete update
Dec 22 15:06:50.858: INFO: Waiting for Pod statefulset-6883/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 22 15:06:50.858: INFO: Waiting for Pod statefulset-6883/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 22 15:07:00.888: INFO: Waiting for StatefulSet statefulset-6883/ss2 to complete update
Dec 22 15:07:00.888: INFO: Waiting for Pod statefulset-6883/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 22 15:07:00.889: INFO: Waiting for Pod statefulset-6883/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 22 15:07:10.884: INFO: Waiting for StatefulSet statefulset-6883/ss2 to complete update
Dec 22 15:07:10.884: INFO: Waiting for Pod statefulset-6883/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 22 15:07:20.876: INFO: Waiting for StatefulSet statefulset-6883/ss2 to complete update
Dec 22 15:07:20.876: INFO: Waiting for Pod statefulset-6883/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 22 15:07:30.881: INFO: Waiting for StatefulSet statefulset-6883/ss2 to complete update
STEP: Rolling back to a previous revision
Dec 22 15:07:40.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6883 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 22 15:07:41.322: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 22 15:07:41.322: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 22 15:07:41.322: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 22 15:07:51.396: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Dec 22 15:08:01.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6883 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 15:08:01.833: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 22 15:08:01.834: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 22 15:08:01.834: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 22 15:08:11.881: INFO: Waiting for StatefulSet statefulset-6883/ss2 to complete update
Dec 22 15:08:11.881: INFO: Waiting for Pod statefulset-6883/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 22 15:08:11.881: INFO: Waiting for Pod statefulset-6883/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 22 15:08:11.881: INFO: Waiting for Pod statefulset-6883/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 22 15:08:21.903: INFO: Waiting for StatefulSet statefulset-6883/ss2 to complete update
Dec 22 15:08:21.903: INFO: Waiting for Pod statefulset-6883/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 22 15:08:21.903: INFO: Waiting for Pod statefulset-6883/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 22 15:08:31.900: INFO: Waiting for StatefulSet statefulset-6883/ss2 to complete update
Dec 22 15:08:31.900: INFO: Waiting for Pod statefulset-6883/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 22 15:08:31.900: INFO: Waiting for Pod statefulset-6883/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 22 15:08:41.895: INFO: Waiting for StatefulSet statefulset-6883/ss2 to complete update
Dec 22 15:08:41.895: INFO: Waiting for Pod statefulset-6883/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 22 15:08:51.895: INFO: Waiting for StatefulSet statefulset-6883/ss2 to complete update
Dec 22 15:08:51.895: INFO: Waiting for Pod statefulset-6883/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 22 15:09:01.897: INFO: Waiting for StatefulSet statefulset-6883/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 22 15:09:11.901: INFO: Deleting all statefulset in ns statefulset-6883
Dec 22 15:09:11.905: INFO: Scaling statefulset ss2 to 0
Dec 22 15:09:41.966: INFO: Waiting for statefulset status.replicas updated to 0
Dec 22 15:09:41.975: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 15:09:42.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6883" for this suite.
Dec 22 15:09:50.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 15:09:50.248: INFO: namespace statefulset-6883 deletion completed in 8.217996521s

• [SLOW TEST:240.805 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
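
The revision names in the log (ss2-6c5cd755cd, ss2-7c9b54fd4c) are ControllerRevision objects, and a StatefulSet rollback is simply another rolling update whose target is the previous template, applied in reverse ordinal order. The image flip driven above can be reproduced with kubectl; the container name nginx is an assumption, since the log never prints it.

kubectl -n statefulset-6883 set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
kubectl -n statefulset-6883 rollout status statefulset/ss2   # ss2-2, then ss2-1, then ss2-0
kubectl -n statefulset-6883 get controllerrevisions          # one revision per template version
# Rolling back is just an update toward the old template:
kubectl -n statefulset-6883 set image statefulset/ss2 nginx=docker.io/library/nginx:1.14-alpine
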
SSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 15:09:50.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 22 15:09:50.426: INFO: Number of nodes with available pods: 0
Dec 22 15:09:50.426: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:09:52.361: INFO: Number of nodes with available pods: 0
Dec 22 15:09:52.361: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:09:53.945: INFO: Number of nodes with available pods: 0
Dec 22 15:09:53.945: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:09:54.456: INFO: Number of nodes with available pods: 0
Dec 22 15:09:54.456: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:09:55.486: INFO: Number of nodes with available pods: 0
Dec 22 15:09:55.486: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:09:56.438: INFO: Number of nodes with available pods: 0
Dec 22 15:09:56.438: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:09:57.497: INFO: Number of nodes with available pods: 0
Dec 22 15:09:57.497: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:09:59.099: INFO: Number of nodes with available pods: 0
Dec 22 15:09:59.100: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:09:59.546: INFO: Number of nodes with available pods: 0
Dec 22 15:09:59.546: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:10:00.453: INFO: Number of nodes with available pods: 0
Dec 22 15:10:00.453: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:10:01.444: INFO: Number of nodes with available pods: 1
Dec 22 15:10:01.444: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 22 15:10:02.445: INFO: Number of nodes with available pods: 2
Dec 22 15:10:02.445: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Dec 22 15:10:02.490: INFO: Number of nodes with available pods: 1
Dec 22 15:10:02.490: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 22 15:10:03.512: INFO: Number of nodes with available pods: 1
Dec 22 15:10:03.512: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 22 15:10:04.518: INFO: Number of nodes with available pods: 1
Dec 22 15:10:04.518: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 22 15:10:05.502: INFO: Number of nodes with available pods: 1
Dec 22 15:10:05.502: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 22 15:10:06.507: INFO: Number of nodes with available pods: 1
Dec 22 15:10:06.507: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 22 15:10:07.509: INFO: Number of nodes with available pods: 1
Dec 22 15:10:07.509: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 22 15:10:08.517: INFO: Number of nodes with available pods: 1
Dec 22 15:10:08.517: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 22 15:10:09.505: INFO: Number of nodes with available pods: 1
Dec 22 15:10:09.505: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 22 15:10:10.516: INFO: Number of nodes with available pods: 1
Dec 22 15:10:10.516: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 22 15:10:11.506: INFO: Number of nodes with available pods: 1
Dec 22 15:10:11.506: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 22 15:10:12.510: INFO: Number of nodes with available pods: 1
Dec 22 15:10:12.510: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 22 15:10:13.503: INFO: Number of nodes with available pods: 1
Dec 22 15:10:13.503: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 22 15:10:14.509: INFO: Number of nodes with available pods: 1
Dec 22 15:10:14.509: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 22 15:10:15.515: INFO: Number of nodes with available pods: 1
Dec 22 15:10:15.515: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 22 15:10:16.512: INFO: Number of nodes with available pods: 1
Dec 22 15:10:16.512: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 22 15:10:17.507: INFO: Number of nodes with available pods: 1
Dec 22 15:10:17.507: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 22 15:10:18.516: INFO: Number of nodes with available pods: 1
Dec 22 15:10:18.516: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 22 15:10:19.610: INFO: Number of nodes with available pods: 1
Dec 22 15:10:19.610: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 22 15:10:20.509: INFO: Number of nodes with available pods: 1
Dec 22 15:10:20.509: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 22 15:10:21.514: INFO: Number of nodes with available pods: 1
Dec 22 15:10:21.514: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 22 15:10:23.297: INFO: Number of nodes with available pods: 1
Dec 22 15:10:23.297: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 22 15:10:23.876: INFO: Number of nodes with available pods: 1
Dec 22 15:10:23.876: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 22 15:10:24.518: INFO: Number of nodes with available pods: 1
Dec 22 15:10:24.518: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 22 15:10:25.508: INFO: Number of nodes with available pods: 1
Dec 22 15:10:25.508: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 22 15:10:26.511: INFO: Number of nodes with available pods: 2
Dec 22 15:10:26.511: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9285, will wait for the garbage collector to delete the pods
Dec 22 15:10:26.612: INFO: Deleting DaemonSet.extensions daemon-set took: 35.991577ms
Dec 22 15:10:27.012: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.348189ms
Dec 22 15:10:37.924: INFO: Number of nodes with available pods: 0
Dec 22 15:10:37.924: INFO: Number of running nodes: 0, number of available pods: 0
Dec 22 15:10:37.932: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9285/daemonsets","resourceVersion":"17656242"},"items":null}

Dec 22 15:10:37.937: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9285/pods","resourceVersion":"17656242"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 15:10:37.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9285" for this suite.
Dec 22 15:10:43.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 15:10:44.101: INFO: namespace daemonsets-9285 deletion completed in 6.146013961s

• [SLOW TEST:53.853 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
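For reference, the simple-daemon scenario above works like this when driven by kubectl instead of the e2e framework. The manifest is an illustrative sketch (the test builds its spec in Go rather than applying YAML, and the image is a placeholder):

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: nginx:1.14-alpine
EOF

# One pod should become available per schedulable node; deleting one of them
# makes the controller revive it, which is what the polling above waits for.
kubectl rollout status daemonset/daemon-set
kubectl delete daemonset daemon-set --wait
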
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 15:10:44.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 22 15:10:44.216: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Dec 22 15:10:44.235: INFO: Number of nodes with available pods: 0
Dec 22 15:10:44.235: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Dec 22 15:10:44.328: INFO: Number of nodes with available pods: 0
Dec 22 15:10:44.328: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:10:45.341: INFO: Number of nodes with available pods: 0
Dec 22 15:10:45.341: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:10:46.340: INFO: Number of nodes with available pods: 0
Dec 22 15:10:46.340: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:10:47.338: INFO: Number of nodes with available pods: 0
Dec 22 15:10:47.338: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:10:48.341: INFO: Number of nodes with available pods: 0
Dec 22 15:10:48.341: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:10:49.340: INFO: Number of nodes with available pods: 0
Dec 22 15:10:49.340: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:10:50.336: INFO: Number of nodes with available pods: 0
Dec 22 15:10:50.336: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:10:51.358: INFO: Number of nodes with available pods: 0
Dec 22 15:10:51.359: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:10:52.334: INFO: Number of nodes with available pods: 0
Dec 22 15:10:52.334: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:10:53.338: INFO: Number of nodes with available pods: 1
Dec 22 15:10:53.338: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Dec 22 15:10:53.385: INFO: Number of nodes with available pods: 1
Dec 22 15:10:53.385: INFO: Number of running nodes: 0, number of available pods: 1
Dec 22 15:10:54.398: INFO: Number of nodes with available pods: 0
Dec 22 15:10:54.398: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Dec 22 15:10:54.433: INFO: Number of nodes with available pods: 0
Dec 22 15:10:54.433: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:10:55.439: INFO: Number of nodes with available pods: 0
Dec 22 15:10:55.439: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:10:56.444: INFO: Number of nodes with available pods: 0
Dec 22 15:10:56.444: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:10:57.959: INFO: Number of nodes with available pods: 0
Dec 22 15:10:57.960: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:10:58.444: INFO: Number of nodes with available pods: 0
Dec 22 15:10:58.444: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:10:59.443: INFO: Number of nodes with available pods: 0
Dec 22 15:10:59.443: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:11:00.440: INFO: Number of nodes with available pods: 0
Dec 22 15:11:00.440: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:11:01.453: INFO: Number of nodes with available pods: 0
Dec 22 15:11:01.453: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:11:02.448: INFO: Number of nodes with available pods: 0
Dec 22 15:11:02.448: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:11:03.443: INFO: Number of nodes with available pods: 0
Dec 22 15:11:03.443: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:11:04.514: INFO: Number of nodes with available pods: 0
Dec 22 15:11:04.514: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:11:05.438: INFO: Number of nodes with available pods: 0
Dec 22 15:11:05.438: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:11:06.445: INFO: Number of nodes with available pods: 0
Dec 22 15:11:06.445: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:11:07.440: INFO: Number of nodes with available pods: 0
Dec 22 15:11:07.440: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:11:08.450: INFO: Number of nodes with available pods: 0
Dec 22 15:11:08.450: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:11:09.446: INFO: Number of nodes with available pods: 0
Dec 22 15:11:09.446: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:11:10.446: INFO: Number of nodes with available pods: 1
Dec 22 15:11:10.446: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1153, will wait for the garbage collector to delete the pods
Dec 22 15:11:10.539: INFO: Deleting DaemonSet.extensions daemon-set took: 11.864916ms
Dec 22 15:11:10.839: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.541554ms
Dec 22 15:11:26.651: INFO: Number of nodes with available pods: 0
Dec 22 15:11:26.652: INFO: Number of running nodes: 0, number of available pods: 0
Dec 22 15:11:26.655: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1153/daemonsets","resourceVersion":"17656380"},"items":null}

Dec 22 15:11:26.683: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1153/pods","resourceVersion":"17656380"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 15:11:26.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1153" for this suite.
Dec 22 15:11:34.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 15:11:34.987: INFO: namespace daemonsets-1153 deletion completed in 8.22997911s

• [SLOW TEST:50.885 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
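The blue/green relabelling in the complex-daemon test is plain label-driven scheduling. A sketch with kubectl, assuming a placeholder node name and the same illustrative image as before:

kubectl label node <node-name> color=blue

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue
      containers:
      - name: app
        image: nginx:1.14-alpine
EOF

# Relabelling the node unschedules the daemon pod again, as in the green step above.
kubectl label node <node-name> color=green --overwrite
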
SSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 15:11:34.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 22 15:11:35.045: INFO: Creating ReplicaSet my-hostname-basic-cd0e419e-6d74-46fe-a646-fc21bceb79aa
Dec 22 15:11:35.078: INFO: Pod name my-hostname-basic-cd0e419e-6d74-46fe-a646-fc21bceb79aa: Found 0 pods out of 1
Dec 22 15:11:40.087: INFO: Pod name my-hostname-basic-cd0e419e-6d74-46fe-a646-fc21bceb79aa: Found 1 pod out of 1
Dec 22 15:11:40.087: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-cd0e419e-6d74-46fe-a646-fc21bceb79aa" is running
Dec 22 15:11:44.098: INFO: Pod "my-hostname-basic-cd0e419e-6d74-46fe-a646-fc21bceb79aa-l9n5t" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 15:11:35 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 15:11:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-cd0e419e-6d74-46fe-a646-fc21bceb79aa]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 15:11:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-cd0e419e-6d74-46fe-a646-fc21bceb79aa]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 15:11:35 +0000 UTC Reason: Message:}])
Dec 22 15:11:44.098: INFO: Trying to dial the pod
Dec 22 15:11:49.135: INFO: Controller my-hostname-basic-cd0e419e-6d74-46fe-a646-fc21bceb79aa: Got expected result from replica 1 [my-hostname-basic-cd0e419e-6d74-46fe-a646-fc21bceb79aa-l9n5t]: "my-hostname-basic-cd0e419e-6d74-46fe-a646-fc21bceb79aa-l9n5t", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 15:11:49.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-3900" for this suite.
Dec 22 15:11:55.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 15:11:55.241: INFO: namespace replicaset-3900 deletion completed in 6.098153916s

• [SLOW TEST:20.254 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
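The ReplicaSet test dials each replica and expects the pod's own name back, i.e. a hostname-serving image. An illustrative hand-run equivalent (the image tag is an assumption, not read from this run):

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-hostname-basic
  template:
    metadata:
      labels:
        app: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
EOF

# Wait for the replica, then query it; it should answer with its own pod name.
kubectl wait --for=condition=Ready pod -l app=my-hostname-basic
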
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 15:11:55.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 22 15:11:55.304: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 22 15:11:55.311: INFO: Waiting for terminating namespaces to be deleted...
Dec 22 15:11:55.313: INFO: Logging pods the kubelet thinks are on node iruya-node before test
Dec 22 15:11:55.330: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Dec 22 15:11:55.330: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 22 15:11:55.330: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 22 15:11:55.330: INFO: 	Container weave ready: true, restart count 0
Dec 22 15:11:55.330: INFO: 	Container weave-npc ready: true, restart count 0
Dec 22 15:11:55.330: INFO: Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Dec 22 15:11:55.351: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 22 15:11:55.351: INFO: 	Container weave ready: true, restart count 0
Dec 22 15:11:55.351: INFO: 	Container weave-npc ready: true, restart count 0
Dec 22 15:11:55.351: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 22 15:11:55.351: INFO: 	Container coredns ready: true, restart count 0
Dec 22 15:11:55.351: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Dec 22 15:11:55.351: INFO: 	Container etcd ready: true, restart count 0
Dec 22 15:11:55.351: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Dec 22 15:11:55.351: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 22 15:11:55.351: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Dec 22 15:11:55.351: INFO: 	Container kube-controller-manager ready: true, restart count 10
Dec 22 15:11:55.351: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Dec 22 15:11:55.351: INFO: 	Container kube-apiserver ready: true, restart count 0
Dec 22 15:11:55.351: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 22 15:11:55.351: INFO: 	Container coredns ready: true, restart count 0
Dec 22 15:11:55.351: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Dec 22 15:11:55.351: INFO: 	Container kube-scheduler ready: true, restart count 7
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-38c740b2-5a34-4e4d-b763-723e9a1a68cc 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-38c740b2-5a34-4e4d-b763-723e9a1a68cc off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-38c740b2-5a34-4e4d-b763-723e9a1a68cc
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 15:12:17.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8975" for this suite.
Dec 22 15:12:47.859: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 15:12:48.026: INFO: namespace sched-pred-8975 deletion completed in 30.198698255s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:52.783 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
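The predicate test applies a random label to a node and relaunches the pod with a matching nodeSelector. A hand-run sketch (label key and node name are placeholders; the value 42 mirrors the label applied above):

kubectl label node <node-name> e2e-test=42

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: with-labels
spec:
  nodeSelector:
    e2e-test: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF

# The scheduler may only bind this pod to the labelled node.
kubectl get pod with-labels -o wide
kubectl label node <node-name> e2e-test-    # remove the label again, as the test does
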
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 15:12:48.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 22 15:12:48.143: INFO: Waiting up to 5m0s for pod "pod-f76a3f5e-4f4d-40ee-8d97-494682931f0e" in namespace "emptydir-5097" to be "success or failure"
Dec 22 15:12:48.147: INFO: Pod "pod-f76a3f5e-4f4d-40ee-8d97-494682931f0e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068492ms
Dec 22 15:12:50.153: INFO: Pod "pod-f76a3f5e-4f4d-40ee-8d97-494682931f0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010520823s
Dec 22 15:12:52.185: INFO: Pod "pod-f76a3f5e-4f4d-40ee-8d97-494682931f0e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042587446s
Dec 22 15:12:54.201: INFO: Pod "pod-f76a3f5e-4f4d-40ee-8d97-494682931f0e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05787725s
Dec 22 15:12:56.206: INFO: Pod "pod-f76a3f5e-4f4d-40ee-8d97-494682931f0e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063244254s
Dec 22 15:12:58.220: INFO: Pod "pod-f76a3f5e-4f4d-40ee-8d97-494682931f0e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.076995434s
STEP: Saw pod success
Dec 22 15:12:58.220: INFO: Pod "pod-f76a3f5e-4f4d-40ee-8d97-494682931f0e" satisfied condition "success or failure"
Dec 22 15:12:58.233: INFO: Trying to get logs from node iruya-node pod pod-f76a3f5e-4f4d-40ee-8d97-494682931f0e container test-container: 
STEP: delete the pod
Dec 22 15:12:58.626: INFO: Waiting for pod pod-f76a3f5e-4f4d-40ee-8d97-494682931f0e to disappear
Dec 22 15:12:58.633: INFO: Pod pod-f76a3f5e-4f4d-40ee-8d97-494682931f0e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 15:12:58.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5097" for this suite.
Dec 22 15:13:04.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 15:13:04.763: INFO: namespace emptydir-5097 deletion completed in 6.122749752s

• [SLOW TEST:16.736 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
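The (non-root,0644,tmpfs) variant mounts a memory-backed emptyDir as a non-root user and checks a 0644 file's mode and content. The real test uses a dedicated mounttest image; the sketch below only shows the tmpfs and non-root wiring, with an assumed busybox image:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001        # non-root
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory       # tmpfs-backed, as in the test
EOF
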
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 15:13:04.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Dec 22 15:13:04.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9787'
Dec 22 15:13:07.832: INFO: stderr: ""
Dec 22 15:13:07.833: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 22 15:13:07.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9787'
Dec 22 15:13:07.983: INFO: stderr: ""
Dec 22 15:13:07.983: INFO: stdout: "update-demo-nautilus-nfzv4 update-demo-nautilus-xlx4q "
Dec 22 15:13:07.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nfzv4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9787'
Dec 22 15:13:08.207: INFO: stderr: ""
Dec 22 15:13:08.208: INFO: stdout: ""
Dec 22 15:13:08.208: INFO: update-demo-nautilus-nfzv4 is created but not running
Dec 22 15:13:13.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9787'
Dec 22 15:13:14.289: INFO: stderr: ""
Dec 22 15:13:14.289: INFO: stdout: "update-demo-nautilus-nfzv4 update-demo-nautilus-xlx4q "
Dec 22 15:13:14.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nfzv4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9787'
Dec 22 15:13:14.881: INFO: stderr: ""
Dec 22 15:13:14.881: INFO: stdout: ""
Dec 22 15:13:14.881: INFO: update-demo-nautilus-nfzv4 is created but not running
Dec 22 15:13:19.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9787'
Dec 22 15:13:20.045: INFO: stderr: ""
Dec 22 15:13:20.045: INFO: stdout: "update-demo-nautilus-nfzv4 update-demo-nautilus-xlx4q "
Dec 22 15:13:20.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nfzv4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9787'
Dec 22 15:13:20.152: INFO: stderr: ""
Dec 22 15:13:20.152: INFO: stdout: "true"
Dec 22 15:13:20.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nfzv4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9787'
Dec 22 15:13:20.253: INFO: stderr: ""
Dec 22 15:13:20.253: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 22 15:13:20.253: INFO: validating pod update-demo-nautilus-nfzv4
Dec 22 15:13:20.272: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 22 15:13:20.272: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 22 15:13:20.272: INFO: update-demo-nautilus-nfzv4 is verified up and running
Dec 22 15:13:20.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xlx4q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9787'
Dec 22 15:13:20.376: INFO: stderr: ""
Dec 22 15:13:20.376: INFO: stdout: "true"
Dec 22 15:13:20.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xlx4q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9787'
Dec 22 15:13:20.520: INFO: stderr: ""
Dec 22 15:13:20.520: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 22 15:13:20.521: INFO: validating pod update-demo-nautilus-xlx4q
Dec 22 15:13:20.531: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 22 15:13:20.531: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 22 15:13:20.531: INFO: update-demo-nautilus-xlx4q is verified up and running
STEP: scaling down the replication controller
Dec 22 15:13:20.536: INFO: scanned /root for discovery docs: 
Dec 22 15:13:20.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9787'
Dec 22 15:13:21.687: INFO: stderr: ""
Dec 22 15:13:21.687: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 22 15:13:21.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9787'
Dec 22 15:13:21.791: INFO: stderr: ""
Dec 22 15:13:21.791: INFO: stdout: "update-demo-nautilus-nfzv4 update-demo-nautilus-xlx4q "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 22 15:13:26.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9787'
Dec 22 15:13:26.937: INFO: stderr: ""
Dec 22 15:13:26.937: INFO: stdout: "update-demo-nautilus-nfzv4 update-demo-nautilus-xlx4q "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 22 15:13:31.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9787'
Dec 22 15:13:32.101: INFO: stderr: ""
Dec 22 15:13:32.101: INFO: stdout: "update-demo-nautilus-xlx4q "
Dec 22 15:13:32.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xlx4q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9787'
Dec 22 15:13:32.282: INFO: stderr: ""
Dec 22 15:13:32.282: INFO: stdout: "true"
Dec 22 15:13:32.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xlx4q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9787'
Dec 22 15:13:32.411: INFO: stderr: ""
Dec 22 15:13:32.411: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 22 15:13:32.411: INFO: validating pod update-demo-nautilus-xlx4q
Dec 22 15:13:32.417: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 22 15:13:32.417: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 22 15:13:32.417: INFO: update-demo-nautilus-xlx4q is verified up and running
STEP: scaling up the replication controller
Dec 22 15:13:32.420: INFO: scanned /root for discovery docs: 
Dec 22 15:13:32.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9787'
Dec 22 15:13:33.708: INFO: stderr: ""
Dec 22 15:13:33.708: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 22 15:13:33.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9787'
Dec 22 15:13:33.846: INFO: stderr: ""
Dec 22 15:13:33.846: INFO: stdout: "update-demo-nautilus-sb46z update-demo-nautilus-xlx4q "
Dec 22 15:13:33.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sb46z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9787'
Dec 22 15:13:33.949: INFO: stderr: ""
Dec 22 15:13:33.949: INFO: stdout: ""
Dec 22 15:13:33.949: INFO: update-demo-nautilus-sb46z is created but not running
Dec 22 15:13:38.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9787'
Dec 22 15:13:39.121: INFO: stderr: ""
Dec 22 15:13:39.121: INFO: stdout: "update-demo-nautilus-sb46z update-demo-nautilus-xlx4q "
Dec 22 15:13:39.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sb46z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9787'
Dec 22 15:13:39.233: INFO: stderr: ""
Dec 22 15:13:39.234: INFO: stdout: ""
Dec 22 15:13:39.234: INFO: update-demo-nautilus-sb46z is created but not running
Dec 22 15:13:44.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9787'
Dec 22 15:13:44.397: INFO: stderr: ""
Dec 22 15:13:44.397: INFO: stdout: "update-demo-nautilus-sb46z update-demo-nautilus-xlx4q "
Dec 22 15:13:44.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sb46z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9787'
Dec 22 15:13:44.530: INFO: stderr: ""
Dec 22 15:13:44.530: INFO: stdout: "true"
Dec 22 15:13:44.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sb46z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9787'
Dec 22 15:13:44.616: INFO: stderr: ""
Dec 22 15:13:44.616: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 22 15:13:44.616: INFO: validating pod update-demo-nautilus-sb46z
Dec 22 15:13:44.634: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 22 15:13:44.634: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 22 15:13:44.634: INFO: update-demo-nautilus-sb46z is verified up and running
Dec 22 15:13:44.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xlx4q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9787'
Dec 22 15:13:44.763: INFO: stderr: ""
Dec 22 15:13:44.763: INFO: stdout: "true"
Dec 22 15:13:44.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xlx4q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9787'
Dec 22 15:13:44.890: INFO: stderr: ""
Dec 22 15:13:44.890: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 22 15:13:44.890: INFO: validating pod update-demo-nautilus-xlx4q
Dec 22 15:13:44.895: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 22 15:13:44.895: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 22 15:13:44.895: INFO: update-demo-nautilus-xlx4q is verified up and running
STEP: using delete to clean up resources
Dec 22 15:13:44.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9787'
Dec 22 15:13:45.055: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 22 15:13:45.055: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 22 15:13:45.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9787'
Dec 22 15:13:45.202: INFO: stderr: "No resources found.\n"
Dec 22 15:13:45.202: INFO: stdout: ""
Dec 22 15:13:45.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9787 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 22 15:13:45.401: INFO: stderr: ""
Dec 22 15:13:45.402: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 15:13:45.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9787" for this suite.
Dec 22 15:14:09.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 15:14:09.562: INFO: namespace kubectl-9787 deletion completed in 24.154954563s

• [SLOW TEST:64.797 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
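Condensed, the scale-down/scale-up sequence in this test is just two kubectl scale calls with polling in between; the RC name comes from the test's own create step above:

kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m
kubectl get pods -l name=update-demo    # poll until exactly one pod remains

kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m
kubectl get pods -l name=update-demo    # poll until the second pod is running again
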
SSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 15:14:09.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Dec 22 15:14:09.663: INFO: Waiting up to 5m0s for pod "var-expansion-f4a7c500-70ce-46e2-8c74-b4679b324e6d" in namespace "var-expansion-5827" to be "success or failure"
Dec 22 15:14:09.682: INFO: Pod "var-expansion-f4a7c500-70ce-46e2-8c74-b4679b324e6d": Phase="Pending", Reason="", readiness=false. Elapsed: 19.432133ms
Dec 22 15:14:11.690: INFO: Pod "var-expansion-f4a7c500-70ce-46e2-8c74-b4679b324e6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027812418s
Dec 22 15:14:13.700: INFO: Pod "var-expansion-f4a7c500-70ce-46e2-8c74-b4679b324e6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037623474s
Dec 22 15:14:15.712: INFO: Pod "var-expansion-f4a7c500-70ce-46e2-8c74-b4679b324e6d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049396573s
Dec 22 15:14:17.719: INFO: Pod "var-expansion-f4a7c500-70ce-46e2-8c74-b4679b324e6d": Phase="Running", Reason="", readiness=true. Elapsed: 8.056583613s
Dec 22 15:14:19.734: INFO: Pod "var-expansion-f4a7c500-70ce-46e2-8c74-b4679b324e6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.070866829s
STEP: Saw pod success
Dec 22 15:14:19.734: INFO: Pod "var-expansion-f4a7c500-70ce-46e2-8c74-b4679b324e6d" satisfied condition "success or failure"
Dec 22 15:14:19.737: INFO: Trying to get logs from node iruya-node pod var-expansion-f4a7c500-70ce-46e2-8c74-b4679b324e6d container dapi-container: 
STEP: delete the pod
Dec 22 15:14:21.028: INFO: Waiting for pod var-expansion-f4a7c500-70ce-46e2-8c74-b4679b324e6d to disappear
Dec 22 15:14:21.068: INFO: Pod var-expansion-f4a7c500-70ce-46e2-8c74-b4679b324e6d no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 15:14:21.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5827" for this suite.
Dec 22 15:14:27.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 15:14:27.460: INFO: namespace var-expansion-5827 deletion completed in 6.380811033s

• [SLOW TEST:17.898 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
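Env composition means referencing one env var from another with $(VAR) syntax, which Kubernetes expands when the container starts. A minimal sketch (pod name, image, and values are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "env | grep COMPOSED"]
    env:
    - name: FOO
      value: foo-value
    - name: COMPOSED
      value: "prefix-$(FOO)-suffix"   # expands to prefix-foo-value-suffix
EOF

# Once the pod has completed, its log should show COMPOSED=prefix-foo-value-suffix.
kubectl logs var-expansion-demo
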
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 15:14:27.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 22 15:14:27.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-901'
Dec 22 15:14:27.763: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 22 15:14:27.763: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Dec 22 15:14:29.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-901'
Dec 22 15:14:30.023: INFO: stderr: ""
Dec 22 15:14:30.024: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 15:14:30.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-901" for this suite.
Dec 22 15:14:36.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 15:14:36.191: INFO: namespace kubectl-901 deletion completed in 6.157641778s

• [SLOW TEST:8.730 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
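As the deprecation warning in the output says, the generator-based form of kubectl run is on its way out; the equivalent in current kubectl is kubectl create:

kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
kubectl delete deployment e2e-test-nginx-deployment
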
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 15:14:36.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Dec 22 15:14:36.297: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix687645557/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 15:14:36.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1886" for this suite.
Dec 22 15:14:42.450: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 15:14:42.607: INFO: namespace kubectl-1886 deletion completed in 6.199983488s

• [SLOW TEST:6.416 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
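The --unix-socket mode serves the same proxy API over a local socket instead of a TCP port. A sketch of what the test does, with an illustrative socket path:

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/   # the "retrieving proxy /api/ output" step
kill %1
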
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 15:14:42.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 22 15:14:53.562: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 15:14:53.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4266" for this suite.
Dec 22 15:14:59.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 15:14:59.859: INFO: namespace container-runtime-4266 deletion completed in 6.207429244s

• [SLOW TEST:17.251 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
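FallbackToLogsOnError only copies container logs into the termination message when the container fails; on success the message stays empty, which is the "Expected: &{} to match" assertion above. An illustrative pod (name and image assumed):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "echo some log output; exit 0"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF

# Empty on success; it would contain the tail of the logs had the container failed.
kubectl get pod termination-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
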
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 15:14:59.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3083.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3083.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3083.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3083.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 22 15:15:14.055: INFO: File wheezy_udp@dns-test-service-3.dns-3083.svc.cluster.local from pod  dns-3083/dns-test-2edb2245-6389-4ec2-9745-4947ca5ae637 contains '' instead of 'foo.example.com.'
Dec 22 15:15:14.059: INFO: File jessie_udp@dns-test-service-3.dns-3083.svc.cluster.local from pod  dns-3083/dns-test-2edb2245-6389-4ec2-9745-4947ca5ae637 contains '' instead of 'foo.example.com.'
Dec 22 15:15:14.059: INFO: Lookups using dns-3083/dns-test-2edb2245-6389-4ec2-9745-4947ca5ae637 failed for: [wheezy_udp@dns-test-service-3.dns-3083.svc.cluster.local jessie_udp@dns-test-service-3.dns-3083.svc.cluster.local]

Dec 22 15:15:19.081: INFO: DNS probes using dns-test-2edb2245-6389-4ec2-9745-4947ca5ae637 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3083.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3083.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3083.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3083.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 22 15:15:33.395: INFO: File wheezy_udp@dns-test-service-3.dns-3083.svc.cluster.local from pod  dns-3083/dns-test-676b2fc3-0488-4cff-a693-2ba321fe8466 contains '' instead of 'bar.example.com.'
Dec 22 15:15:33.405: INFO: File jessie_udp@dns-test-service-3.dns-3083.svc.cluster.local from pod  dns-3083/dns-test-676b2fc3-0488-4cff-a693-2ba321fe8466 contains '' instead of 'bar.example.com.'
Dec 22 15:15:33.405: INFO: Lookups using dns-3083/dns-test-676b2fc3-0488-4cff-a693-2ba321fe8466 failed for: [wheezy_udp@dns-test-service-3.dns-3083.svc.cluster.local jessie_udp@dns-test-service-3.dns-3083.svc.cluster.local]

Dec 22 15:15:38.434: INFO: File wheezy_udp@dns-test-service-3.dns-3083.svc.cluster.local from pod  dns-3083/dns-test-676b2fc3-0488-4cff-a693-2ba321fe8466 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 22 15:15:38.457: INFO: File jessie_udp@dns-test-service-3.dns-3083.svc.cluster.local from pod  dns-3083/dns-test-676b2fc3-0488-4cff-a693-2ba321fe8466 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 22 15:15:38.457: INFO: Lookups using dns-3083/dns-test-676b2fc3-0488-4cff-a693-2ba321fe8466 failed for: [wheezy_udp@dns-test-service-3.dns-3083.svc.cluster.local jessie_udp@dns-test-service-3.dns-3083.svc.cluster.local]

Dec 22 15:15:43.421: INFO: File wheezy_udp@dns-test-service-3.dns-3083.svc.cluster.local from pod  dns-3083/dns-test-676b2fc3-0488-4cff-a693-2ba321fe8466 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 22 15:15:43.433: INFO: File jessie_udp@dns-test-service-3.dns-3083.svc.cluster.local from pod  dns-3083/dns-test-676b2fc3-0488-4cff-a693-2ba321fe8466 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 22 15:15:43.433: INFO: Lookups using dns-3083/dns-test-676b2fc3-0488-4cff-a693-2ba321fe8466 failed for: [wheezy_udp@dns-test-service-3.dns-3083.svc.cluster.local jessie_udp@dns-test-service-3.dns-3083.svc.cluster.local]

Dec 22 15:15:48.433: INFO: DNS probes using dns-test-676b2fc3-0488-4cff-a693-2ba321fe8466 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3083.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3083.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3083.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3083.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 22 15:16:02.679: INFO: File jessie_udp@dns-test-service-3.dns-3083.svc.cluster.local from pod  dns-3083/dns-test-3e9f258a-5765-433e-ac5e-0f4252521bed contains '' instead of '10.96.210.27'
Dec 22 15:16:02.679: INFO: Lookups using dns-3083/dns-test-3e9f258a-5765-433e-ac5e-0f4252521bed failed for: [jessie_udp@dns-test-service-3.dns-3083.svc.cluster.local]

Dec 22 15:16:07.712: INFO: DNS probes using dns-test-3e9f258a-5765-433e-ac5e-0f4252521bed succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 15:16:07.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3083" for this suite.
Dec 22 15:16:15.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 15:16:16.083: INFO: namespace dns-3083 deletion completed in 8.134363022s

• [SLOW TEST:76.222 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
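The ExternalName progression above can be reproduced by hand. A minimal sketch, reusing the service name and namespace from this run; the manifest and the dnsutils image are assumptions, not the suite's exact fixtures:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-3083
spec:
  type: ExternalName
  externalName: foo.example.com
EOF
# Any pod in the cluster should now see a CNAME for the service name:
kubectl run -it --rm digger --image=tutum/dnsutils --restart=Never -- \
  dig +short dns-test-service-3.dns-3083.svc.cluster.local CNAME
# Patching externalName to bar.example.com retargets the CNAME; switching
# spec.type to ClusterIP makes the same name resolve to an A record instead,
# which is the three-phase behaviour the probes above assert.
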
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 15:16:16.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 22 15:16:16.273: INFO: Number of nodes with available pods: 0
Dec 22 15:16:16.273: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:16:17.300: INFO: Number of nodes with available pods: 0
Dec 22 15:16:17.300: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:16:18.763: INFO: Number of nodes with available pods: 0
Dec 22 15:16:18.763: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:16:19.298: INFO: Number of nodes with available pods: 0
Dec 22 15:16:19.298: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:16:20.284: INFO: Number of nodes with available pods: 0
Dec 22 15:16:20.284: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:16:21.281: INFO: Number of nodes with available pods: 0
Dec 22 15:16:21.281: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:16:23.807: INFO: Number of nodes with available pods: 0
Dec 22 15:16:23.807: INFO: Node iruya-node is running more than one daemon pod
Dec 22 15:16:25.250: INFO: Number of nodes with available pods: 1
Dec 22 15:16:25.250: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 22 15:16:25.374: INFO: Number of nodes with available pods: 1
Dec 22 15:16:25.374: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 22 15:16:26.296: INFO: Number of nodes with available pods: 1
Dec 22 15:16:26.296: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 22 15:16:27.290: INFO: Number of nodes with available pods: 2
Dec 22 15:16:27.290: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Dec 22 15:16:27.406: INFO: Number of nodes with available pods: 2
Dec 22 15:16:27.406: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7994, will wait for the garbage collector to delete the pods
Dec 22 15:16:28.522: INFO: Deleting DaemonSet.extensions daemon-set took: 21.459721ms
Dec 22 15:16:28.822: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.393669ms
Dec 22 15:16:37.954: INFO: Number of nodes with available pods: 0
Dec 22 15:16:37.954: INFO: Number of running nodes: 0, number of available pods: 0
Dec 22 15:16:37.957: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7994/daemonsets","resourceVersion":"17657245"},"items":null}

Dec 22 15:16:37.959: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7994/pods","resourceVersion":"17657245"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 15:16:37.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7994" for this suite.
Dec 22 15:16:44.005: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 15:16:44.097: INFO: namespace daemonsets-7994 deletion completed in 6.124722118s

• [SLOW TEST:28.014 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
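The behaviour asserted above (one pod per schedulable node, failed pods recreated) is the DaemonSet controller's contract. A minimal sketch of a comparable DaemonSet; the nginx image and the labels are placeholders, not the suite's fixture:

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: nginx   # placeholder; the e2e suite ships its own test image
EOF
# One pod lands on each schedulable node; deleting a pod (or forcing its
# phase to Failed, as the test does) makes the controller create a new one.
kubectl get pods -l app=daemon-set -o wide
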
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 15:16:44.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Dec 22 15:16:54.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-bfae38f7-2079-4e37-bfc0-4b010a65c5b4 -c busybox-main-container --namespace=emptydir-2742 -- cat /usr/share/volumeshare/shareddata.txt'
Dec 22 15:16:55.026: INFO: stderr: ""
Dec 22 15:16:55.027: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 15:16:55.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2742" for this suite.
Dec 22 15:17:01.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 15:17:01.199: INFO: namespace emptydir-2742 deletion completed in 6.160214806s

• [SLOW TEST:17.102 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
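The exec at 15:16:54 reads a file that one container wrote into a volume another container mounted. A minimal sketch of the same shared-emptyDir pattern; the busybox image and the commands are illustrative and only loosely mirror the test pod:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume
spec:
  volumes:
  - name: volumeshare
    emptyDir: {}
  containers:
  - name: busybox-main-container
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: volumeshare
      mountPath: /usr/share/volumeshare
  - name: busybox-sub-container
    image: busybox
    command: ["sh", "-c", "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: volumeshare
      mountPath: /usr/share/volumeshare
EOF
# Both containers mount the same backing directory, so the main container can
# read what the sub-container wrote:
kubectl exec pod-sharedvolume -c busybox-main-container -- cat /usr/share/volumeshare/shareddata.txt
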
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 15:17:01.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Dec 22 15:17:01.348: INFO: namespace kubectl-8314
Dec 22 15:17:01.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8314'
Dec 22 15:17:01.807: INFO: stderr: ""
Dec 22 15:17:01.807: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 22 15:17:04.049: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 15:17:04.049: INFO: Found 0 / 1
Dec 22 15:17:04.816: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 15:17:04.816: INFO: Found 0 / 1
Dec 22 15:17:05.813: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 15:17:05.813: INFO: Found 0 / 1
Dec 22 15:17:06.816: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 15:17:06.816: INFO: Found 0 / 1
Dec 22 15:17:07.818: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 15:17:07.818: INFO: Found 0 / 1
Dec 22 15:17:08.820: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 15:17:08.821: INFO: Found 0 / 1
Dec 22 15:17:09.818: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 15:17:09.818: INFO: Found 0 / 1
Dec 22 15:17:10.816: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 15:17:10.816: INFO: Found 1 / 1
Dec 22 15:17:10.816: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 22 15:17:10.822: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 15:17:10.822: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 22 15:17:10.822: INFO: wait on redis-master startup in kubectl-8314 
Dec 22 15:17:10.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5s2pv redis-master --namespace=kubectl-8314'
Dec 22 15:17:11.034: INFO: stderr: ""
Dec 22 15:17:11.034: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 22 Dec 15:17:09.244 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 Dec 15:17:09.244 # Server started, Redis version 3.2.12\n1:M 22 Dec 15:17:09.244 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 Dec 15:17:09.244 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Dec 22 15:17:11.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8314'
Dec 22 15:17:11.174: INFO: stderr: ""
Dec 22 15:17:11.174: INFO: stdout: "service/rm2 exposed\n"
Dec 22 15:17:11.192: INFO: Service rm2 in namespace kubectl-8314 found.
STEP: exposing service
Dec 22 15:17:13.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8314'
Dec 22 15:17:13.450: INFO: stderr: ""
Dec 22 15:17:13.450: INFO: stdout: "service/rm3 exposed\n"
Dec 22 15:17:13.511: INFO: Service rm3 in namespace kubectl-8314 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 15:17:15.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8314" for this suite.
Dec 22 15:17:39.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 15:17:39.768: INFO: namespace kubectl-8314 deletion completed in 24.228202473s

• [SLOW TEST:38.568 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check if all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 15:17:39.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 22 15:17:39.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Dec 22 15:17:40.069: INFO: stderr: ""
Dec 22 15:17:40.069: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-14T21:37:43Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 15:17:40.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5711" for this suite.
Dec 22 15:17:46.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 15:17:46.282: INFO: namespace kubectl-5711 deletion completed in 6.206029991s

• [SLOW TEST:6.514 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if all data is printed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 15:17:46.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-9a7a61dd-0b46-4a2d-a3e9-280b3532b107
STEP: Creating a pod to test consume configMaps
Dec 22 15:17:46.410: INFO: Waiting up to 5m0s for pod "pod-configmaps-79cf6c4f-b80c-4c89-9d9b-613b1545b6ba" in namespace "configmap-3392" to be "success or failure"
Dec 22 15:17:46.420: INFO: Pod "pod-configmaps-79cf6c4f-b80c-4c89-9d9b-613b1545b6ba": Phase="Pending", Reason="", readiness=false. Elapsed: 10.364185ms
Dec 22 15:17:48.433: INFO: Pod "pod-configmaps-79cf6c4f-b80c-4c89-9d9b-613b1545b6ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02284292s
Dec 22 15:17:50.443: INFO: Pod "pod-configmaps-79cf6c4f-b80c-4c89-9d9b-613b1545b6ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03276279s
Dec 22 15:17:52.451: INFO: Pod "pod-configmaps-79cf6c4f-b80c-4c89-9d9b-613b1545b6ba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041190078s
Dec 22 15:17:54.464: INFO: Pod "pod-configmaps-79cf6c4f-b80c-4c89-9d9b-613b1545b6ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.053676553s
STEP: Saw pod success
Dec 22 15:17:54.464: INFO: Pod "pod-configmaps-79cf6c4f-b80c-4c89-9d9b-613b1545b6ba" satisfied condition "success or failure"
Dec 22 15:17:54.469: INFO: Trying to get logs from node iruya-node pod pod-configmaps-79cf6c4f-b80c-4c89-9d9b-613b1545b6ba container configmap-volume-test: 
STEP: delete the pod
Dec 22 15:17:54.722: INFO: Waiting for pod pod-configmaps-79cf6c4f-b80c-4c89-9d9b-613b1545b6ba to disappear
Dec 22 15:17:54.735: INFO: Pod pod-configmaps-79cf6c4f-b80c-4c89-9d9b-613b1545b6ba no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 15:17:54.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3392" for this suite.
Dec 22 15:18:00.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 15:18:00.942: INFO: namespace configmap-3392 deletion completed in 6.179379938s

• [SLOW TEST:14.660 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
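The "success or failure" wait above covers a pod that consumes a ConfigMap volume while running as a non-root user. A minimal sketch; the UID, key, and image are assumptions, since the log does not show the test's pod spec:

kubectl create configmap configmap-test-volume --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  securityContext:
    runAsUser: 1000          # any non-root UID; the test's exact value is not logged
  restartPolicy: Never
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
EOF
# The pod should run to Succeeded, mirroring the test's "Saw pod success".
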
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 15:18:00.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-8094
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 22 15:18:01.095: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 22 15:18:37.479: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-8094 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 15:18:37.479: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 15:18:37.942: INFO: Waiting for endpoints: map[]
Dec 22 15:18:37.953: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-8094 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 15:18:37.953: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 15:18:38.320: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 15:18:38.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8094" for this suite.
Dec 22 15:19:02.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 15:19:02.456: INFO: namespace pod-network-test-8094 deletion completed in 24.126865471s

• [SLOW TEST:61.513 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
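The ExecWithOptions lines above drive the suite's dial pattern: a probe pod asks a test webserver on one pod IP to fetch from another pod IP, proving pod-to-pod HTTP connectivity. The same request can be replayed by hand (IPs and pod names are taken from this run; the /dial endpoint belongs to the e2e test webserver, not to a stock image):

kubectl exec host-test-container-pod -c hostexec -- /bin/sh -c \
  "curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'"
# "Waiting for endpoints: map[]" in the log means the expected-endpoint set
# has been emptied, i.e. every target answered.
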
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 15:19:02.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Dec 22 15:19:10.623: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Dec 22 15:19:20.791: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 15:19:20.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1911" for this suite.
Dec 22 15:19:26.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 15:19:27.028: INFO: namespace pods-1911 deletion completed in 6.223094891s

• [SLOW TEST:24.571 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
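The deletion above exercises the normal graceful path: the API server marks the pod terminating, the kubelet observes the notice, and the object disappears once cleanup completes. From kubectl the grace period can be set explicitly (the pod name below is a placeholder):

# Delete with a 30-second grace period (the default for pods):
kubectl delete pod <pod-name> --grace-period=30
# --grace-period=0 --force skips waiting for kubelet confirmation; use with care.
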
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 15:19:27.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1222 15:19:57.706634       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 22 15:19:57.706: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 15:19:57.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2675" for this suite.
Dec 22 15:20:07.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 15:20:08.455: INFO: namespace gc-2675 deletion completed in 10.742038937s

• [SLOW TEST:41.426 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
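Orphan propagation, as asserted above, deletes the Deployment while leaving the ReplicaSet it created in place. On the v1.15-era client used in this run the flag was boolean; newer clients spell the policy out (the deployment name below is a placeholder):

# kubectl v1.15 (this run's client):
kubectl delete deployment <name> --cascade=false
# kubectl v1.20 and later:
kubectl delete deployment <name> --cascade=orphan
# Either way the ReplicaSet survives with its ownerReference cleared, and its
# pods keep running.
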
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 15:20:08.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 22 15:20:08.613: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4284199a-9131-4efa-8527-dccd690a615a" in namespace "downward-api-9961" to be "success or failure"
Dec 22 15:20:08.635: INFO: Pod "downwardapi-volume-4284199a-9131-4efa-8527-dccd690a615a": Phase="Pending", Reason="", readiness=false. Elapsed: 22.409805ms
Dec 22 15:20:10.643: INFO: Pod "downwardapi-volume-4284199a-9131-4efa-8527-dccd690a615a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030647568s
Dec 22 15:20:13.736: INFO: Pod "downwardapi-volume-4284199a-9131-4efa-8527-dccd690a615a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.123389315s
Dec 22 15:20:15.747: INFO: Pod "downwardapi-volume-4284199a-9131-4efa-8527-dccd690a615a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.134429427s
Dec 22 15:20:17.755: INFO: Pod "downwardapi-volume-4284199a-9131-4efa-8527-dccd690a615a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.142014008s
STEP: Saw pod success
Dec 22 15:20:17.755: INFO: Pod "downwardapi-volume-4284199a-9131-4efa-8527-dccd690a615a" satisfied condition "success or failure"
Dec 22 15:20:17.759: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-4284199a-9131-4efa-8527-dccd690a615a container client-container: 
STEP: delete the pod
Dec 22 15:20:18.201: INFO: Waiting for pod downwardapi-volume-4284199a-9131-4efa-8527-dccd690a615a to disappear
Dec 22 15:20:18.216: INFO: Pod downwardapi-volume-4284199a-9131-4efa-8527-dccd690a615a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 15:20:18.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9961" for this suite.
Dec 22 15:20:24.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 15:20:24.396: INFO: namespace downward-api-9961 deletion completed in 6.158947772s

• [SLOW TEST:15.940 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
Dec 22 15:20:24.396: INFO: Running AfterSuite actions on all nodes
Dec 22 15:20:24.396: INFO: Running AfterSuite actions on node 1
Dec 22 15:20:24.396: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 8652.086 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS