I0212 12:56:07.009209       8 e2e.go:243] Starting e2e run "f0b01e30-7752-4010-bc41-0bee554ca11a" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1581512165 - Will randomize all specs
Will run 215 of 4412 specs

Feb 12 12:56:07.298: INFO: >>> kubeConfig: /root/.kube/config
Feb 12 12:56:07.301: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 12 12:56:07.323: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 12 12:56:07.348: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 12 12:56:07.348: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 12 12:56:07.348: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 12 12:56:07.359: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 12 12:56:07.359: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 12 12:56:07.359: INFO: e2e test version: v1.15.7
Feb 12 12:56:07.361: INFO: kube-apiserver version: v1.15.1
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 12:56:07.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
Feb 12 12:56:07.528: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 12:56:19.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2937" for this suite.
Feb 12 12:56:25.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:56:25.970: INFO: namespace kubelet-test-2937 deletion completed in 6.180795928s

• [SLOW TEST:18.609 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 12:56:25.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 12 12:56:26.108: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 19.357002ms)
Feb 12 12:56:26.113: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.04235ms)
Feb 12 12:56:26.118: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.682522ms)
Feb 12 12:56:26.121: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.332704ms)
Feb 12 12:56:26.124: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.107038ms)
Feb 12 12:56:26.128: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.08284ms)
Feb 12 12:56:26.135: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.535552ms)
Feb 12 12:56:26.232: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 96.481625ms)
Feb 12 12:56:26.237: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.745069ms)
Feb 12 12:56:26.244: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.852896ms)
Feb 12 12:56:26.250: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.697382ms)
Feb 12 12:56:26.256: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.905911ms)
Feb 12 12:56:26.262: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.364651ms)
Feb 12 12:56:26.268: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.291123ms)
Feb 12 12:56:26.273: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.110978ms)
Feb 12 12:56:26.276: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.355827ms)
Feb 12 12:56:26.280: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.437865ms)
Feb 12 12:56:26.283: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.092157ms)
Feb 12 12:56:26.286: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.097645ms)
Feb 12 12:56:26.289: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.028028ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 12:56:26.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4561" for this suite.
Feb 12 12:56:32.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:56:32.464: INFO: namespace proxy-4561 deletion completed in 6.171559031s

• [SLOW TEST:6.494 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
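Each of the twenty proxy requests above is logged with its HTTP status and round-trip latency in a `(STATUS; LATENCY)` suffix, e.g. `(200; 19.357002ms)`. A minimal sketch for pulling those numbers out when scanning such a log; the helper name and regex are mine, not part of the e2e framework:

```python
import re

# Matches the "(STATUS; LATENCY)" suffix the proxy test appends to each
# logged request, e.g. "(200; 19.357002ms)" or "(200; 1.5s)".
LATENCY_RE = re.compile(r"\((\d{3});\s*([\d.]+)(ms|s)\)")

def parse_proxy_latency(line: str):
    """Return (status_code, latency_ms), or None if the line has no suffix."""
    m = LATENCY_RE.search(line)
    if m is None:
        return None
    status, value, unit = int(m.group(1)), float(m.group(2)), m.group(3)
    return status, value * 1000 if unit == "s" else value

print(parse_proxy_latency("alternatives.l... (200; 19.357002ms)"))
```

With the suffixes parsed, it is easy to spot outliers such as the 96.48 ms sample at request (7) above against the ~3-7 ms baseline.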
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 12:56:32.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 12 12:56:32.608: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1f08911b-9ba3-49fe-a0e2-fb60f3c4019f" in namespace "downward-api-2638" to be "success or failure"
Feb 12 12:56:32.619: INFO: Pod "downwardapi-volume-1f08911b-9ba3-49fe-a0e2-fb60f3c4019f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.501246ms
Feb 12 12:56:34.630: INFO: Pod "downwardapi-volume-1f08911b-9ba3-49fe-a0e2-fb60f3c4019f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022197113s
Feb 12 12:56:36.647: INFO: Pod "downwardapi-volume-1f08911b-9ba3-49fe-a0e2-fb60f3c4019f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038459326s
Feb 12 12:56:38.654: INFO: Pod "downwardapi-volume-1f08911b-9ba3-49fe-a0e2-fb60f3c4019f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045425959s
Feb 12 12:56:40.671: INFO: Pod "downwardapi-volume-1f08911b-9ba3-49fe-a0e2-fb60f3c4019f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062594525s
Feb 12 12:56:42.701: INFO: Pod "downwardapi-volume-1f08911b-9ba3-49fe-a0e2-fb60f3c4019f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.093144109s
Feb 12 12:56:44.711: INFO: Pod "downwardapi-volume-1f08911b-9ba3-49fe-a0e2-fb60f3c4019f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.102907908s
Feb 12 12:56:46.730: INFO: Pod "downwardapi-volume-1f08911b-9ba3-49fe-a0e2-fb60f3c4019f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.122269633s
STEP: Saw pod success
Feb 12 12:56:46.731: INFO: Pod "downwardapi-volume-1f08911b-9ba3-49fe-a0e2-fb60f3c4019f" satisfied condition "success or failure"
Feb 12 12:56:46.738: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-1f08911b-9ba3-49fe-a0e2-fb60f3c4019f container client-container: 
STEP: delete the pod
Feb 12 12:56:47.008: INFO: Waiting for pod downwardapi-volume-1f08911b-9ba3-49fe-a0e2-fb60f3c4019f to disappear
Feb 12 12:56:47.019: INFO: Pod downwardapi-volume-1f08911b-9ba3-49fe-a0e2-fb60f3c4019f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 12:56:47.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2638" for this suite.
Feb 12 12:56:53.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:56:53.169: INFO: namespace downward-api-2638 deletion completed in 6.141681664s

• [SLOW TEST:20.703 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
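The "Waiting up to 5m0s for pod ... to be 'success or failure'" lines above are a poll loop: check the pod phase on an interval until it reaches a terminal state or the deadline passes. An illustrative sketch of that pattern; `get_phase` is a stand-in for a real client call, and the e2e framework's own implementation differs in detail:

```python
import time

def wait_for_terminal_phase(get_phase, timeout_s=300, interval_s=2.0):
    """Poll get_phase() until it returns a terminal pod phase or we time out."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval_s)
    raise TimeoutError("pod did not reach a terminal phase in time")

# Simulate the Pending -> Pending -> Succeeded transitions seen in the log.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_terminal_phase(lambda: next(phases), interval_s=0.01))
```

The per-iteration "Elapsed" values in the log are simply the time since the wait began, printed on each poll.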
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 12:56:53.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-bae0ce5b-93bd-403e-a92a-09ef4dcd38e0
STEP: Creating a pod to test consume configMaps
Feb 12 12:56:53.323: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-30ada4c0-eadb-4965-a891-4af03f80dd50" in namespace "projected-1947" to be "success or failure"
Feb 12 12:56:53.377: INFO: Pod "pod-projected-configmaps-30ada4c0-eadb-4965-a891-4af03f80dd50": Phase="Pending", Reason="", readiness=false. Elapsed: 54.189089ms
Feb 12 12:56:55.387: INFO: Pod "pod-projected-configmaps-30ada4c0-eadb-4965-a891-4af03f80dd50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064097578s
Feb 12 12:56:57.402: INFO: Pod "pod-projected-configmaps-30ada4c0-eadb-4965-a891-4af03f80dd50": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079065692s
Feb 12 12:56:59.414: INFO: Pod "pod-projected-configmaps-30ada4c0-eadb-4965-a891-4af03f80dd50": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090711881s
Feb 12 12:57:01.425: INFO: Pod "pod-projected-configmaps-30ada4c0-eadb-4965-a891-4af03f80dd50": Phase="Pending", Reason="", readiness=false. Elapsed: 8.10250725s
Feb 12 12:57:03.441: INFO: Pod "pod-projected-configmaps-30ada4c0-eadb-4965-a891-4af03f80dd50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.118345593s
STEP: Saw pod success
Feb 12 12:57:03.441: INFO: Pod "pod-projected-configmaps-30ada4c0-eadb-4965-a891-4af03f80dd50" satisfied condition "success or failure"
Feb 12 12:57:03.451: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-30ada4c0-eadb-4965-a891-4af03f80dd50 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 12 12:57:03.515: INFO: Waiting for pod pod-projected-configmaps-30ada4c0-eadb-4965-a891-4af03f80dd50 to disappear
Feb 12 12:57:03.570: INFO: Pod pod-projected-configmaps-30ada4c0-eadb-4965-a891-4af03f80dd50 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 12:57:03.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1947" for this suite.
Feb 12 12:57:09.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:57:09.902: INFO: namespace projected-1947 deletion completed in 6.320821694s

• [SLOW TEST:16.733 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
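The projected-configMap test above creates a configMap, mounts it through a `projected` volume with a key remapped to a different path, and has a container read the file back. A rough sketch of that pod shape, expressed as a manifest dict; the names, image, and paths here are illustrative, not the exact ones the e2e framework generates:

```python
# Illustrative pod manifest for a projected configMap volume with a key
# mapped to a custom path (all names/paths are examples, not the test's own).
pod_spec = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-projected-configmaps-example"},
    "spec": {
        "containers": [{
            "name": "projected-configmap-volume-test",
            "image": "busybox",
            "args": ["cat", "/etc/projected-configmap-volume/path/to/data-2"],
            "volumeMounts": [{
                "name": "projected-configmap-volume",
                "mountPath": "/etc/projected-configmap-volume",
                "readOnly": True,
            }],
        }],
        "volumes": [{
            "name": "projected-configmap-volume",
            "projected": {"sources": [{
                "configMap": {
                    "name": "projected-configmap-test-volume-map",
                    # Remap the key "data-2" to a nested path in the volume.
                    "items": [{"key": "data-2", "path": "path/to/data-2"}],
                },
            }]},
        }],
        "restartPolicy": "Never",
    },
}
```

`restartPolicy: Never` is what lets the "success or failure" wait above treat `Succeeded` as terminal.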
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 12:57:09.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-6e76ab71-6e73-4890-af83-b1a4f516851d
STEP: Creating a pod to test consume configMaps
Feb 12 12:57:10.182: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-64ae15f7-bcba-4875-b96f-0645a604369d" in namespace "projected-3709" to be "success or failure"
Feb 12 12:57:10.187: INFO: Pod "pod-projected-configmaps-64ae15f7-bcba-4875-b96f-0645a604369d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.400259ms
Feb 12 12:57:12.602: INFO: Pod "pod-projected-configmaps-64ae15f7-bcba-4875-b96f-0645a604369d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.42020374s
Feb 12 12:57:14.616: INFO: Pod "pod-projected-configmaps-64ae15f7-bcba-4875-b96f-0645a604369d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434690655s
Feb 12 12:57:16.630: INFO: Pod "pod-projected-configmaps-64ae15f7-bcba-4875-b96f-0645a604369d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.448647717s
Feb 12 12:57:18.647: INFO: Pod "pod-projected-configmaps-64ae15f7-bcba-4875-b96f-0645a604369d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.46490558s
Feb 12 12:57:20.665: INFO: Pod "pod-projected-configmaps-64ae15f7-bcba-4875-b96f-0645a604369d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.482834569s
STEP: Saw pod success
Feb 12 12:57:20.665: INFO: Pod "pod-projected-configmaps-64ae15f7-bcba-4875-b96f-0645a604369d" satisfied condition "success or failure"
Feb 12 12:57:20.671: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-64ae15f7-bcba-4875-b96f-0645a604369d container projected-configmap-volume-test: 
STEP: delete the pod
Feb 12 12:57:20.770: INFO: Waiting for pod pod-projected-configmaps-64ae15f7-bcba-4875-b96f-0645a604369d to disappear
Feb 12 12:57:20.776: INFO: Pod pod-projected-configmaps-64ae15f7-bcba-4875-b96f-0645a604369d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 12:57:20.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3709" for this suite.
Feb 12 12:57:26.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:57:27.048: INFO: namespace projected-3709 deletion completed in 6.264966202s

• [SLOW TEST:17.146 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 12:57:27.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 12:57:32.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5667" for this suite.
Feb 12 12:57:38.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:57:38.953: INFO: namespace watch-5667 deletion completed in 6.235972341s

• [SLOW TEST:11.904 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
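The Watchers test above checks an ordering property: a watch started from any resourceVersion of the produced events must deliver the remaining events in the same order as every other watch. A simplified model of that check (each "watch" is just the suffix of the canonical delivery order after its starting resourceVersion; this is not the e2e framework's actual client code):

```python
def watches_agree(canonical, watches):
    """True if every (start_rv, seen_events) watch saw exactly the events
    after start_rv, in the canonical delivery order."""
    for start_rv, seen in watches:
        expected = [e for e in canonical if e > start_rv]
        if seen != expected:
            return False
    return True

events = [101, 102, 103, 104, 105]  # resourceVersions in delivery order
concurrent = [(100, [101, 102, 103, 104, 105]),
              (102, [103, 104, 105]),
              (104, [105])]
print(watches_agree(events, concurrent))
```

Any reordering or gap in one watcher's stream makes the check fail, which is the regression the conformance test guards against.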
SSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 12:57:38.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-3727
I0212 12:57:39.394407       8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3727, replica count: 1
I0212 12:57:40.445122       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 12:57:41.445531       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 12:57:42.445911       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 12:57:43.446646       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 12:57:44.447103       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 12:57:45.447572       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 12:57:46.448110       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 12:57:47.448794       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 12:57:48.449390       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 12:57:49.449983       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 12:57:50.450458       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 12 12:57:50.614: INFO: Created: latency-svc-cfp82
Feb 12 12:57:50.627: INFO: Got endpoints: latency-svc-cfp82 [76.422559ms]
Feb 12 12:57:50.724: INFO: Created: latency-svc-jwlbl
Feb 12 12:57:50.736: INFO: Got endpoints: latency-svc-jwlbl [107.254242ms]
Feb 12 12:57:50.800: INFO: Created: latency-svc-v4lhh
Feb 12 12:57:50.816: INFO: Got endpoints: latency-svc-v4lhh [186.917046ms]
Feb 12 12:57:50.939: INFO: Created: latency-svc-9jhcf
Feb 12 12:57:50.970: INFO: Got endpoints: latency-svc-9jhcf [342.482686ms]
Feb 12 12:57:51.009: INFO: Created: latency-svc-ljgjj
Feb 12 12:57:51.085: INFO: Got endpoints: latency-svc-ljgjj [456.264209ms]
Feb 12 12:57:51.124: INFO: Created: latency-svc-5dnf7
Feb 12 12:57:51.143: INFO: Got endpoints: latency-svc-5dnf7 [514.344421ms]
Feb 12 12:57:51.263: INFO: Created: latency-svc-x4gm7
Feb 12 12:57:51.277: INFO: Got endpoints: latency-svc-x4gm7 [647.638257ms]
Feb 12 12:57:51.344: INFO: Created: latency-svc-q8rvh
Feb 12 12:57:51.356: INFO: Got endpoints: latency-svc-q8rvh [727.133066ms]
Feb 12 12:57:51.443: INFO: Created: latency-svc-zgpm5
Feb 12 12:57:51.449: INFO: Got endpoints: latency-svc-zgpm5 [820.048702ms]
Feb 12 12:57:51.501: INFO: Created: latency-svc-mprkd
Feb 12 12:57:51.591: INFO: Got endpoints: latency-svc-mprkd [961.460171ms]
Feb 12 12:57:51.633: INFO: Created: latency-svc-v26nl
Feb 12 12:57:51.649: INFO: Got endpoints: latency-svc-v26nl [1.019338719s]
Feb 12 12:57:51.743: INFO: Created: latency-svc-wzhv7
Feb 12 12:57:51.768: INFO: Got endpoints: latency-svc-wzhv7 [1.139051934s]
Feb 12 12:57:51.801: INFO: Created: latency-svc-2bhpz
Feb 12 12:57:51.824: INFO: Got endpoints: latency-svc-2bhpz [1.195394752s]
Feb 12 12:57:51.904: INFO: Created: latency-svc-4s7mz
Feb 12 12:57:51.946: INFO: Got endpoints: latency-svc-4s7mz [1.316993419s]
Feb 12 12:57:51.948: INFO: Created: latency-svc-tfpvr
Feb 12 12:57:51.957: INFO: Got endpoints: latency-svc-tfpvr [1.328001728s]
Feb 12 12:57:52.081: INFO: Created: latency-svc-fkmcp
Feb 12 12:57:52.093: INFO: Got endpoints: latency-svc-fkmcp [1.464380844s]
Feb 12 12:57:52.178: INFO: Created: latency-svc-jkh9f
Feb 12 12:57:52.248: INFO: Got endpoints: latency-svc-jkh9f [1.511306473s]
Feb 12 12:57:52.284: INFO: Created: latency-svc-t6vq9
Feb 12 12:57:52.328: INFO: Got endpoints: latency-svc-t6vq9 [1.512350444s]
Feb 12 12:57:52.437: INFO: Created: latency-svc-vg5ct
Feb 12 12:57:52.452: INFO: Got endpoints: latency-svc-vg5ct [1.481611326s]
Feb 12 12:57:52.581: INFO: Created: latency-svc-dndt7
Feb 12 12:57:52.583: INFO: Got endpoints: latency-svc-dndt7 [1.496938012s]
Feb 12 12:57:52.633: INFO: Created: latency-svc-k8576
Feb 12 12:57:52.638: INFO: Got endpoints: latency-svc-k8576 [1.494410083s]
Feb 12 12:57:52.768: INFO: Created: latency-svc-6nkmq
Feb 12 12:57:52.845: INFO: Got endpoints: latency-svc-6nkmq [1.568048194s]
Feb 12 12:57:52.864: INFO: Created: latency-svc-hsflh
Feb 12 12:57:52.946: INFO: Got endpoints: latency-svc-hsflh [1.589368977s]
Feb 12 12:57:53.002: INFO: Created: latency-svc-vgmxd
Feb 12 12:57:53.546: INFO: Got endpoints: latency-svc-vgmxd [2.096629299s]
Feb 12 12:57:53.556: INFO: Created: latency-svc-bqf7g
Feb 12 12:57:53.797: INFO: Got endpoints: latency-svc-bqf7g [2.205813394s]
Feb 12 12:57:53.850: INFO: Created: latency-svc-spqbx
Feb 12 12:57:53.868: INFO: Got endpoints: latency-svc-spqbx [2.218839409s]
Feb 12 12:57:53.996: INFO: Created: latency-svc-b2tvp
Feb 12 12:57:53.996: INFO: Got endpoints: latency-svc-b2tvp [2.227688095s]
Feb 12 12:57:54.061: INFO: Created: latency-svc-jbn5f
Feb 12 12:57:54.103: INFO: Got endpoints: latency-svc-jbn5f [2.278488149s]
Feb 12 12:57:54.135: INFO: Created: latency-svc-mhthb
Feb 12 12:57:54.149: INFO: Got endpoints: latency-svc-mhthb [2.202841052s]
Feb 12 12:57:54.189: INFO: Created: latency-svc-zlqbd
Feb 12 12:57:54.245: INFO: Got endpoints: latency-svc-zlqbd [2.288423433s]
Feb 12 12:57:54.298: INFO: Created: latency-svc-7kfbz
Feb 12 12:57:54.316: INFO: Got endpoints: latency-svc-7kfbz [2.22240629s]
Feb 12 12:57:54.428: INFO: Created: latency-svc-m9tk2
Feb 12 12:57:54.461: INFO: Got endpoints: latency-svc-m9tk2 [2.212579202s]
Feb 12 12:57:54.522: INFO: Created: latency-svc-65djb
Feb 12 12:57:54.617: INFO: Got endpoints: latency-svc-65djb [2.288153156s]
Feb 12 12:57:54.620: INFO: Created: latency-svc-87ztl
Feb 12 12:57:54.634: INFO: Got endpoints: latency-svc-87ztl [2.181247315s]
Feb 12 12:57:54.696: INFO: Created: latency-svc-t59f2
Feb 12 12:57:54.772: INFO: Got endpoints: latency-svc-t59f2 [2.18919345s]
Feb 12 12:57:54.835: INFO: Created: latency-svc-j2b9c
Feb 12 12:57:54.848: INFO: Got endpoints: latency-svc-j2b9c [2.209731612s]
Feb 12 12:57:55.007: INFO: Created: latency-svc-7xfgf
Feb 12 12:57:55.016: INFO: Got endpoints: latency-svc-7xfgf [2.171001984s]
Feb 12 12:57:55.073: INFO: Created: latency-svc-58ggp
Feb 12 12:57:55.176: INFO: Got endpoints: latency-svc-58ggp [2.230116366s]
Feb 12 12:57:55.239: INFO: Created: latency-svc-g9w74
Feb 12 12:57:55.242: INFO: Got endpoints: latency-svc-g9w74 [1.695359518s]
Feb 12 12:57:55.396: INFO: Created: latency-svc-tfk95
Feb 12 12:57:55.440: INFO: Got endpoints: latency-svc-tfk95 [1.64260383s]
Feb 12 12:57:55.652: INFO: Created: latency-svc-fm659
Feb 12 12:57:55.661: INFO: Got endpoints: latency-svc-fm659 [1.792774288s]
Feb 12 12:57:55.727: INFO: Created: latency-svc-k6wj4
Feb 12 12:57:55.853: INFO: Got endpoints: latency-svc-k6wj4 [1.856335496s]
Feb 12 12:57:55.885: INFO: Created: latency-svc-42p9m
Feb 12 12:57:55.893: INFO: Got endpoints: latency-svc-42p9m [1.789365472s]
Feb 12 12:57:56.067: INFO: Created: latency-svc-ltl9j
Feb 12 12:57:56.114: INFO: Got endpoints: latency-svc-ltl9j [1.964521967s]
Feb 12 12:57:56.248: INFO: Created: latency-svc-pb8rk
Feb 12 12:57:56.267: INFO: Got endpoints: latency-svc-pb8rk [2.021521599s]
Feb 12 12:57:56.328: INFO: Created: latency-svc-8xjts
Feb 12 12:57:56.330: INFO: Got endpoints: latency-svc-8xjts [2.013710867s]
Feb 12 12:57:56.581: INFO: Created: latency-svc-fvjsv
Feb 12 12:57:56.607: INFO: Got endpoints: latency-svc-fvjsv [2.145720761s]
Feb 12 12:57:56.777: INFO: Created: latency-svc-xtxdk
Feb 12 12:57:56.805: INFO: Got endpoints: latency-svc-xtxdk [2.188079618s]
Feb 12 12:57:56.952: INFO: Created: latency-svc-p7vmk
Feb 12 12:57:56.964: INFO: Got endpoints: latency-svc-p7vmk [2.329915943s]
Feb 12 12:57:57.019: INFO: Created: latency-svc-ng6l7
Feb 12 12:57:57.026: INFO: Got endpoints: latency-svc-ng6l7 [2.254077547s]
Feb 12 12:57:57.265: INFO: Created: latency-svc-q7md6
Feb 12 12:57:57.277: INFO: Got endpoints: latency-svc-q7md6 [2.429018962s]
Feb 12 12:57:57.471: INFO: Created: latency-svc-xx76k
Feb 12 12:57:57.479: INFO: Got endpoints: latency-svc-xx76k [2.462815707s]
Feb 12 12:57:57.733: INFO: Created: latency-svc-zbd8w
Feb 12 12:57:57.905: INFO: Created: latency-svc-hjdvn
Feb 12 12:57:57.905: INFO: Got endpoints: latency-svc-zbd8w [2.728484083s]
Feb 12 12:57:57.936: INFO: Got endpoints: latency-svc-hjdvn [2.694200612s]
Feb 12 12:57:58.185: INFO: Created: latency-svc-hm84x
Feb 12 12:57:58.280: INFO: Got endpoints: latency-svc-hm84x [2.839974322s]
Feb 12 12:57:58.299: INFO: Created: latency-svc-pw8c4
Feb 12 12:57:58.428: INFO: Got endpoints: latency-svc-pw8c4 [2.767148796s]
Feb 12 12:57:58.522: INFO: Created: latency-svc-8n6jz
Feb 12 12:57:58.595: INFO: Got endpoints: latency-svc-8n6jz [2.741495008s]
Feb 12 12:57:58.666: INFO: Created: latency-svc-kl89g
Feb 12 12:57:58.668: INFO: Got endpoints: latency-svc-kl89g [2.775155909s]
Feb 12 12:57:58.778: INFO: Created: latency-svc-ql79p
Feb 12 12:57:58.790: INFO: Got endpoints: latency-svc-ql79p [2.675390516s]
Feb 12 12:57:58.839: INFO: Created: latency-svc-2q6cj
Feb 12 12:57:58.869: INFO: Got endpoints: latency-svc-2q6cj [2.601577063s]
Feb 12 12:57:59.036: INFO: Created: latency-svc-xn7gq
Feb 12 12:57:59.071: INFO: Got endpoints: latency-svc-xn7gq [2.740924015s]
Feb 12 12:57:59.409: INFO: Created: latency-svc-gpwdh
Feb 12 12:57:59.421: INFO: Got endpoints: latency-svc-gpwdh [2.813629219s]
Feb 12 12:58:00.064: INFO: Created: latency-svc-cv5fw
Feb 12 12:58:00.080: INFO: Got endpoints: latency-svc-cv5fw [3.274944986s]
Feb 12 12:58:00.319: INFO: Created: latency-svc-5jzjq
Feb 12 12:58:00.319: INFO: Got endpoints: latency-svc-5jzjq [3.355460069s]
Feb 12 12:58:00.387: INFO: Created: latency-svc-gl59t
Feb 12 12:58:00.457: INFO: Got endpoints: latency-svc-gl59t [3.430396006s]
Feb 12 12:58:00.523: INFO: Created: latency-svc-tx26g
Feb 12 12:58:00.533: INFO: Got endpoints: latency-svc-tx26g [3.255516931s]
Feb 12 12:58:00.676: INFO: Created: latency-svc-glpvw
Feb 12 12:58:00.685: INFO: Got endpoints: latency-svc-glpvw [3.204965262s]
Feb 12 12:58:00.810: INFO: Created: latency-svc-9s9hw
Feb 12 12:58:00.833: INFO: Got endpoints: latency-svc-9s9hw [299.530133ms]
Feb 12 12:58:00.887: INFO: Created: latency-svc-dnx4c
Feb 12 12:58:00.903: INFO: Got endpoints: latency-svc-dnx4c [2.99777492s]
Feb 12 12:58:01.012: INFO: Created: latency-svc-nvn5b
Feb 12 12:58:01.024: INFO: Got endpoints: latency-svc-nvn5b [3.087634318s]
Feb 12 12:58:01.130: INFO: Created: latency-svc-fcrwv
Feb 12 12:58:01.199: INFO: Got endpoints: latency-svc-fcrwv [2.917675389s]
Feb 12 12:58:01.269: INFO: Created: latency-svc-6q4bc
Feb 12 12:58:01.283: INFO: Got endpoints: latency-svc-6q4bc [2.854930006s]
Feb 12 12:58:01.346: INFO: Created: latency-svc-v4kgb
Feb 12 12:58:01.360: INFO: Got endpoints: latency-svc-v4kgb [2.764499644s]
Feb 12 12:58:01.453: INFO: Created: latency-svc-bxtlf
Feb 12 12:58:01.460: INFO: Got endpoints: latency-svc-bxtlf [2.791959869s]
Feb 12 12:58:01.507: INFO: Created: latency-svc-k2q6z
Feb 12 12:58:01.615: INFO: Created: latency-svc-mz9k2
Feb 12 12:58:01.623: INFO: Got endpoints: latency-svc-k2q6z [2.833411381s]
Feb 12 12:58:01.629: INFO: Got endpoints: latency-svc-mz9k2 [2.759296255s]
Feb 12 12:58:01.713: INFO: Created: latency-svc-nmszn
Feb 12 12:58:01.797: INFO: Got endpoints: latency-svc-nmszn [2.72569022s]
Feb 12 12:58:01.858: INFO: Created: latency-svc-6qrm2
Feb 12 12:58:01.880: INFO: Got endpoints: latency-svc-6qrm2 [2.458917309s]
Feb 12 12:58:01.966: INFO: Created: latency-svc-zmw78
Feb 12 12:58:01.977: INFO: Got endpoints: latency-svc-zmw78 [1.895992348s]
Feb 12 12:58:02.051: INFO: Created: latency-svc-mjd9j
Feb 12 12:58:02.078: INFO: Got endpoints: latency-svc-mjd9j [1.75843177s]
Feb 12 12:58:02.252: INFO: Created: latency-svc-kr86x
Feb 12 12:58:02.268: INFO: Got endpoints: latency-svc-kr86x [1.81108922s]
Feb 12 12:58:02.386: INFO: Created: latency-svc-9gwls
Feb 12 12:58:02.408: INFO: Got endpoints: latency-svc-9gwls [1.722929015s]
Feb 12 12:58:02.606: INFO: Created: latency-svc-lqdbx
Feb 12 12:58:02.632: INFO: Got endpoints: latency-svc-lqdbx [1.798385235s]
Feb 12 12:58:02.664: INFO: Created: latency-svc-t8bg7
Feb 12 12:58:02.687: INFO: Got endpoints: latency-svc-t8bg7 [1.78294941s]
Feb 12 12:58:02.808: INFO: Created: latency-svc-h75rv
Feb 12 12:58:02.880: INFO: Got endpoints: latency-svc-h75rv [1.855740152s]
Feb 12 12:58:02.884: INFO: Created: latency-svc-lwdhv
Feb 12 12:58:03.077: INFO: Got endpoints: latency-svc-lwdhv [1.877426937s]
Feb 12 12:58:03.105: INFO: Created: latency-svc-286dx
Feb 12 12:58:03.126: INFO: Got endpoints: latency-svc-286dx [1.842613153s]
Feb 12 12:58:03.302: INFO: Created: latency-svc-dbnhp
Feb 12 12:58:03.312: INFO: Got endpoints: latency-svc-dbnhp [1.951738718s]
Feb 12 12:58:03.393: INFO: Created: latency-svc-s68tr
Feb 12 12:58:04.008: INFO: Got endpoints: latency-svc-s68tr [2.547024511s]
Feb 12 12:58:04.046: INFO: Created: latency-svc-drmjw
Feb 12 12:58:04.102: INFO: Got endpoints: latency-svc-drmjw [2.478169095s]
Feb 12 12:58:04.322: INFO: Created: latency-svc-fxrqp
Feb 12 12:58:04.357: INFO: Got endpoints: latency-svc-fxrqp [2.727601173s]
Feb 12 12:58:04.534: INFO: Created: latency-svc-v2ldp
Feb 12 12:58:04.562: INFO: Got endpoints: latency-svc-v2ldp [2.764621512s]
Feb 12 12:58:04.636: INFO: Created: latency-svc-7b7kl
Feb 12 12:58:04.762: INFO: Got endpoints: latency-svc-7b7kl [2.881470018s]
Feb 12 12:58:04.869: INFO: Created: latency-svc-8lvkt
Feb 12 12:58:04.999: INFO: Got endpoints: latency-svc-8lvkt [3.022221588s]
Feb 12 12:58:05.058: INFO: Created: latency-svc-lsw2c
Feb 12 12:58:05.073: INFO: Got endpoints: latency-svc-lsw2c [2.995040707s]
Feb 12 12:58:05.294: INFO: Created: latency-svc-wpzzd
Feb 12 12:58:05.309: INFO: Got endpoints: latency-svc-wpzzd [3.040344296s]
Feb 12 12:58:05.597: INFO: Created: latency-svc-pdw96
Feb 12 12:58:05.619: INFO: Got endpoints: latency-svc-pdw96 [3.210685386s]
Feb 12 12:58:05.687: INFO: Created: latency-svc-jq5qb
Feb 12 12:58:05.792: INFO: Got endpoints: latency-svc-jq5qb [3.159066628s]
Feb 12 12:58:05.987: INFO: Created: latency-svc-mfq6q
Feb 12 12:58:06.008: INFO: Got endpoints: latency-svc-mfq6q [3.321554223s]
Feb 12 12:58:06.254: INFO: Created: latency-svc-qwwbz
Feb 12 12:58:06.282: INFO: Got endpoints: latency-svc-qwwbz [3.401337344s]
Feb 12 12:58:06.356: INFO: Created: latency-svc-ndq4r
Feb 12 12:58:06.534: INFO: Got endpoints: latency-svc-ndq4r [3.457271152s]
Feb 12 12:58:06.570: INFO: Created: latency-svc-htbg6
Feb 12 12:58:06.592: INFO: Got endpoints: latency-svc-htbg6 [3.465044051s]
Feb 12 12:58:06.811: INFO: Created: latency-svc-dqwhx
Feb 12 12:58:06.833: INFO: Got endpoints: latency-svc-dqwhx [3.520742862s]
Feb 12 12:58:06.986: INFO: Created: latency-svc-rpjc5
Feb 12 12:58:06.995: INFO: Got endpoints: latency-svc-rpjc5 [2.986766139s]
Feb 12 12:58:07.216: INFO: Created: latency-svc-vntjb
Feb 12 12:58:07.238: INFO: Got endpoints: latency-svc-vntjb [3.135504663s]
Feb 12 12:58:07.307: INFO: Created: latency-svc-2kjds
Feb 12 12:58:07.431: INFO: Got endpoints: latency-svc-2kjds [3.073964422s]
Feb 12 12:58:07.461: INFO: Created: latency-svc-72zh7
Feb 12 12:58:07.474: INFO: Got endpoints: latency-svc-72zh7 [2.911663547s]
Feb 12 12:58:07.724: INFO: Created: latency-svc-6pbbc
Feb 12 12:58:07.766: INFO: Got endpoints: latency-svc-6pbbc [3.002953188s]
Feb 12 12:58:07.802: INFO: Created: latency-svc-hrzw5
Feb 12 12:58:07.813: INFO: Got endpoints: latency-svc-hrzw5 [2.814291259s]
Feb 12 12:58:07.998: INFO: Created: latency-svc-n4x26
Feb 12 12:58:08.007: INFO: Got endpoints: latency-svc-n4x26 [2.934031002s]
Feb 12 12:58:08.220: INFO: Created: latency-svc-7b2qv
Feb 12 12:58:08.301: INFO: Got endpoints: latency-svc-7b2qv [2.992088861s]
Feb 12 12:58:08.306: INFO: Created: latency-svc-v7cc7
Feb 12 12:58:08.395: INFO: Got endpoints: latency-svc-v7cc7 [2.776641248s]
Feb 12 12:58:08.422: INFO: Created: latency-svc-ls88b
Feb 12 12:58:08.439: INFO: Got endpoints: latency-svc-ls88b [2.646892441s]
Feb 12 12:58:08.583: INFO: Created: latency-svc-4dqj9
Feb 12 12:58:08.595: INFO: Got endpoints: latency-svc-4dqj9 [2.586300733s]
Feb 12 12:58:08.661: INFO: Created: latency-svc-wktcp
Feb 12 12:58:08.972: INFO: Got endpoints: latency-svc-wktcp [2.689579942s]
Feb 12 12:58:08.978: INFO: Created: latency-svc-shqq4
Feb 12 12:58:08.996: INFO: Got endpoints: latency-svc-shqq4 [2.461622883s]
Feb 12 12:58:09.297: INFO: Created: latency-svc-r7v8l
Feb 12 12:58:09.312: INFO: Got endpoints: latency-svc-r7v8l [2.719961028s]
Feb 12 12:58:09.543: INFO: Created: latency-svc-b4pzh
Feb 12 12:58:09.562: INFO: Got endpoints: latency-svc-b4pzh [2.728954234s]
Feb 12 12:58:09.642: INFO: Created: latency-svc-hwjjp
Feb 12 12:58:09.730: INFO: Got endpoints: latency-svc-hwjjp [2.735001641s]
Feb 12 12:58:09.786: INFO: Created: latency-svc-tpcbs
Feb 12 12:58:09.797: INFO: Got endpoints: latency-svc-tpcbs [2.558829411s]
Feb 12 12:58:10.028: INFO: Created: latency-svc-7bfxw
Feb 12 12:58:10.035: INFO: Got endpoints: latency-svc-7bfxw [2.603922275s]
Feb 12 12:58:10.111: INFO: Created: latency-svc-xxc5q
Feb 12 12:58:10.121: INFO: Got endpoints: latency-svc-xxc5q [2.647174251s]
Feb 12 12:58:10.292: INFO: Created: latency-svc-5jgh5
Feb 12 12:58:10.300: INFO: Got endpoints: latency-svc-5jgh5 [2.533548097s]
Feb 12 12:58:10.493: INFO: Created: latency-svc-pl999
Feb 12 12:58:10.521: INFO: Got endpoints: latency-svc-pl999 [2.707389997s]
Feb 12 12:58:10.722: INFO: Created: latency-svc-fpxtk
Feb 12 12:58:10.776: INFO: Got endpoints: latency-svc-fpxtk [2.768109515s]
Feb 12 12:58:11.124: INFO: Created: latency-svc-gkxkh
Feb 12 12:58:11.345: INFO: Got endpoints: latency-svc-gkxkh [3.042870723s]
Feb 12 12:58:11.348: INFO: Created: latency-svc-8s4j6
Feb 12 12:58:11.355: INFO: Got endpoints: latency-svc-8s4j6 [2.958836261s]
Feb 12 12:58:11.672: INFO: Created: latency-svc-5b8fl
Feb 12 12:58:11.677: INFO: Got endpoints: latency-svc-5b8fl [3.237931192s]
Feb 12 12:58:11.917: INFO: Created: latency-svc-mr6q8
Feb 12 12:58:11.931: INFO: Got endpoints: latency-svc-mr6q8 [3.335301705s]
Feb 12 12:58:12.256: INFO: Created: latency-svc-f6zxt
Feb 12 12:58:12.265: INFO: Got endpoints: latency-svc-f6zxt [3.292509783s]
Feb 12 12:58:12.540: INFO: Created: latency-svc-l5fw8
Feb 12 12:58:12.540: INFO: Got endpoints: latency-svc-l5fw8 [3.543795418s]
Feb 12 12:58:12.813: INFO: Created: latency-svc-rzhxh
Feb 12 12:58:12.861: INFO: Got endpoints: latency-svc-rzhxh [3.549029261s]
Feb 12 12:58:13.037: INFO: Created: latency-svc-gmkn4
Feb 12 12:58:13.043: INFO: Got endpoints: latency-svc-gmkn4 [3.480604372s]
Feb 12 12:58:13.261: INFO: Created: latency-svc-t67zb
Feb 12 12:58:13.518: INFO: Got endpoints: latency-svc-t67zb [3.787465086s]
Feb 12 12:58:13.531: INFO: Created: latency-svc-j7hhk
Feb 12 12:58:13.533: INFO: Got endpoints: latency-svc-j7hhk [3.736563896s]
Feb 12 12:58:13.594: INFO: Created: latency-svc-nbrdl
Feb 12 12:58:13.598: INFO: Got endpoints: latency-svc-nbrdl [3.562585304s]
Feb 12 12:58:13.757: INFO: Created: latency-svc-2bs7x
Feb 12 12:58:13.770: INFO: Got endpoints: latency-svc-2bs7x [3.64889599s]
Feb 12 12:58:13.982: INFO: Created: latency-svc-zqkdx
Feb 12 12:58:13.990: INFO: Got endpoints: latency-svc-zqkdx [3.689726697s]
Feb 12 12:58:14.180: INFO: Created: latency-svc-qlcmr
Feb 12 12:58:14.187: INFO: Got endpoints: latency-svc-qlcmr [3.665374407s]
Feb 12 12:58:14.377: INFO: Created: latency-svc-ff679
Feb 12 12:58:14.399: INFO: Got endpoints: latency-svc-ff679 [3.621893738s]
Feb 12 12:58:14.462: INFO: Created: latency-svc-47xdc
Feb 12 12:58:14.464: INFO: Got endpoints: latency-svc-47xdc [3.119036029s]
Feb 12 12:58:14.599: INFO: Created: latency-svc-vhplz
Feb 12 12:58:14.745: INFO: Got endpoints: latency-svc-vhplz [3.390387449s]
Feb 12 12:58:14.749: INFO: Created: latency-svc-6rl48
Feb 12 12:58:14.758: INFO: Got endpoints: latency-svc-6rl48 [3.081159223s]
Feb 12 12:58:14.913: INFO: Created: latency-svc-fqv68
Feb 12 12:58:14.930: INFO: Got endpoints: latency-svc-fqv68 [2.998880707s]
Feb 12 12:58:14.997: INFO: Created: latency-svc-njjxx
Feb 12 12:58:15.128: INFO: Got endpoints: latency-svc-njjxx [2.862811311s]
Feb 12 12:58:15.167: INFO: Created: latency-svc-zmwvn
Feb 12 12:58:15.209: INFO: Got endpoints: latency-svc-zmwvn [2.668833045s]
Feb 12 12:58:15.227: INFO: Created: latency-svc-ds8fz
Feb 12 12:58:15.231: INFO: Got endpoints: latency-svc-ds8fz [2.369552608s]
Feb 12 12:58:15.335: INFO: Created: latency-svc-cjfms
Feb 12 12:58:15.339: INFO: Got endpoints: latency-svc-cjfms [2.296326382s]
Feb 12 12:58:15.530: INFO: Created: latency-svc-t4jlx
Feb 12 12:58:15.547: INFO: Got endpoints: latency-svc-t4jlx [2.028056484s]
Feb 12 12:58:15.897: INFO: Created: latency-svc-f5vt4
Feb 12 12:58:15.909: INFO: Got endpoints: latency-svc-f5vt4 [2.376024468s]
Feb 12 12:58:16.143: INFO: Created: latency-svc-sk9k7
Feb 12 12:58:16.160: INFO: Got endpoints: latency-svc-sk9k7 [2.56206083s]
Feb 12 12:58:16.239: INFO: Created: latency-svc-6xvx6
Feb 12 12:58:16.247: INFO: Got endpoints: latency-svc-6xvx6 [2.476955818s]
Feb 12 12:58:16.381: INFO: Created: latency-svc-vwcc8
Feb 12 12:58:16.389: INFO: Got endpoints: latency-svc-vwcc8 [2.399621233s]
Feb 12 12:58:16.586: INFO: Created: latency-svc-2xk44
Feb 12 12:58:16.667: INFO: Created: latency-svc-gtlqs
Feb 12 12:58:16.667: INFO: Got endpoints: latency-svc-2xk44 [2.480074431s]
Feb 12 12:58:16.674: INFO: Got endpoints: latency-svc-gtlqs [2.275557777s]
Feb 12 12:58:16.877: INFO: Created: latency-svc-hv5ls
Feb 12 12:58:16.889: INFO: Got endpoints: latency-svc-hv5ls [2.424706548s]
Feb 12 12:58:17.067: INFO: Created: latency-svc-jf78m
Feb 12 12:58:17.068: INFO: Got endpoints: latency-svc-jf78m [2.322850475s]
Feb 12 12:58:17.147: INFO: Created: latency-svc-4zdcn
Feb 12 12:58:17.213: INFO: Got endpoints: latency-svc-4zdcn [2.454248677s]
Feb 12 12:58:17.297: INFO: Created: latency-svc-5q6wn
Feb 12 12:58:17.384: INFO: Got endpoints: latency-svc-5q6wn [2.453573925s]
Feb 12 12:58:17.415: INFO: Created: latency-svc-9jdcn
Feb 12 12:58:17.437: INFO: Got endpoints: latency-svc-9jdcn [2.309025662s]
Feb 12 12:58:17.598: INFO: Created: latency-svc-w4jf6
Feb 12 12:58:17.632: INFO: Got endpoints: latency-svc-w4jf6 [2.42234501s]
Feb 12 12:58:17.678: INFO: Created: latency-svc-gdkrs
Feb 12 12:58:17.908: INFO: Got endpoints: latency-svc-gdkrs [2.676807258s]
Feb 12 12:58:17.909: INFO: Created: latency-svc-cdj59
Feb 12 12:58:17.921: INFO: Got endpoints: latency-svc-cdj59 [2.582011833s]
Feb 12 12:58:18.101: INFO: Created: latency-svc-hhkvj
Feb 12 12:58:18.124: INFO: Got endpoints: latency-svc-hhkvj [2.577247314s]
Feb 12 12:58:18.282: INFO: Created: latency-svc-pjfn9
Feb 12 12:58:18.287: INFO: Got endpoints: latency-svc-pjfn9 [2.377854777s]
Feb 12 12:58:18.359: INFO: Created: latency-svc-cbrkc
Feb 12 12:58:18.482: INFO: Got endpoints: latency-svc-cbrkc [2.322344312s]
Feb 12 12:58:18.518: INFO: Created: latency-svc-csnqc
Feb 12 12:58:18.526: INFO: Got endpoints: latency-svc-csnqc [2.279104446s]
Feb 12 12:58:18.726: INFO: Created: latency-svc-gxtfh
Feb 12 12:58:18.731: INFO: Got endpoints: latency-svc-gxtfh [2.34093843s]
Feb 12 12:58:18.914: INFO: Created: latency-svc-2wq62
Feb 12 12:58:18.923: INFO: Got endpoints: latency-svc-2wq62 [2.255865303s]
Feb 12 12:58:19.218: INFO: Created: latency-svc-5mzwl
Feb 12 12:58:19.238: INFO: Got endpoints: latency-svc-5mzwl [2.563241133s]
Feb 12 12:58:19.565: INFO: Created: latency-svc-r698j
Feb 12 12:58:19.674: INFO: Got endpoints: latency-svc-r698j [2.785056548s]
Feb 12 12:58:19.713: INFO: Created: latency-svc-w2mcd
Feb 12 12:58:19.720: INFO: Got endpoints: latency-svc-w2mcd [2.651203691s]
Feb 12 12:58:19.890: INFO: Created: latency-svc-sz8v5
Feb 12 12:58:19.907: INFO: Got endpoints: latency-svc-sz8v5 [2.69363955s]
Feb 12 12:58:20.174: INFO: Created: latency-svc-scsws
Feb 12 12:58:20.190: INFO: Got endpoints: latency-svc-scsws [2.806270484s]
Feb 12 12:58:20.447: INFO: Created: latency-svc-d2w5x
Feb 12 12:58:20.457: INFO: Got endpoints: latency-svc-d2w5x [3.020303189s]
Feb 12 12:58:20.595: INFO: Created: latency-svc-kn25q
Feb 12 12:58:20.611: INFO: Got endpoints: latency-svc-kn25q [2.979345486s]
Feb 12 12:58:20.665: INFO: Created: latency-svc-nwqpp
Feb 12 12:58:20.669: INFO: Got endpoints: latency-svc-nwqpp [2.759780107s]
Feb 12 12:58:20.770: INFO: Created: latency-svc-dx4lt
Feb 12 12:58:20.831: INFO: Got endpoints: latency-svc-dx4lt [2.909697766s]
Feb 12 12:58:20.834: INFO: Created: latency-svc-ws6hm
Feb 12 12:58:20.980: INFO: Got endpoints: latency-svc-ws6hm [2.855247985s]
Feb 12 12:58:20.986: INFO: Created: latency-svc-wc4fq
Feb 12 12:58:21.004: INFO: Got endpoints: latency-svc-wc4fq [2.715825832s]
Feb 12 12:58:21.216: INFO: Created: latency-svc-8xxgq
Feb 12 12:58:21.240: INFO: Got endpoints: latency-svc-8xxgq [2.757453985s]
Feb 12 12:58:21.314: INFO: Created: latency-svc-428hz
Feb 12 12:58:21.423: INFO: Got endpoints: latency-svc-428hz [2.89600138s]
Feb 12 12:58:21.516: INFO: Created: latency-svc-tlwxr
Feb 12 12:58:21.517: INFO: Got endpoints: latency-svc-tlwxr [2.786157588s]
Feb 12 12:58:21.682: INFO: Created: latency-svc-rxkgn
Feb 12 12:58:21.691: INFO: Got endpoints: latency-svc-rxkgn [2.768047863s]
Feb 12 12:58:21.751: INFO: Created: latency-svc-898sn
Feb 12 12:58:21.766: INFO: Got endpoints: latency-svc-898sn [2.527770632s]
Feb 12 12:58:21.897: INFO: Created: latency-svc-cvp94
Feb 12 12:58:21.910: INFO: Got endpoints: latency-svc-cvp94 [2.235294214s]
Feb 12 12:58:22.184: INFO: Created: latency-svc-jjrc5
Feb 12 12:58:22.190: INFO: Got endpoints: latency-svc-jjrc5 [2.470468766s]
Feb 12 12:58:22.285: INFO: Created: latency-svc-645nj
Feb 12 12:58:22.380: INFO: Got endpoints: latency-svc-645nj [2.472806348s]
Feb 12 12:58:22.485: INFO: Created: latency-svc-b8jjd
Feb 12 12:58:22.625: INFO: Got endpoints: latency-svc-b8jjd [2.434208209s]
Feb 12 12:58:22.680: INFO: Created: latency-svc-swrmv
Feb 12 12:58:22.681: INFO: Got endpoints: latency-svc-swrmv [2.22246439s]
Feb 12 12:58:22.823: INFO: Created: latency-svc-6kcdz
Feb 12 12:58:22.844: INFO: Got endpoints: latency-svc-6kcdz [2.232318213s]
Feb 12 12:58:22.904: INFO: Created: latency-svc-w57kq
Feb 12 12:58:23.066: INFO: Got endpoints: latency-svc-w57kq [2.397204081s]
Feb 12 12:58:23.159: INFO: Created: latency-svc-w5k5s
Feb 12 12:58:23.249: INFO: Got endpoints: latency-svc-w5k5s [2.41691614s]
Feb 12 12:58:23.309: INFO: Created: latency-svc-qdfs7
Feb 12 12:58:23.322: INFO: Got endpoints: latency-svc-qdfs7 [2.341976307s]
Feb 12 12:58:23.434: INFO: Created: latency-svc-p57hc
Feb 12 12:58:23.607: INFO: Got endpoints: latency-svc-p57hc [2.602628955s]
Feb 12 12:58:23.610: INFO: Created: latency-svc-9lkrp
Feb 12 12:58:23.624: INFO: Got endpoints: latency-svc-9lkrp [2.383486133s]
Feb 12 12:58:23.684: INFO: Created: latency-svc-nvmcn
Feb 12 12:58:23.757: INFO: Got endpoints: latency-svc-nvmcn [2.334440195s]
Feb 12 12:58:23.795: INFO: Created: latency-svc-5qrw4
Feb 12 12:58:23.807: INFO: Got endpoints: latency-svc-5qrw4 [2.290258491s]
Feb 12 12:58:23.959: INFO: Created: latency-svc-gsp4n
Feb 12 12:58:24.013: INFO: Got endpoints: latency-svc-gsp4n [2.321379419s]
Feb 12 12:58:24.014: INFO: Created: latency-svc-7rvn4
Feb 12 12:58:24.023: INFO: Got endpoints: latency-svc-7rvn4 [2.257333723s]
Feb 12 12:58:24.550: INFO: Created: latency-svc-9rjk7
Feb 12 12:58:24.565: INFO: Got endpoints: latency-svc-9rjk7 [2.655492392s]
Feb 12 12:58:24.566: INFO: Latencies: [107.254242ms 186.917046ms 299.530133ms 342.482686ms 456.264209ms 514.344421ms 647.638257ms 727.133066ms 820.048702ms 961.460171ms 1.019338719s 1.139051934s 1.195394752s 1.316993419s 1.328001728s 1.464380844s 1.481611326s 1.494410083s 1.496938012s 1.511306473s 1.512350444s 1.568048194s 1.589368977s 1.64260383s 1.695359518s 1.722929015s 1.75843177s 1.78294941s 1.789365472s 1.792774288s 1.798385235s 1.81108922s 1.842613153s 1.855740152s 1.856335496s 1.877426937s 1.895992348s 1.951738718s 1.964521967s 2.013710867s 2.021521599s 2.028056484s 2.096629299s 2.145720761s 2.171001984s 2.181247315s 2.188079618s 2.18919345s 2.202841052s 2.205813394s 2.209731612s 2.212579202s 2.218839409s 2.22240629s 2.22246439s 2.227688095s 2.230116366s 2.232318213s 2.235294214s 2.254077547s 2.255865303s 2.257333723s 2.275557777s 2.278488149s 2.279104446s 2.288153156s 2.288423433s 2.290258491s 2.296326382s 2.309025662s 2.321379419s 2.322344312s 2.322850475s 2.329915943s 2.334440195s 2.34093843s 2.341976307s 2.369552608s 2.376024468s 2.377854777s 2.383486133s 2.397204081s 2.399621233s 2.41691614s 2.42234501s 2.424706548s 2.429018962s 2.434208209s 2.453573925s 2.454248677s 2.458917309s 2.461622883s 2.462815707s 2.470468766s 2.472806348s 2.476955818s 2.478169095s 2.480074431s 2.527770632s 2.533548097s 2.547024511s 2.558829411s 2.56206083s 2.563241133s 2.577247314s 2.582011833s 2.586300733s 2.601577063s 2.602628955s 2.603922275s 2.646892441s 2.647174251s 2.651203691s 2.655492392s 2.668833045s 2.675390516s 2.676807258s 2.689579942s 2.69363955s 2.694200612s 2.707389997s 2.715825832s 2.719961028s 2.72569022s 2.727601173s 2.728484083s 2.728954234s 2.735001641s 2.740924015s 2.741495008s 2.757453985s 2.759296255s 2.759780107s 2.764499644s 2.764621512s 2.767148796s 2.768047863s 2.768109515s 2.775155909s 2.776641248s 2.785056548s 2.786157588s 2.791959869s 2.806270484s 2.813629219s 2.814291259s 2.833411381s 2.839974322s 2.854930006s 2.855247985s 2.862811311s 2.881470018s 2.89600138s 2.909697766s 2.911663547s 2.917675389s 2.934031002s 2.958836261s 2.979345486s 2.986766139s 2.992088861s 2.995040707s 2.99777492s 2.998880707s 3.002953188s 3.020303189s 3.022221588s 3.040344296s 3.042870723s 3.073964422s 3.081159223s 3.087634318s 3.119036029s 3.135504663s 3.159066628s 3.204965262s 3.210685386s 3.237931192s 3.255516931s 3.274944986s 3.292509783s 3.321554223s 3.335301705s 3.355460069s 3.390387449s 3.401337344s 3.430396006s 3.457271152s 3.465044051s 3.480604372s 3.520742862s 3.543795418s 3.549029261s 3.562585304s 3.621893738s 3.64889599s 3.665374407s 3.689726697s 3.736563896s 3.787465086s]
Feb 12 12:58:24.566: INFO: 50 %ile: 2.547024511s
Feb 12 12:58:24.566: INFO: 90 %ile: 3.292509783s
Feb 12 12:58:24.566: INFO: 99 %ile: 3.736563896s
Feb 12 12:58:24.566: INFO: Total sample count: 200
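The summary above reports the 50/90/99 %ile over the 200 collected service-latency samples. A minimal sketch of one common way to compute such a percentile (nearest-rank over the sorted samples); this is an illustration, not necessarily the exact convention the e2e framework uses:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: smallest value with at least p% of samples <= it."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))  # 1-based rank into the sorted list
    return ordered[max(rank - 1, 0)]

# illustrative subset of the latencies above, in seconds
latencies = [0.107, 0.186, 0.299, 2.547, 3.292, 3.736]
print(percentile(latencies, 50), percentile(latencies, 90))
```

With the full 200-sample list this reproduces the shape of the log output: the median sits near 2.5s and the 99th percentile near the slowest samples.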
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 12:58:24.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-3727" for this suite.
Feb 12 12:59:10.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:59:10.690: INFO: namespace svc-latency-3727 deletion completed in 46.112706927s

• [SLOW TEST:91.737 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 12:59:10.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-a4db2acc-8a1e-4260-a203-e527c38e7589
STEP: Creating a pod to test consume configMaps
Feb 12 12:59:10.918: INFO: Waiting up to 5m0s for pod "pod-configmaps-e0e7332c-56b1-4da7-9858-c78c8425eed3" in namespace "configmap-7574" to be "success or failure"
Feb 12 12:59:11.072: INFO: Pod "pod-configmaps-e0e7332c-56b1-4da7-9858-c78c8425eed3": Phase="Pending", Reason="", readiness=false. Elapsed: 153.14907ms
Feb 12 12:59:13.079: INFO: Pod "pod-configmaps-e0e7332c-56b1-4da7-9858-c78c8425eed3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1602465s
Feb 12 12:59:15.086: INFO: Pod "pod-configmaps-e0e7332c-56b1-4da7-9858-c78c8425eed3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.167120739s
Feb 12 12:59:17.094: INFO: Pod "pod-configmaps-e0e7332c-56b1-4da7-9858-c78c8425eed3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.175444404s
Feb 12 12:59:19.102: INFO: Pod "pod-configmaps-e0e7332c-56b1-4da7-9858-c78c8425eed3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.183880953s
Feb 12 12:59:21.184: INFO: Pod "pod-configmaps-e0e7332c-56b1-4da7-9858-c78c8425eed3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.265795336s
Feb 12 12:59:23.190: INFO: Pod "pod-configmaps-e0e7332c-56b1-4da7-9858-c78c8425eed3": Phase="Pending", Reason="", readiness=false. Elapsed: 12.271591021s
Feb 12 12:59:25.201: INFO: Pod "pod-configmaps-e0e7332c-56b1-4da7-9858-c78c8425eed3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.281992596s
STEP: Saw pod success
Feb 12 12:59:25.201: INFO: Pod "pod-configmaps-e0e7332c-56b1-4da7-9858-c78c8425eed3" satisfied condition "success or failure"
Feb 12 12:59:25.203: INFO: Trying to get logs from node iruya-node pod pod-configmaps-e0e7332c-56b1-4da7-9858-c78c8425eed3 container configmap-volume-test: 
STEP: delete the pod
Feb 12 12:59:25.292: INFO: Waiting for pod pod-configmaps-e0e7332c-56b1-4da7-9858-c78c8425eed3 to disappear
Feb 12 12:59:25.296: INFO: Pod pod-configmaps-e0e7332c-56b1-4da7-9858-c78c8425eed3 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 12:59:25.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7574" for this suite.
Feb 12 12:59:32.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:59:32.221: INFO: namespace configmap-7574 deletion completed in 6.91916899s

• [SLOW TEST:21.531 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
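The ConfigMap test above polls the pod phase roughly every two seconds ("Waiting up to 5m0s for pod … to be 'success or failure'") until it reaches a terminal phase or the timeout expires. A minimal sketch of such a wait loop; the function and parameter names here are hypothetical, not the framework's actual API:

```python
import time

def wait_for_pod(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until the pod reaches a terminal phase or timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):  # "success or failure"
            return phase
        time.sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase in time")

# usage: simulate a pod that is Pending twice, then Succeeded
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_pod(lambda: next(phases), timeout=10.0, interval=0.01))
```

Each iteration corresponds to one `Phase="Pending" … Elapsed: …` line in the log; the loop exits on the first `Phase="Succeeded"` observation.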
SSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 12:59:32.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 12 12:59:32.470: INFO: Waiting up to 5m0s for pod "downward-api-cef0f6e3-ea34-44bd-825c-3f6230d0c9f0" in namespace "downward-api-2335" to be "success or failure"
Feb 12 12:59:32.624: INFO: Pod "downward-api-cef0f6e3-ea34-44bd-825c-3f6230d0c9f0": Phase="Pending", Reason="", readiness=false. Elapsed: 153.674128ms
Feb 12 12:59:34.641: INFO: Pod "downward-api-cef0f6e3-ea34-44bd-825c-3f6230d0c9f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.170459702s
Feb 12 12:59:36.664: INFO: Pod "downward-api-cef0f6e3-ea34-44bd-825c-3f6230d0c9f0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.193919456s
Feb 12 12:59:38.673: INFO: Pod "downward-api-cef0f6e3-ea34-44bd-825c-3f6230d0c9f0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.203006753s
Feb 12 12:59:40.688: INFO: Pod "downward-api-cef0f6e3-ea34-44bd-825c-3f6230d0c9f0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.217790865s
Feb 12 12:59:42.710: INFO: Pod "downward-api-cef0f6e3-ea34-44bd-825c-3f6230d0c9f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.239857383s
STEP: Saw pod success
Feb 12 12:59:42.710: INFO: Pod "downward-api-cef0f6e3-ea34-44bd-825c-3f6230d0c9f0" satisfied condition "success or failure"
Feb 12 12:59:42.724: INFO: Trying to get logs from node iruya-node pod downward-api-cef0f6e3-ea34-44bd-825c-3f6230d0c9f0 container dapi-container: 
STEP: delete the pod
Feb 12 12:59:42.821: INFO: Waiting for pod downward-api-cef0f6e3-ea34-44bd-825c-3f6230d0c9f0 to disappear
Feb 12 12:59:42.826: INFO: Pod downward-api-cef0f6e3-ea34-44bd-825c-3f6230d0c9f0 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 12:59:42.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2335" for this suite.
Feb 12 12:59:48.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:59:48.994: INFO: namespace downward-api-2335 deletion completed in 6.162256138s

• [SLOW TEST:16.772 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
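The Downward API test above injects `limits.cpu/memory` and `requests.cpu/memory` into the container as env vars; those values arrive as Kubernetes quantity strings such as `250m` or `64Mi`. A minimal sketch of parsing the two most common forms (hypothetical helpers, far simpler than the real apimachinery `resource.Quantity` parser, which also handles decimal SI suffixes):

```python
def parse_cpu(q):
    """CPU quantity to cores: '250m' -> 0.25, '2' -> 2.0."""
    return float(q[:-1]) / 1000 if q.endswith("m") else float(q)

def parse_memory(q):
    """Memory quantity to bytes for the common binary suffixes."""
    units = {"Ki": 1024, "Mi": 1024 ** 2, "Gi": 1024 ** 3}
    for suffix, factor in units.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * factor
    return int(q)  # plain integer means bytes

print(parse_cpu("250m"), parse_memory("64Mi"))
```

Inside the test pod these strings would typically be read with `os.environ`, then parsed like this before comparing against the expected limit.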
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 12:59:48.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Feb 12 12:59:49.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5915'
Feb 12 12:59:51.255: INFO: stderr: ""
Feb 12 12:59:51.255: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 12 12:59:51.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5915'
Feb 12 12:59:51.455: INFO: stderr: ""
Feb 12 12:59:51.455: INFO: stdout: "update-demo-nautilus-48bk2 update-demo-nautilus-kbqls "
Feb 12 12:59:51.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-48bk2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5915'
Feb 12 12:59:53.103: INFO: stderr: ""
Feb 12 12:59:53.103: INFO: stdout: ""
Feb 12 12:59:53.103: INFO: update-demo-nautilus-48bk2 is created but not running
Feb 12 12:59:58.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5915'
Feb 12 12:59:58.311: INFO: stderr: ""
Feb 12 12:59:58.311: INFO: stdout: "update-demo-nautilus-48bk2 update-demo-nautilus-kbqls "
Feb 12 12:59:58.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-48bk2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5915'
Feb 12 12:59:58.608: INFO: stderr: ""
Feb 12 12:59:58.608: INFO: stdout: ""
Feb 12 12:59:58.608: INFO: update-demo-nautilus-48bk2 is created but not running
Feb 12 13:00:03.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5915'
Feb 12 13:00:03.823: INFO: stderr: ""
Feb 12 13:00:03.823: INFO: stdout: "update-demo-nautilus-48bk2 update-demo-nautilus-kbqls "
Feb 12 13:00:03.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-48bk2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5915'
Feb 12 13:00:04.328: INFO: stderr: ""
Feb 12 13:00:04.329: INFO: stdout: ""
Feb 12 13:00:04.329: INFO: update-demo-nautilus-48bk2 is created but not running
Feb 12 13:00:09.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5915'
Feb 12 13:00:09.521: INFO: stderr: ""
Feb 12 13:00:09.521: INFO: stdout: "update-demo-nautilus-48bk2 update-demo-nautilus-kbqls "
Feb 12 13:00:09.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-48bk2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5915'
Feb 12 13:00:09.671: INFO: stderr: ""
Feb 12 13:00:09.672: INFO: stdout: "true"
Feb 12 13:00:09.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-48bk2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5915'
Feb 12 13:00:09.770: INFO: stderr: ""
Feb 12 13:00:09.770: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 12 13:00:09.771: INFO: validating pod update-demo-nautilus-48bk2
Feb 12 13:00:09.824: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 12 13:00:09.824: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 12 13:00:09.824: INFO: update-demo-nautilus-48bk2 is verified up and running
Feb 12 13:00:09.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kbqls -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5915'
Feb 12 13:00:09.908: INFO: stderr: ""
Feb 12 13:00:09.908: INFO: stdout: "true"
Feb 12 13:00:09.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kbqls -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5915'
Feb 12 13:00:10.043: INFO: stderr: ""
Feb 12 13:00:10.043: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 12 13:00:10.043: INFO: validating pod update-demo-nautilus-kbqls
Feb 12 13:00:10.077: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 12 13:00:10.077: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 12 13:00:10.077: INFO: update-demo-nautilus-kbqls is verified up and running
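The repeated "created but not running" lines above come from a poll loop: the suite re-runs the kubectl go-template roughly every 5 seconds and treats an empty stdout as "not running yet" and the literal string "true" as success. A minimal sketch of that retry logic, assuming a `check` callable that stands in for the kubectl invocation (the helper name and signature are illustrative, not the suite's actual code):

```python
import time

def wait_until_running(check, timeout=300.0, interval=5.0, sleep=time.sleep):
    """Poll `check()` until it returns "true" (container reported running).

    `check` stands in for the kubectl go-template call seen in the log;
    it returns "" while the pod is Pending and "true" once it is running.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check() == "true":
            return True
        sleep(interval)
    return False

# Simulate a pod that starts running on the fourth poll, as in the log above.
outputs = iter(["", "", "", "true"])
result = wait_until_running(lambda: next(outputs),
                            timeout=60, interval=0, sleep=lambda s: None)
```

The injected `sleep` keeps the sketch deterministic; the real suite simply sleeps between kubectl invocations.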
STEP: using delete to clean up resources
Feb 12 13:00:10.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5915'
Feb 12 13:00:10.217: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 12 13:00:10.217: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 12 13:00:10.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5915'
Feb 12 13:00:10.351: INFO: stderr: "No resources found.\n"
Feb 12 13:00:10.351: INFO: stdout: ""
Feb 12 13:00:10.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5915 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 12 13:00:10.494: INFO: stderr: ""
Feb 12 13:00:10.494: INFO: stdout: ""
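The final cleanup check at 13:00:10 uses a go-template that prints only pod names lacking a `metadata.deletionTimestamp`; the empty stdout above therefore means every remaining `update-demo` pod is already being torn down. A sketch of that filter under the assumption that pods are plain dicts shaped like the Pod API object:

```python
def pods_not_terminating(pods):
    """Mirror the go-template in the log: keep names of pods that do not
    yet carry a metadata.deletionTimestamp."""
    return [
        p["metadata"]["name"]
        for p in pods
        if not p["metadata"].get("deletionTimestamp")
    ]

# Both replicas already have a deletionTimestamp after the force delete.
pods = [
    {"metadata": {"name": "update-demo-nautilus-48bk2",
                  "deletionTimestamp": "2020-02-12T13:00:10Z"}},
    {"metadata": {"name": "update-demo-nautilus-kbqls",
                  "deletionTimestamp": "2020-02-12T13:00:10Z"}},
]
remaining = pods_not_terminating(pods)
```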
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:00:10.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5915" for this suite.
Feb 12 13:00:34.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:00:34.766: INFO: namespace kubectl-5915 deletion completed in 24.262399439s

• [SLOW TEST:45.771 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:00:34.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 12 13:00:35.056: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb 12 13:00:35.174: INFO: Number of nodes with available pods: 0
Feb 12 13:00:35.174: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb 12 13:00:35.636: INFO: Number of nodes with available pods: 0
Feb 12 13:00:35.636: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:00:36.690: INFO: Number of nodes with available pods: 0
Feb 12 13:00:36.691: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:00:37.644: INFO: Number of nodes with available pods: 0
Feb 12 13:00:37.644: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:00:38.655: INFO: Number of nodes with available pods: 0
Feb 12 13:00:38.655: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:00:39.645: INFO: Number of nodes with available pods: 0
Feb 12 13:00:39.645: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:00:40.645: INFO: Number of nodes with available pods: 0
Feb 12 13:00:40.645: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:00:41.646: INFO: Number of nodes with available pods: 0
Feb 12 13:00:41.646: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:00:43.333: INFO: Number of nodes with available pods: 0
Feb 12 13:00:43.333: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:00:43.645: INFO: Number of nodes with available pods: 0
Feb 12 13:00:43.645: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:00:44.926: INFO: Number of nodes with available pods: 0
Feb 12 13:00:44.926: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:00:45.645: INFO: Number of nodes with available pods: 0
Feb 12 13:00:45.645: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:00:46.681: INFO: Number of nodes with available pods: 1
Feb 12 13:00:46.681: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb 12 13:00:46.740: INFO: Number of nodes with available pods: 1
Feb 12 13:00:46.740: INFO: Number of running nodes: 0, number of available pods: 1
Feb 12 13:00:47.755: INFO: Number of nodes with available pods: 0
Feb 12 13:00:47.755: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb 12 13:00:47.779: INFO: Number of nodes with available pods: 0
Feb 12 13:00:47.779: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:00:48.793: INFO: Number of nodes with available pods: 0
Feb 12 13:00:48.793: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:00:49.789: INFO: Number of nodes with available pods: 0
Feb 12 13:00:49.789: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:00:50.796: INFO: Number of nodes with available pods: 0
Feb 12 13:00:50.796: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:00:51.788: INFO: Number of nodes with available pods: 0
Feb 12 13:00:51.788: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:00:52.790: INFO: Number of nodes with available pods: 0
Feb 12 13:00:52.790: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:00:53.787: INFO: Number of nodes with available pods: 0
Feb 12 13:00:53.787: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:00:54.789: INFO: Number of nodes with available pods: 0
Feb 12 13:00:54.789: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:00:55.790: INFO: Number of nodes with available pods: 0
Feb 12 13:00:55.791: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:00:56.795: INFO: Number of nodes with available pods: 0
Feb 12 13:00:56.796: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:00:57.789: INFO: Number of nodes with available pods: 0
Feb 12 13:00:57.789: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:00:58.789: INFO: Number of nodes with available pods: 0
Feb 12 13:00:58.789: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:00:59.786: INFO: Number of nodes with available pods: 0
Feb 12 13:00:59.786: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:01:00.788: INFO: Number of nodes with available pods: 0
Feb 12 13:01:00.788: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:01:01.814: INFO: Number of nodes with available pods: 0
Feb 12 13:01:01.814: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:01:02.798: INFO: Number of nodes with available pods: 0
Feb 12 13:01:02.798: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:01:03.803: INFO: Number of nodes with available pods: 0
Feb 12 13:01:03.803: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:01:04.790: INFO: Number of nodes with available pods: 1
Feb 12 13:01:04.790: INFO: Number of running nodes: 1, number of available pods: 1
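The "Number of nodes with available pods" lines above are derived by grouping daemon pods by node and counting nodes with at least one Ready pod; the suite keeps polling until that count matches the number of labeled nodes. A sketch of that bookkeeping, assuming simplified pod records (field names follow the Pod API, the helper itself is illustrative):

```python
from collections import defaultdict

def nodes_with_available_pods(pods):
    """Group daemon pods by their node and count nodes that have at
    least one pod whose Ready condition is True."""
    by_node = defaultdict(list)
    for p in pods:
        by_node[p["nodeName"]].append(p)
    return sum(
        1 for node_pods in by_node.values()
        if any(p["ready"] for p in node_pods)
    )

# One node, where a new daemon pod has just become Ready, as at 13:01:04.
pods = [
    {"nodeName": "iruya-node", "ready": False},  # old pod still terminating
    {"nodeName": "iruya-node", "ready": True},   # replacement is up
]
count = nodes_with_available_pods(pods)
```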
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-601, will wait for the garbage collector to delete the pods
Feb 12 13:01:04.879: INFO: Deleting DaemonSet.extensions daemon-set took: 19.890369ms
Feb 12 13:01:05.180: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.406122ms
Feb 12 13:01:11.089: INFO: Number of nodes with available pods: 0
Feb 12 13:01:11.089: INFO: Number of running nodes: 0, number of available pods: 0
Feb 12 13:01:11.097: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-601/daemonsets","resourceVersion":"24069853"},"items":null}

Feb 12 13:01:11.101: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-601/pods","resourceVersion":"24069853"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:01:11.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-601" for this suite.
Feb 12 13:01:17.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:01:17.298: INFO: namespace daemonsets-601 deletion completed in 6.146764546s

• [SLOW TEST:42.532 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:01:17.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 12 13:01:17.394: INFO: Waiting up to 5m0s for pod "downwardapi-volume-05cbceef-781b-4f2e-af85-7aeac6a94e73" in namespace "projected-5303" to be "success or failure"
Feb 12 13:01:17.418: INFO: Pod "downwardapi-volume-05cbceef-781b-4f2e-af85-7aeac6a94e73": Phase="Pending", Reason="", readiness=false. Elapsed: 23.238491ms
Feb 12 13:01:19.426: INFO: Pod "downwardapi-volume-05cbceef-781b-4f2e-af85-7aeac6a94e73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031832676s
Feb 12 13:01:21.434: INFO: Pod "downwardapi-volume-05cbceef-781b-4f2e-af85-7aeac6a94e73": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039756228s
Feb 12 13:01:23.448: INFO: Pod "downwardapi-volume-05cbceef-781b-4f2e-af85-7aeac6a94e73": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053120001s
Feb 12 13:01:25.469: INFO: Pod "downwardapi-volume-05cbceef-781b-4f2e-af85-7aeac6a94e73": Phase="Running", Reason="", readiness=true. Elapsed: 8.074216942s
Feb 12 13:01:27.477: INFO: Pod "downwardapi-volume-05cbceef-781b-4f2e-af85-7aeac6a94e73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.082423922s
STEP: Saw pod success
Feb 12 13:01:27.477: INFO: Pod "downwardapi-volume-05cbceef-781b-4f2e-af85-7aeac6a94e73" satisfied condition "success or failure"
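This test verifies that when a container sets no memory limit, the downward API volume reports the node's allocatable memory as the default. The fallback can be sketched as a one-line rule (values in bytes; the helper is illustrative, not the framework's code):

```python
def effective_memory_limit(container_limit, node_allocatable):
    """When a container declares no memory limit, the downward API
    reports the node's allocatable memory instead."""
    return container_limit if container_limit is not None else node_allocatable

# A container without a limit on a node with 4 GiB allocatable memory.
limit = effective_memory_limit(None, 4 * 1024**3)
```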
Feb 12 13:01:27.481: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-05cbceef-781b-4f2e-af85-7aeac6a94e73 container client-container: 
STEP: delete the pod
Feb 12 13:01:27.604: INFO: Waiting for pod downwardapi-volume-05cbceef-781b-4f2e-af85-7aeac6a94e73 to disappear
Feb 12 13:01:27.614: INFO: Pod downwardapi-volume-05cbceef-781b-4f2e-af85-7aeac6a94e73 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:01:27.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5303" for this suite.
Feb 12 13:01:33.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:01:33.774: INFO: namespace projected-5303 deletion completed in 6.153858904s

• [SLOW TEST:16.475 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:01:33.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-adc80c7c-4c4e-4fd8-babd-335bf65df458
STEP: Creating a pod to test consume secrets
Feb 12 13:01:34.144: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f5b6ec61-d1e6-46fc-992e-5e3fbab95f2c" in namespace "projected-9597" to be "success or failure"
Feb 12 13:01:34.156: INFO: Pod "pod-projected-secrets-f5b6ec61-d1e6-46fc-992e-5e3fbab95f2c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.320418ms
Feb 12 13:01:36.164: INFO: Pod "pod-projected-secrets-f5b6ec61-d1e6-46fc-992e-5e3fbab95f2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020087657s
Feb 12 13:01:38.178: INFO: Pod "pod-projected-secrets-f5b6ec61-d1e6-46fc-992e-5e3fbab95f2c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03341153s
Feb 12 13:01:40.188: INFO: Pod "pod-projected-secrets-f5b6ec61-d1e6-46fc-992e-5e3fbab95f2c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043645675s
Feb 12 13:01:42.201: INFO: Pod "pod-projected-secrets-f5b6ec61-d1e6-46fc-992e-5e3fbab95f2c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057284793s
Feb 12 13:01:44.209: INFO: Pod "pod-projected-secrets-f5b6ec61-d1e6-46fc-992e-5e3fbab95f2c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.065195231s
Feb 12 13:01:46.216: INFO: Pod "pod-projected-secrets-f5b6ec61-d1e6-46fc-992e-5e3fbab95f2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.072048485s
STEP: Saw pod success
Feb 12 13:01:46.216: INFO: Pod "pod-projected-secrets-f5b6ec61-d1e6-46fc-992e-5e3fbab95f2c" satisfied condition "success or failure"
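The "non-root with defaultMode and fsGroup" variant mounts the projected secret with an explicit file mode and checks the resulting permissions from inside the container. As a small aid, this is how a volume `defaultMode` such as 0o440 renders as the permission string such a check would compare against (the helper is illustrative):

```python
import stat

def mode_string(default_mode):
    """Render a volume defaultMode (e.g. 0o440) the way `ls -l` would,
    minus the leading file-type character."""
    return stat.filemode(default_mode)[1:]

perms = mode_string(0o440)
```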
Feb 12 13:01:46.219: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-f5b6ec61-d1e6-46fc-992e-5e3fbab95f2c container projected-secret-volume-test: 
STEP: delete the pod
Feb 12 13:01:46.438: INFO: Waiting for pod pod-projected-secrets-f5b6ec61-d1e6-46fc-992e-5e3fbab95f2c to disappear
Feb 12 13:01:46.448: INFO: Pod pod-projected-secrets-f5b6ec61-d1e6-46fc-992e-5e3fbab95f2c no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:01:46.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9597" for this suite.
Feb 12 13:01:52.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:01:52.682: INFO: namespace projected-9597 deletion completed in 6.209553231s

• [SLOW TEST:18.908 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:01:52.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-bb33293e-ce13-41b9-afa2-9a3fb3551c25
STEP: Creating secret with name s-test-opt-upd-6fb24702-74a7-4310-ba9a-6d9f7b70b5ef
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-bb33293e-ce13-41b9-afa2-9a3fb3551c25
STEP: Updating secret s-test-opt-upd-6fb24702-74a7-4310-ba9a-6d9f7b70b5ef
STEP: Creating secret with name s-test-opt-create-ef447de3-0c0a-4767-b55a-a91151fd0b98
STEP: waiting to observe update in volume
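The steps above delete one optional secret, update a second, and create a third, then wait for the mounted volume to converge: files for deleted secrets disappear, updated files change content, and new files appear. A sketch of that reconciliation under the assumption that the volume is modeled as a name-to-data dict (the helper is illustrative, not kubelet code):

```python
def reconcile(volume, cluster_secrets):
    """Converge an optional secret volume on the cluster state."""
    for name in list(volume):
        if name not in cluster_secrets:       # secret was deleted
            del volume[name]
    for name, data in cluster_secrets.items():  # updated or newly created
        volume[name] = data
    return volume

before = {"s-test-opt-del": "value-1", "s-test-opt-upd": "value-1"}
cluster = {"s-test-opt-upd": "value-2", "s-test-opt-create": "value-1"}
volume = reconcile(before, cluster)
```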
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:03:28.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4686" for this suite.
Feb 12 13:03:50.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:03:50.962: INFO: namespace secrets-4686 deletion completed in 22.130480336s

• [SLOW TEST:118.279 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:03:50.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-9375
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 12 13:03:51.032: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 12 13:04:33.337: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9375 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 13:04:33.337: INFO: >>> kubeConfig: /root/.kube/config
I0212 13:04:33.430007       8 log.go:172] (0xc0009da8f0) (0xc0019e2780) Create stream
I0212 13:04:33.430115       8 log.go:172] (0xc0009da8f0) (0xc0019e2780) Stream added, broadcasting: 1
I0212 13:04:33.440751       8 log.go:172] (0xc0009da8f0) Reply frame received for 1
I0212 13:04:33.440823       8 log.go:172] (0xc0009da8f0) (0xc001c74aa0) Create stream
I0212 13:04:33.440837       8 log.go:172] (0xc0009da8f0) (0xc001c74aa0) Stream added, broadcasting: 3
I0212 13:04:33.445411       8 log.go:172] (0xc0009da8f0) Reply frame received for 3
I0212 13:04:33.445548       8 log.go:172] (0xc0009da8f0) (0xc000a3c140) Create stream
I0212 13:04:33.445582       8 log.go:172] (0xc0009da8f0) (0xc000a3c140) Stream added, broadcasting: 5
I0212 13:04:33.449335       8 log.go:172] (0xc0009da8f0) Reply frame received for 5
I0212 13:04:34.624707       8 log.go:172] (0xc0009da8f0) Data frame received for 3
I0212 13:04:34.624837       8 log.go:172] (0xc001c74aa0) (3) Data frame handling
I0212 13:04:34.624865       8 log.go:172] (0xc001c74aa0) (3) Data frame sent
I0212 13:04:34.782184       8 log.go:172] (0xc0009da8f0) (0xc001c74aa0) Stream removed, broadcasting: 3
I0212 13:04:34.782500       8 log.go:172] (0xc0009da8f0) (0xc000a3c140) Stream removed, broadcasting: 5
I0212 13:04:34.782601       8 log.go:172] (0xc0009da8f0) Data frame received for 1
I0212 13:04:34.782633       8 log.go:172] (0xc0019e2780) (1) Data frame handling
I0212 13:04:34.782694       8 log.go:172] (0xc0019e2780) (1) Data frame sent
I0212 13:04:34.782729       8 log.go:172] (0xc0009da8f0) (0xc0019e2780) Stream removed, broadcasting: 1
I0212 13:04:34.782772       8 log.go:172] (0xc0009da8f0) Go away received
I0212 13:04:34.783040       8 log.go:172] (0xc0009da8f0) (0xc0019e2780) Stream removed, broadcasting: 1
I0212 13:04:34.783073       8 log.go:172] (0xc0009da8f0) (0xc001c74aa0) Stream removed, broadcasting: 3
I0212 13:04:34.783096       8 log.go:172] (0xc0009da8f0) (0xc000a3c140) Stream removed, broadcasting: 5
Feb 12 13:04:34.783: INFO: Found all expected endpoints: [netserver-0]
Feb 12 13:04:34.792: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9375 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 13:04:34.792: INFO: >>> kubeConfig: /root/.kube/config
I0212 13:04:34.847646       8 log.go:172] (0xc0000edc30) (0xc0025f4780) Create stream
I0212 13:04:34.847790       8 log.go:172] (0xc0000edc30) (0xc0025f4780) Stream added, broadcasting: 1
I0212 13:04:34.856430       8 log.go:172] (0xc0000edc30) Reply frame received for 1
I0212 13:04:34.856461       8 log.go:172] (0xc0000edc30) (0xc001c74b40) Create stream
I0212 13:04:34.856468       8 log.go:172] (0xc0000edc30) (0xc001c74b40) Stream added, broadcasting: 3
I0212 13:04:34.857613       8 log.go:172] (0xc0000edc30) Reply frame received for 3
I0212 13:04:34.857635       8 log.go:172] (0xc0000edc30) (0xc0019e2960) Create stream
I0212 13:04:34.857644       8 log.go:172] (0xc0000edc30) (0xc0019e2960) Stream added, broadcasting: 5
I0212 13:04:34.859164       8 log.go:172] (0xc0000edc30) Reply frame received for 5
I0212 13:04:35.974286       8 log.go:172] (0xc0000edc30) Data frame received for 3
I0212 13:04:35.974402       8 log.go:172] (0xc001c74b40) (3) Data frame handling
I0212 13:04:35.974422       8 log.go:172] (0xc001c74b40) (3) Data frame sent
I0212 13:04:36.148756       8 log.go:172] (0xc0000edc30) Data frame received for 1
I0212 13:04:36.148924       8 log.go:172] (0xc0025f4780) (1) Data frame handling
I0212 13:04:36.148967       8 log.go:172] (0xc0025f4780) (1) Data frame sent
I0212 13:04:36.149625       8 log.go:172] (0xc0000edc30) (0xc0025f4780) Stream removed, broadcasting: 1
I0212 13:04:36.150175       8 log.go:172] (0xc0000edc30) (0xc001c74b40) Stream removed, broadcasting: 3
I0212 13:04:36.150258       8 log.go:172] (0xc0000edc30) (0xc0019e2960) Stream removed, broadcasting: 5
I0212 13:04:36.150349       8 log.go:172] (0xc0000edc30) (0xc0025f4780) Stream removed, broadcasting: 1
I0212 13:04:36.150400       8 log.go:172] (0xc0000edc30) (0xc001c74b40) Stream removed, broadcasting: 3
I0212 13:04:36.150432       8 log.go:172] (0xc0000edc30) (0xc0019e2960) Stream removed, broadcasting: 5
Feb 12 13:04:36.151: INFO: Found all expected endpoints: [netserver-1]
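Each ExecWithOptions above sends "hostName" over UDP (`nc -w 1 -u <pod-ip> 8081`) from the host test pod and expects the target netserver's name back; the test passes once every expected endpoint has answered. A sketch of that bookkeeping, with `probe` standing in for the nc exec (names and signatures are illustrative):

```python
def check_endpoints(expected, probe):
    """Probe each pod IP once and return the set of endpoint names that
    did not answer. `probe(ip)` stands in for the UDP hostName exchange."""
    seen = {probe(ip) for ip in expected}
    return set(expected.values()) - seen

# Two netserver pods, both reachable, as in the log above.
endpoints = {"10.44.0.1": "netserver-0", "10.32.0.4": "netserver-1"}
missing = check_endpoints(endpoints, lambda ip: endpoints[ip])
```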
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:04:36.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9375" for this suite.
Feb 12 13:04:58.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:04:58.306: INFO: namespace pod-network-test-9375 deletion completed in 22.145201231s

• [SLOW TEST:67.345 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:04:58.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-9deb8080-9e8e-4486-837e-60264f66396b
STEP: Creating a pod to test consume configMaps
Feb 12 13:04:58.511: INFO: Waiting up to 5m0s for pod "pod-configmaps-134a33c1-5de6-490d-a73f-76cff368ef65" in namespace "configmap-5659" to be "success or failure"
Feb 12 13:04:58.523: INFO: Pod "pod-configmaps-134a33c1-5de6-490d-a73f-76cff368ef65": Phase="Pending", Reason="", readiness=false. Elapsed: 12.402976ms
Feb 12 13:05:00.536: INFO: Pod "pod-configmaps-134a33c1-5de6-490d-a73f-76cff368ef65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02497141s
Feb 12 13:05:02.548: INFO: Pod "pod-configmaps-134a33c1-5de6-490d-a73f-76cff368ef65": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037258548s
Feb 12 13:05:04.565: INFO: Pod "pod-configmaps-134a33c1-5de6-490d-a73f-76cff368ef65": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054207585s
Feb 12 13:05:06.577: INFO: Pod "pod-configmaps-134a33c1-5de6-490d-a73f-76cff368ef65": Phase="Running", Reason="", readiness=true. Elapsed: 8.066116278s
Feb 12 13:05:08.673: INFO: Pod "pod-configmaps-134a33c1-5de6-490d-a73f-76cff368ef65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.162701889s
STEP: Saw pod success
Feb 12 13:05:08.674: INFO: Pod "pod-configmaps-134a33c1-5de6-490d-a73f-76cff368ef65" satisfied condition "success or failure"
Feb 12 13:05:08.695: INFO: Trying to get logs from node iruya-node pod pod-configmaps-134a33c1-5de6-490d-a73f-76cff368ef65 container configmap-volume-test: 
STEP: delete the pod
Feb 12 13:05:08.820: INFO: Waiting for pod pod-configmaps-134a33c1-5de6-490d-a73f-76cff368ef65 to disappear
Feb 12 13:05:08.828: INFO: Pod pod-configmaps-134a33c1-5de6-490d-a73f-76cff368ef65 no longer exists
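The "with mappings" variant differs from a plain ConfigMap volume in that an `items` list projects selected keys to explicit file paths instead of mounting every key at its own name. A sketch of that projection, assuming the ConfigMap data and items are plain dicts (the helper is illustrative):

```python
def project(configmap_data, items):
    """Map ConfigMap keys to file paths the way a volume `items` list
    does: only listed keys are projected, at the paths given."""
    return {it["path"]: configmap_data[it["key"]] for it in items}

data = {"data-1": "value-1", "data-2": "value-2"}
files = project(data, [{"key": "data-2", "path": "path/to/data-2"}])
```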
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:05:08.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5659" for this suite.
Feb 12 13:05:15.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:05:15.159: INFO: namespace configmap-5659 deletion completed in 6.323145249s

• [SLOW TEST:16.852 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
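The ConfigMap test above mounts a volume with an `items` mapping, read by a non-root container. A minimal manifest reproducing that setup might look like this (names, image, key, and mapped path are illustrative; the framework generates UUID-suffixed names and uses its own test image):

```yaml
# Sketch of the ConfigMap-volume-with-mappings scenario; all names assumed.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map    # framework uses a UUID-suffixed name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  securityContext:
    runAsUser: 1000                  # non-root, per the [LinuxOnly] non-root variant
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                   # assumed; the suite uses its own mount-test image
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:                         # the "mappings": remap a key to a custom path
      - key: data-1
        path: path/to/data-2
```

The pod runs to `Succeeded` once `cat` reads the remapped file, which is the "success or failure" condition the log polls for.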
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:05:15.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Feb 12 13:05:15.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7749'
Feb 12 13:05:16.165: INFO: stderr: ""
Feb 12 13:05:16.165: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Feb 12 13:05:17.174: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 13:05:17.174: INFO: Found 0 / 1
Feb 12 13:05:18.178: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 13:05:18.178: INFO: Found 0 / 1
Feb 12 13:05:19.177: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 13:05:19.178: INFO: Found 0 / 1
Feb 12 13:05:20.203: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 13:05:20.203: INFO: Found 0 / 1
Feb 12 13:05:21.174: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 13:05:21.175: INFO: Found 0 / 1
Feb 12 13:05:22.180: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 13:05:22.180: INFO: Found 0 / 1
Feb 12 13:05:23.175: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 13:05:23.175: INFO: Found 1 / 1
Feb 12 13:05:23.175: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 12 13:05:23.180: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 13:05:23.180: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Feb 12 13:05:23.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-d27b4 redis-master --namespace=kubectl-7749'
Feb 12 13:05:23.416: INFO: stderr: ""
Feb 12 13:05:23.416: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 12 Feb 13:05:22.506 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 12 Feb 13:05:22.506 # Server started, Redis version 3.2.12\n1:M 12 Feb 13:05:22.506 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 12 Feb 13:05:22.506 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Feb 12 13:05:23.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-d27b4 redis-master --namespace=kubectl-7749 --tail=1'
Feb 12 13:05:23.651: INFO: stderr: ""
Feb 12 13:05:23.651: INFO: stdout: "1:M 12 Feb 13:05:22.506 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Feb 12 13:05:23.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-d27b4 redis-master --namespace=kubectl-7749 --limit-bytes=1'
Feb 12 13:05:23.783: INFO: stderr: ""
Feb 12 13:05:23.783: INFO: stdout: " "
STEP: exposing timestamps
Feb 12 13:05:23.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-d27b4 redis-master --namespace=kubectl-7749 --tail=1 --timestamps'
Feb 12 13:05:23.964: INFO: stderr: ""
Feb 12 13:05:23.964: INFO: stdout: "2020-02-12T13:05:22.508939388Z 1:M 12 Feb 13:05:22.506 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Feb 12 13:05:26.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-d27b4 redis-master --namespace=kubectl-7749 --since=1s'
Feb 12 13:05:26.806: INFO: stderr: ""
Feb 12 13:05:26.806: INFO: stdout: ""
Feb 12 13:05:26.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-d27b4 redis-master --namespace=kubectl-7749 --since=24h'
Feb 12 13:05:26.984: INFO: stderr: ""
Feb 12 13:05:26.985: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 12 Feb 13:05:22.506 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 12 Feb 13:05:22.506 # Server started, Redis version 3.2.12\n1:M 12 Feb 13:05:22.506 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 12 Feb 13:05:22.506 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Feb 12 13:05:26.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7749'
Feb 12 13:05:27.145: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 12 13:05:27.145: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Feb 12 13:05:27.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-7749'
Feb 12 13:05:27.285: INFO: stderr: "No resources found.\n"
Feb 12 13:05:27.285: INFO: stdout: ""
Feb 12 13:05:27.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-7749 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 12 13:05:27.449: INFO: stderr: ""
Feb 12 13:05:27.449: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:05:27.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7749" for this suite.
Feb 12 13:05:49.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:05:49.565: INFO: namespace kubectl-7749 deletion completed in 22.104374642s

• [SLOW TEST:34.404 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
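The replication controller the test pipes into `kubectl create -f -` above corresponds to roughly this manifest (the image is an assumption based on the "Redis 3.2.12" banner in the captured stdout; the selector matches the `map[app:redis]` lines):

```yaml
# Approximate manifest for the rc created above; image is an assumption.
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0   # assumed tag
        ports:
        - containerPort: 6379
```

The filtering steps then exercise `kubectl logs` with `--tail=1`, `--limit-bytes=1`, `--timestamps`, and `--since=1s`/`--since=24h`, exactly as shown in the `Running '...'` lines.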
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:05:49.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 12 13:05:49.704: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb 12 13:05:49.768: INFO: Number of nodes with available pods: 0
Feb 12 13:05:49.768: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:05:52.046: INFO: Number of nodes with available pods: 0
Feb 12 13:05:52.046: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:05:53.485: INFO: Number of nodes with available pods: 0
Feb 12 13:05:53.485: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:05:53.864: INFO: Number of nodes with available pods: 0
Feb 12 13:05:53.864: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:05:54.791: INFO: Number of nodes with available pods: 0
Feb 12 13:05:54.791: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:05:57.641: INFO: Number of nodes with available pods: 0
Feb 12 13:05:57.641: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:05:58.295: INFO: Number of nodes with available pods: 0
Feb 12 13:05:58.296: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:05:58.780: INFO: Number of nodes with available pods: 0
Feb 12 13:05:58.780: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:05:59.778: INFO: Number of nodes with available pods: 0
Feb 12 13:05:59.778: INFO: Node iruya-node is running more than one daemon pod
Feb 12 13:06:00.777: INFO: Number of nodes with available pods: 2
Feb 12 13:06:00.777: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb 12 13:06:00.852: INFO: Wrong image for pod: daemon-set-9jkjs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:00.852: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:01.920: INFO: Wrong image for pod: daemon-set-9jkjs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:01.921: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:02.919: INFO: Wrong image for pod: daemon-set-9jkjs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:02.919: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:03.916: INFO: Wrong image for pod: daemon-set-9jkjs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:03.916: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:04.917: INFO: Wrong image for pod: daemon-set-9jkjs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:04.917: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:05.918: INFO: Wrong image for pod: daemon-set-9jkjs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:05.918: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:06.920: INFO: Wrong image for pod: daemon-set-9jkjs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:06.920: INFO: Pod daemon-set-9jkjs is not available
Feb 12 13:06:06.920: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:07.917: INFO: Wrong image for pod: daemon-set-9jkjs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:07.917: INFO: Pod daemon-set-9jkjs is not available
Feb 12 13:06:07.917: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:08.918: INFO: Wrong image for pod: daemon-set-9jkjs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:08.919: INFO: Pod daemon-set-9jkjs is not available
Feb 12 13:06:08.919: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:09.919: INFO: Wrong image for pod: daemon-set-9jkjs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:09.919: INFO: Pod daemon-set-9jkjs is not available
Feb 12 13:06:09.919: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:10.925: INFO: Wrong image for pod: daemon-set-9jkjs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:10.925: INFO: Pod daemon-set-9jkjs is not available
Feb 12 13:06:10.925: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:11.918: INFO: Wrong image for pod: daemon-set-9jkjs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:11.918: INFO: Pod daemon-set-9jkjs is not available
Feb 12 13:06:11.918: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:12.917: INFO: Wrong image for pod: daemon-set-9jkjs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:12.917: INFO: Pod daemon-set-9jkjs is not available
Feb 12 13:06:12.917: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:13.936: INFO: Wrong image for pod: daemon-set-9jkjs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:13.936: INFO: Pod daemon-set-9jkjs is not available
Feb 12 13:06:13.936: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:14.918: INFO: Wrong image for pod: daemon-set-9jkjs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:14.918: INFO: Pod daemon-set-9jkjs is not available
Feb 12 13:06:14.918: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:15.915: INFO: Wrong image for pod: daemon-set-9jkjs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:15.915: INFO: Pod daemon-set-9jkjs is not available
Feb 12 13:06:15.915: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:16.924: INFO: Pod daemon-set-b556s is not available
Feb 12 13:06:16.924: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:17.915: INFO: Pod daemon-set-b556s is not available
Feb 12 13:06:17.915: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:18.921: INFO: Pod daemon-set-b556s is not available
Feb 12 13:06:18.921: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:19.929: INFO: Pod daemon-set-b556s is not available
Feb 12 13:06:19.929: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:20.925: INFO: Pod daemon-set-b556s is not available
Feb 12 13:06:20.925: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:21.915: INFO: Pod daemon-set-b556s is not available
Feb 12 13:06:21.915: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:22.922: INFO: Pod daemon-set-b556s is not available
Feb 12 13:06:22.922: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:24.056: INFO: Pod daemon-set-b556s is not available
Feb 12 13:06:24.057: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:24.923: INFO: Pod daemon-set-b556s is not available
Feb 12 13:06:24.923: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:25.953: INFO: Pod daemon-set-b556s is not available
Feb 12 13:06:25.954: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:26.915: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:29.371: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:29.915: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:30.917: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:31.917: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:32.916: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:32.916: INFO: Pod daemon-set-qktjb is not available
Feb 12 13:06:33.920: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:33.920: INFO: Pod daemon-set-qktjb is not available
Feb 12 13:06:34.920: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:34.920: INFO: Pod daemon-set-qktjb is not available
Feb 12 13:06:35.919: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:35.919: INFO: Pod daemon-set-qktjb is not available
Feb 12 13:06:36.917: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 13:06:36.917: INFO: Pod daemon-set-qktjb is not available
Feb 12 13:06:37.929: INFO: Pod daemon-set-n4jxx is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb 12 13:06:37.958: INFO: Number of nodes with available pods: 1
Feb 12 13:06:37.958: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 12 13:06:38.988: INFO: Number of nodes with available pods: 1
Feb 12 13:06:38.988: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 12 13:06:39.977: INFO: Number of nodes with available pods: 1
Feb 12 13:06:39.977: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 12 13:06:40.996: INFO: Number of nodes with available pods: 1
Feb 12 13:06:40.996: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 12 13:06:42.673: INFO: Number of nodes with available pods: 1
Feb 12 13:06:42.673: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 12 13:06:43.081: INFO: Number of nodes with available pods: 1
Feb 12 13:06:43.081: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 12 13:06:44.039: INFO: Number of nodes with available pods: 1
Feb 12 13:06:44.039: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 12 13:06:44.975: INFO: Number of nodes with available pods: 2
Feb 12 13:06:44.975: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-923, will wait for the garbage collector to delete the pods
Feb 12 13:06:45.066: INFO: Deleting DaemonSet.extensions daemon-set took: 10.942501ms
Feb 12 13:06:45.367: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.712101ms
Feb 12 13:06:52.583: INFO: Number of nodes with available pods: 0
Feb 12 13:06:52.583: INFO: Number of running nodes: 0, number of available pods: 0
Feb 12 13:06:52.589: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-923/daemonsets","resourceVersion":"24070618"},"items":null}

Feb 12 13:06:52.594: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-923/pods","resourceVersion":"24070618"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:06:52.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-923" for this suite.
Feb 12 13:07:00.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:07:00.816: INFO: namespace daemonsets-923 deletion completed in 8.198253566s

• [SLOW TEST:71.250 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
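The DaemonSet exercised above starts with `nginx:1.14-alpine` and is updated to the redis test image; a sketch of such a spec (label key and rollout parameters assumed, since the log does not show them) illustrates the `RollingUpdate` strategy that drives the pod-by-pod replacement:

```yaml
# Sketch of the DaemonSet under test; names from the log, defaults assumed.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set     # assumed label key
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1              # default: one node's pod is replaced at a time
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine   # initial image per the log
```

Changing the template image to `gcr.io/kubernetes-e2e-test-images/redis:1.0` triggers the per-node rollout: the repeated "Wrong image for pod" and "is not available" lines are the test polling until each old pod is terminated and its replacement becomes available on every node.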
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:07:00.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-d1d71099-5f68-44f3-a554-a1f261d46402 in namespace container-probe-5012
Feb 12 13:07:08.946: INFO: Started pod liveness-d1d71099-5f68-44f3-a554-a1f261d46402 in namespace container-probe-5012
STEP: checking the pod's current state and verifying that restartCount is present
Feb 12 13:07:08.952: INFO: Initial restart count of pod liveness-d1d71099-5f68-44f3-a554-a1f261d46402 is 0
Feb 12 13:07:25.165: INFO: Restart count of pod container-probe-5012/liveness-d1d71099-5f68-44f3-a554-a1f261d46402 is now 1 (16.21245024s elapsed)
Feb 12 13:07:45.306: INFO: Restart count of pod container-probe-5012/liveness-d1d71099-5f68-44f3-a554-a1f261d46402 is now 2 (36.353921561s elapsed)
Feb 12 13:08:07.437: INFO: Restart count of pod container-probe-5012/liveness-d1d71099-5f68-44f3-a554-a1f261d46402 is now 3 (58.484740246s elapsed)
Feb 12 13:08:27.560: INFO: Restart count of pod container-probe-5012/liveness-d1d71099-5f68-44f3-a554-a1f261d46402 is now 4 (1m18.608089208s elapsed)
Feb 12 13:09:25.875: INFO: Restart count of pod container-probe-5012/liveness-d1d71099-5f68-44f3-a554-a1f261d46402 is now 5 (2m16.922341974s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:09:25.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5012" for this suite.
Feb 12 13:09:31.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:09:32.122: INFO: namespace container-probe-5012 deletion completed in 6.169465427s

• [SLOW TEST:151.305 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
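The roughly 20-second restart cadence in the log is consistent with a liveness probe that fails once the container's health file disappears; a sketch of such a pod (image, command, and probe timings are assumptions, not values read from the suite):

```yaml
# Illustrative liveness-probe pod; actual e2e probe/command values may differ.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-example
spec:
  containers:
  - name: liveness
    image: busybox                   # assumed
    # Healthy for 10s, then the probed file is removed and the probe fails.
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
```

Each probe failure kills the container and the kubelet restarts it with exponential back-off, which is why the intervals between restarts grow in the log (about 16s, 20s, 22s, 20s, then 58s) while `restartCount` only ever increases.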
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:09:32.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0212 13:09:49.112879       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 12 13:09:49.113: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:09:49.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1729" for this suite.
Feb 12 13:10:02.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:10:03.928: INFO: namespace gc-1729 deletion completed in 13.662657695s

• [SLOW TEST:31.805 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
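Annotation: the ownership relationship this spec exercises can be sketched as a pod with two owner references, one pointing at the RC being foreground-deleted and one at a valid surviving RC. The manifest below is illustrative only (the e2e framework builds these objects in Go, and the pod/UID names here are placeholders, not values from this run):

```yaml
# Sketch, not the test's actual object. A pod created by
# simpletest-rc-to-be-deleted is patched so simpletest-rc-to-stay is a
# second owner. When the first owner is deleted with foreground
# propagation, the GC must NOT delete the pod, because a valid owner
# (the second RC) remains -- which is exactly what this spec asserts.
apiVersion: v1
kind: Pod
metadata:
  name: simpletest-pod           # hypothetical name
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted   # owner being foreground-deleted
    uid: "<uid-of-rc-to-be-deleted>"    # placeholder
    blockOwnerDeletion: true
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay         # valid remaining owner
    uid: "<uid-of-rc-to-stay>"          # placeholder
```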
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:10:03.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-7ca1a334-fc90-41d7-aa18-a7316df482a7
STEP: Creating a pod to test consume secrets
Feb 12 13:10:06.218: INFO: Waiting up to 5m0s for pod "pod-secrets-ff6ba523-5247-48c3-be40-e3521f452f88" in namespace "secrets-8790" to be "success or failure"
Feb 12 13:10:06.892: INFO: Pod "pod-secrets-ff6ba523-5247-48c3-be40-e3521f452f88": Phase="Pending", Reason="", readiness=false. Elapsed: 674.172344ms
Feb 12 13:10:08.905: INFO: Pod "pod-secrets-ff6ba523-5247-48c3-be40-e3521f452f88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.686377327s
Feb 12 13:10:10.915: INFO: Pod "pod-secrets-ff6ba523-5247-48c3-be40-e3521f452f88": Phase="Pending", Reason="", readiness=false. Elapsed: 4.697079983s
Feb 12 13:10:12.928: INFO: Pod "pod-secrets-ff6ba523-5247-48c3-be40-e3521f452f88": Phase="Pending", Reason="", readiness=false. Elapsed: 6.709449161s
Feb 12 13:10:14.940: INFO: Pod "pod-secrets-ff6ba523-5247-48c3-be40-e3521f452f88": Phase="Pending", Reason="", readiness=false. Elapsed: 8.721286141s
Feb 12 13:10:16.948: INFO: Pod "pod-secrets-ff6ba523-5247-48c3-be40-e3521f452f88": Phase="Pending", Reason="", readiness=false. Elapsed: 10.729434278s
Feb 12 13:10:18.962: INFO: Pod "pod-secrets-ff6ba523-5247-48c3-be40-e3521f452f88": Phase="Pending", Reason="", readiness=false. Elapsed: 12.743746197s
Feb 12 13:10:20.969: INFO: Pod "pod-secrets-ff6ba523-5247-48c3-be40-e3521f452f88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.750624251s
STEP: Saw pod success
Feb 12 13:10:20.969: INFO: Pod "pod-secrets-ff6ba523-5247-48c3-be40-e3521f452f88" satisfied condition "success or failure"
Feb 12 13:10:20.973: INFO: Trying to get logs from node iruya-node pod pod-secrets-ff6ba523-5247-48c3-be40-e3521f452f88 container secret-env-test: 
STEP: delete the pod
Feb 12 13:10:21.034: INFO: Waiting for pod pod-secrets-ff6ba523-5247-48c3-be40-e3521f452f88 to disappear
Feb 12 13:10:21.091: INFO: Pod pod-secrets-ff6ba523-5247-48c3-be40-e3521f452f88 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:10:21.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8790" for this suite.
Feb 12 13:10:27.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:10:27.244: INFO: namespace secrets-8790 deletion completed in 6.147963898s

• [SLOW TEST:23.315 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
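Annotation: the pattern this spec drives — a secret value projected into a container environment variable via `secretKeyRef` — looks roughly like the manifest below. Names are illustrative (the real test appends random UUID suffixes, visible in the log above); only the container name `secret-env-test` is taken directly from the log.

```yaml
# Minimal sketch, assuming a busybox image; the e2e test's exact
# image, keys, and env var names may differ.
apiVersion: v1
kind: Secret
metadata:
  name: secret-test              # e.g. secret-test-<uuid> in the log
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets              # e.g. pod-secrets-<uuid> in the log
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test        # container name seen in the log
    image: busybox
    command: ["sh", "-c", "env"]  # print env; test greps for the value
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
```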
SSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:10:27.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb 12 13:10:27.429: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3118,SelfLink:/api/v1/namespaces/watch-3118/configmaps/e2e-watch-test-label-changed,UID:4cd59c75-bdf2-4eed-8575-861c8e922ae2,ResourceVersion:24071139,Generation:0,CreationTimestamp:2020-02-12 13:10:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 12 13:10:27.430: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3118,SelfLink:/api/v1/namespaces/watch-3118/configmaps/e2e-watch-test-label-changed,UID:4cd59c75-bdf2-4eed-8575-861c8e922ae2,ResourceVersion:24071140,Generation:0,CreationTimestamp:2020-02-12 13:10:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 12 13:10:27.430: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3118,SelfLink:/api/v1/namespaces/watch-3118/configmaps/e2e-watch-test-label-changed,UID:4cd59c75-bdf2-4eed-8575-861c8e922ae2,ResourceVersion:24071141,Generation:0,CreationTimestamp:2020-02-12 13:10:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb 12 13:10:37.529: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3118,SelfLink:/api/v1/namespaces/watch-3118/configmaps/e2e-watch-test-label-changed,UID:4cd59c75-bdf2-4eed-8575-861c8e922ae2,ResourceVersion:24071158,Generation:0,CreationTimestamp:2020-02-12 13:10:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 12 13:10:37.529: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3118,SelfLink:/api/v1/namespaces/watch-3118/configmaps/e2e-watch-test-label-changed,UID:4cd59c75-bdf2-4eed-8575-861c8e922ae2,ResourceVersion:24071159,Generation:0,CreationTimestamp:2020-02-12 13:10:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb 12 13:10:37.529: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3118,SelfLink:/api/v1/namespaces/watch-3118/configmaps/e2e-watch-test-label-changed,UID:4cd59c75-bdf2-4eed-8575-861c8e922ae2,ResourceVersion:24071160,Generation:0,CreationTimestamp:2020-02-12 13:10:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:10:37.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3118" for this suite.
Feb 12 13:10:43.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:10:43.681: INFO: namespace watch-3118 deletion completed in 6.128106604s

• [SLOW TEST:16.437 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
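Annotation: the watch here is label-selected, so changing the label away from the selector surfaces as a DELETED event and restoring it surfaces as ADDED — matching the event sequence logged above. The watched object (name, label, and `mutation` data key taken from the log) is simply:

```yaml
# The ConfigMap under watch; the selector used by the test matches on
# this label, so mutating the label value in and out of the selector
# produces the ADDED/MODIFIED/DELETED notifications shown above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-label-changed
  labels:
    watch-this-configmap: label-changed-and-restored
data:
  mutation: "1"
```

An equivalent client-side view (hedged: not what the test runs, which uses the Go client) would be `kubectl get configmaps -l watch-this-configmap=label-changed-and-restored --watch`.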
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:10:43.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6133.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6133.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6133.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6133.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6133.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6133.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6133.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6133.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6133.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6133.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6133.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6133.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6133.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 242.141.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.141.242_udp@PTR;check="$$(dig +tcp +noall +answer +search 242.141.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.141.242_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6133.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6133.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6133.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6133.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6133.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6133.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6133.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6133.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6133.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6133.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6133.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6133.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6133.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 242.141.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.141.242_udp@PTR;check="$$(dig +tcp +noall +answer +search 242.141.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.141.242_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 12 13:11:00.164: INFO: Unable to read wheezy_udp@dns-test-service.dns-6133.svc.cluster.local from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c)
Feb 12 13:11:00.173: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6133.svc.cluster.local from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c)
Feb 12 13:11:00.182: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6133.svc.cluster.local from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c)
Feb 12 13:11:00.191: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6133.svc.cluster.local from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c)
Feb 12 13:11:00.197: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-6133.svc.cluster.local from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c)
Feb 12 13:11:00.208: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-6133.svc.cluster.local from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c)
Feb 12 13:11:00.225: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c)
Feb 12 13:11:00.237: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c)
Feb 12 13:11:00.252: INFO: Unable to read 10.109.141.242_udp@PTR from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c)
Feb 12 13:11:00.258: INFO: Unable to read 10.109.141.242_tcp@PTR from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c)
Feb 12 13:11:00.264: INFO: Unable to read jessie_udp@dns-test-service.dns-6133.svc.cluster.local from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c)
Feb 12 13:11:00.271: INFO: Unable to read jessie_tcp@dns-test-service.dns-6133.svc.cluster.local from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c)
Feb 12 13:11:00.276: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6133.svc.cluster.local from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c)
Feb 12 13:11:00.282: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6133.svc.cluster.local from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c)
Feb 12 13:11:00.288: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-6133.svc.cluster.local from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c)
Feb 12 13:11:00.296: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-6133.svc.cluster.local from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c)
Feb 12 13:11:00.302: INFO: Unable to read jessie_udp@PodARecord from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c)
Feb 12 13:11:00.313: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c)
Feb 12 13:11:00.325: INFO: Unable to read 10.109.141.242_udp@PTR from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c)
Feb 12 13:11:00.333: INFO: Unable to read 10.109.141.242_tcp@PTR from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c)
Feb 12 13:11:00.333: INFO: Lookups using dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c failed for: [wheezy_udp@dns-test-service.dns-6133.svc.cluster.local wheezy_tcp@dns-test-service.dns-6133.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6133.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6133.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-6133.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-6133.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.109.141.242_udp@PTR 10.109.141.242_tcp@PTR jessie_udp@dns-test-service.dns-6133.svc.cluster.local jessie_tcp@dns-test-service.dns-6133.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6133.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6133.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-6133.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-6133.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.109.141.242_udp@PTR 10.109.141.242_tcp@PTR]

Feb 12 13:11:05.539: INFO: DNS probes using dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:11:05.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6133" for this suite.
Feb 12 13:11:11.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:11:12.104: INFO: namespace dns-6133 deletion completed in 6.250785068s

• [SLOW TEST:28.422 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
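Annotation: the probe commands above derive two synthetic DNS names from raw IPs — a pod A record (dots replaced by dashes, suffixed with `<namespace>.pod.cluster.local`, via the `hostname -i | awk` pipeline) and a reverse-lookup PTR name (octets reversed, suffixed with `in-addr.arpa.`). A minimal sketch of that name construction (helper names are mine, not the test's):

```python
def pod_a_record(pod_ip: str, namespace: str) -> str:
    """Mirror of: hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".<ns>.pod.cluster.local"}'"""
    return pod_ip.replace(".", "-") + f".{namespace}.pod.cluster.local"

def ptr_name(service_ip: str) -> str:
    """Reverse-DNS name, e.g. 10.109.141.242 -> 242.141.109.10.in-addr.arpa."""
    return ".".join(reversed(service_ip.split("."))) + ".in-addr.arpa."

print(pod_a_record("10.44.0.1", "dns-6133"))   # 10-44-0-1.dns-6133.pod.cluster.local
print(ptr_name("10.109.141.242"))              # 242.141.109.10.in-addr.arpa.
```

These are the `$${podARec}` and `242.141.109.10.in-addr.arpa.` names queried by the wheezy/jessie probe loops.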
SSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:11:12.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Feb 12 13:11:22.846: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1131 pod-service-account-1e204a4d-525c-4a8c-a62d-fa33498256dd -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Feb 12 13:11:26.718: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1131 pod-service-account-1e204a4d-525c-4a8c-a62d-fa33498256dd -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Feb 12 13:11:27.162: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1131 pod-service-account-1e204a4d-525c-4a8c-a62d-fa33498256dd -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:11:27.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1131" for this suite.
Feb 12 13:11:33.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:11:33.739: INFO: namespace svcaccounts-1131 deletion completed in 6.149887689s

• [SLOW TEST:21.634 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
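Annotation: the spec relies on admission automatically projecting the default service account's credentials into the pod; the three `kubectl exec ... cat` commands in the log read the three projected files. A sketch of the pod side (image and command are assumptions; file paths match the exec commands above):

```yaml
# Minimal sketch: no explicit volume or serviceAccountName is needed --
# the default service account's token volume is injected automatically.
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account     # real name carries a UUID suffix
spec:
  containers:
  - name: test                  # container name seen in the log
    image: busybox
    command: ["sleep", "3600"]  # keep the pod up so the test can exec into it
# The injected volume exposes, and the test cats:
#   /var/run/secrets/kubernetes.io/serviceaccount/token
#   /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
#   /var/run/secrets/kubernetes.io/serviceaccount/namespace
```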
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:11:33.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 12 13:11:33.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1807'
Feb 12 13:11:34.105: INFO: stderr: ""
Feb 12 13:11:34.105: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Feb 12 13:11:34.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-1807'
Feb 12 13:11:40.753: INFO: stderr: ""
Feb 12 13:11:40.753: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:11:40.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1807" for this suite.
Feb 12 13:11:46.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:11:46.941: INFO: namespace kubectl-1807 deletion completed in 6.1727256s

• [SLOW TEST:13.202 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:11:46.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-1908
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 12 13:11:46.990: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 12 13:12:23.312: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1908 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 13:12:23.313: INFO: >>> kubeConfig: /root/.kube/config
I0212 13:12:23.415260       8 log.go:172] (0xc0000ed760) (0xc0000ff400) Create stream
I0212 13:12:23.415407       8 log.go:172] (0xc0000ed760) (0xc0000ff400) Stream added, broadcasting: 1
I0212 13:12:23.421618       8 log.go:172] (0xc0000ed760) Reply frame received for 1
I0212 13:12:23.421648       8 log.go:172] (0xc0000ed760) (0xc0009785a0) Create stream
I0212 13:12:23.421655       8 log.go:172] (0xc0000ed760) (0xc0009785a0) Stream added, broadcasting: 3
I0212 13:12:23.423410       8 log.go:172] (0xc0000ed760) Reply frame received for 3
I0212 13:12:23.423448       8 log.go:172] (0xc0000ed760) (0xc000978820) Create stream
I0212 13:12:23.423464       8 log.go:172] (0xc0000ed760) (0xc000978820) Stream added, broadcasting: 5
I0212 13:12:23.425434       8 log.go:172] (0xc0000ed760) Reply frame received for 5
I0212 13:12:23.703855       8 log.go:172] (0xc0000ed760) Data frame received for 3
I0212 13:12:23.703949       8 log.go:172] (0xc0009785a0) (3) Data frame handling
I0212 13:12:23.703988       8 log.go:172] (0xc0009785a0) (3) Data frame sent
I0212 13:12:23.831047       8 log.go:172] (0xc0000ed760) Data frame received for 1
I0212 13:12:23.831126       8 log.go:172] (0xc0000ed760) (0xc0009785a0) Stream removed, broadcasting: 3
I0212 13:12:23.831181       8 log.go:172] (0xc0000ff400) (1) Data frame handling
I0212 13:12:23.831195       8 log.go:172] (0xc0000ff400) (1) Data frame sent
I0212 13:12:23.831206       8 log.go:172] (0xc0000ed760) (0xc0000ff400) Stream removed, broadcasting: 1
I0212 13:12:23.831277       8 log.go:172] (0xc0000ed760) (0xc000978820) Stream removed, broadcasting: 5
I0212 13:12:23.831351       8 log.go:172] (0xc0000ed760) Go away received
I0212 13:12:23.831398       8 log.go:172] (0xc0000ed760) (0xc0000ff400) Stream removed, broadcasting: 1
I0212 13:12:23.831413       8 log.go:172] (0xc0000ed760) (0xc0009785a0) Stream removed, broadcasting: 3
I0212 13:12:23.831419       8 log.go:172] (0xc0000ed760) (0xc000978820) Stream removed, broadcasting: 5
Feb 12 13:12:23.831: INFO: Found all expected endpoints: [netserver-0]
Feb 12 13:12:23.840: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1908 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 13:12:23.840: INFO: >>> kubeConfig: /root/.kube/config
I0212 13:12:23.920797       8 log.go:172] (0xc001b313f0) (0xc002b80320) Create stream
I0212 13:12:23.921125       8 log.go:172] (0xc001b313f0) (0xc002b80320) Stream added, broadcasting: 1
I0212 13:12:23.932551       8 log.go:172] (0xc001b313f0) Reply frame received for 1
I0212 13:12:23.932633       8 log.go:172] (0xc001b313f0) (0xc0009788c0) Create stream
I0212 13:12:23.932647       8 log.go:172] (0xc001b313f0) (0xc0009788c0) Stream added, broadcasting: 3
I0212 13:12:23.936414       8 log.go:172] (0xc001b313f0) Reply frame received for 3
I0212 13:12:23.936445       8 log.go:172] (0xc001b313f0) (0xc000978aa0) Create stream
I0212 13:12:23.936455       8 log.go:172] (0xc001b313f0) (0xc000978aa0) Stream added, broadcasting: 5
I0212 13:12:23.943203       8 log.go:172] (0xc001b313f0) Reply frame received for 5
I0212 13:12:24.140598       8 log.go:172] (0xc001b313f0) Data frame received for 3
I0212 13:12:24.140648       8 log.go:172] (0xc0009788c0) (3) Data frame handling
I0212 13:12:24.140660       8 log.go:172] (0xc0009788c0) (3) Data frame sent
I0212 13:12:24.241486       8 log.go:172] (0xc001b313f0) Data frame received for 1
I0212 13:12:24.241535       8 log.go:172] (0xc001b313f0) (0xc0009788c0) Stream removed, broadcasting: 3
I0212 13:12:24.241594       8 log.go:172] (0xc002b80320) (1) Data frame handling
I0212 13:12:24.241610       8 log.go:172] (0xc002b80320) (1) Data frame sent
I0212 13:12:24.241618       8 log.go:172] (0xc001b313f0) (0xc002b80320) Stream removed, broadcasting: 1
I0212 13:12:24.241765       8 log.go:172] (0xc001b313f0) (0xc000978aa0) Stream removed, broadcasting: 5
I0212 13:12:24.241816       8 log.go:172] (0xc001b313f0) (0xc002b80320) Stream removed, broadcasting: 1
I0212 13:12:24.241829       8 log.go:172] (0xc001b313f0) (0xc0009788c0) Stream removed, broadcasting: 3
I0212 13:12:24.241836       8 log.go:172] (0xc001b313f0) (0xc000978aa0) Stream removed, broadcasting: 5
I0212 13:12:24.242002       8 log.go:172] (0xc001b313f0) Go away received
Feb 12 13:12:24.242: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:12:24.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1908" for this suite.
Feb 12 13:12:48.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:12:48.370: INFO: namespace pod-network-test-1908 deletion completed in 24.120861001s

• [SLOW TEST:61.429 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:12:48.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-6112/configmap-test-464a7152-edf7-409e-88bc-eb3487892076
STEP: Creating a pod to test consume configMaps
Feb 12 13:12:48.598: INFO: Waiting up to 5m0s for pod "pod-configmaps-5917ae2b-ecf7-462e-bec1-55392c3f1a4c" in namespace "configmap-6112" to be "success or failure"
Feb 12 13:12:48.607: INFO: Pod "pod-configmaps-5917ae2b-ecf7-462e-bec1-55392c3f1a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.96896ms
Feb 12 13:12:50.615: INFO: Pod "pod-configmaps-5917ae2b-ecf7-462e-bec1-55392c3f1a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016473491s
Feb 12 13:12:52.634: INFO: Pod "pod-configmaps-5917ae2b-ecf7-462e-bec1-55392c3f1a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03613732s
Feb 12 13:12:54.642: INFO: Pod "pod-configmaps-5917ae2b-ecf7-462e-bec1-55392c3f1a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043566666s
Feb 12 13:12:56.656: INFO: Pod "pod-configmaps-5917ae2b-ecf7-462e-bec1-55392c3f1a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057515892s
Feb 12 13:12:58.680: INFO: Pod "pod-configmaps-5917ae2b-ecf7-462e-bec1-55392c3f1a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.081903027s
Feb 12 13:13:00.688: INFO: Pod "pod-configmaps-5917ae2b-ecf7-462e-bec1-55392c3f1a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.089820879s
Feb 12 13:13:02.713: INFO: Pod "pod-configmaps-5917ae2b-ecf7-462e-bec1-55392c3f1a4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.114520941s
STEP: Saw pod success
Feb 12 13:13:02.713: INFO: Pod "pod-configmaps-5917ae2b-ecf7-462e-bec1-55392c3f1a4c" satisfied condition "success or failure"
Feb 12 13:13:02.722: INFO: Trying to get logs from node iruya-node pod pod-configmaps-5917ae2b-ecf7-462e-bec1-55392c3f1a4c container env-test: 
STEP: delete the pod
Feb 12 13:13:02.817: INFO: Waiting for pod pod-configmaps-5917ae2b-ecf7-462e-bec1-55392c3f1a4c to disappear
Feb 12 13:13:02.905: INFO: Pod pod-configmaps-5917ae2b-ecf7-462e-bec1-55392c3f1a4c no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:13:02.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6112" for this suite.
Feb 12 13:13:08.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:13:09.076: INFO: namespace configmap-6112 deletion completed in 6.161347328s

• [SLOW TEST:20.706 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:13:09.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-e62903bc-fa5a-4e43-8b1f-6442f18fc782
STEP: Creating a pod to test consume secrets
Feb 12 13:13:10.310: INFO: Waiting up to 5m0s for pod "pod-secrets-99c1b693-616d-44e9-91d6-15619e6dfd7b" in namespace "secrets-5626" to be "success or failure"
Feb 12 13:13:10.337: INFO: Pod "pod-secrets-99c1b693-616d-44e9-91d6-15619e6dfd7b": Phase="Pending", Reason="", readiness=false. Elapsed: 25.806383ms
Feb 12 13:13:12.350: INFO: Pod "pod-secrets-99c1b693-616d-44e9-91d6-15619e6dfd7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039233742s
Feb 12 13:13:14.362: INFO: Pod "pod-secrets-99c1b693-616d-44e9-91d6-15619e6dfd7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050372683s
Feb 12 13:13:16.440: INFO: Pod "pod-secrets-99c1b693-616d-44e9-91d6-15619e6dfd7b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.128975449s
Feb 12 13:13:18.452: INFO: Pod "pod-secrets-99c1b693-616d-44e9-91d6-15619e6dfd7b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.14113389s
Feb 12 13:13:20.469: INFO: Pod "pod-secrets-99c1b693-616d-44e9-91d6-15619e6dfd7b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.157829393s
Feb 12 13:13:22.482: INFO: Pod "pod-secrets-99c1b693-616d-44e9-91d6-15619e6dfd7b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.170539365s
Feb 12 13:13:24.492: INFO: Pod "pod-secrets-99c1b693-616d-44e9-91d6-15619e6dfd7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.1805921s
STEP: Saw pod success
Feb 12 13:13:24.492: INFO: Pod "pod-secrets-99c1b693-616d-44e9-91d6-15619e6dfd7b" satisfied condition "success or failure"
Feb 12 13:13:24.497: INFO: Trying to get logs from node iruya-node pod pod-secrets-99c1b693-616d-44e9-91d6-15619e6dfd7b container secret-volume-test: 
STEP: delete the pod
Feb 12 13:13:25.018: INFO: Waiting for pod pod-secrets-99c1b693-616d-44e9-91d6-15619e6dfd7b to disappear
Feb 12 13:13:25.029: INFO: Pod pod-secrets-99c1b693-616d-44e9-91d6-15619e6dfd7b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:13:25.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5626" for this suite.
Feb 12 13:13:31.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:13:31.150: INFO: namespace secrets-5626 deletion completed in 6.117368072s
STEP: Destroying namespace "secret-namespace-5525" for this suite.
Feb 12 13:13:37.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:13:37.279: INFO: namespace secret-namespace-5525 deletion completed in 6.128998339s

• [SLOW TEST:28.203 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:13:37.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 12 13:13:37.485: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:13:38.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1949" for this suite.
Feb 12 13:13:44.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:13:44.857: INFO: namespace custom-resource-definition-1949 deletion completed in 6.171890708s

• [SLOW TEST:7.577 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:13:44.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-faa378e3-3104-48d2-9ba5-9924c74ea759
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:13:45.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4343" for this suite.
Feb 12 13:13:52.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:13:52.558: INFO: namespace configmap-4343 deletion completed in 6.601563964s

• [SLOW TEST:7.700 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:13:52.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Feb 12 13:13:52.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1839'
Feb 12 13:13:53.104: INFO: stderr: ""
Feb 12 13:13:53.104: INFO: stdout: "pod/pause created\n"
Feb 12 13:13:53.104: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb 12 13:13:53.104: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1839" to be "running and ready"
Feb 12 13:13:53.203: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 98.622774ms
Feb 12 13:13:55.209: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10490732s
Feb 12 13:13:57.218: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113573243s
Feb 12 13:13:59.310: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.205959542s
Feb 12 13:14:01.319: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.214238414s
Feb 12 13:14:03.327: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 10.22242574s
Feb 12 13:14:05.336: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 12.231449914s
Feb 12 13:14:07.371: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 14.266613148s
Feb 12 13:14:09.381: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 16.276474918s
Feb 12 13:14:09.381: INFO: Pod "pause" satisfied condition "running and ready"
Feb 12 13:14:09.381: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Feb 12 13:14:09.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1839'
Feb 12 13:14:09.548: INFO: stderr: ""
Feb 12 13:14:09.548: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb 12 13:14:09.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1839'
Feb 12 13:14:09.674: INFO: stderr: ""
Feb 12 13:14:09.674: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          16s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb 12 13:14:09.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1839'
Feb 12 13:14:09.955: INFO: stderr: ""
Feb 12 13:14:09.955: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb 12 13:14:09.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1839'
Feb 12 13:14:10.206: INFO: stderr: ""
Feb 12 13:14:10.207: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          17s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Feb 12 13:14:10.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1839'
Feb 12 13:14:10.426: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 12 13:14:10.426: INFO: stdout: "pod \"pause\" force deleted\n"
Feb 12 13:14:10.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1839'
Feb 12 13:14:10.592: INFO: stderr: "No resources found.\n"
Feb 12 13:14:10.592: INFO: stdout: ""
Feb 12 13:14:10.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1839 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 12 13:14:10.698: INFO: stderr: ""
Feb 12 13:14:10.698: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:14:10.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1839" for this suite.
Feb 12 13:14:19.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:14:19.688: INFO: namespace kubectl-1839 deletion completed in 8.977252292s

• [SLOW TEST:27.130 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:14:19.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:15:26.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8119" for this suite.
Feb 12 13:15:32.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:15:33.059: INFO: namespace container-runtime-8119 deletion completed in 6.114435451s

• [SLOW TEST:73.371 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:15:33.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb 12 13:15:43.139: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-6af2870a-7ec0-48e0-bfcf-07f6fa3f3e01,GenerateName:,Namespace:events-2146,SelfLink:/api/v1/namespaces/events-2146/pods/send-events-6af2870a-7ec0-48e0-bfcf-07f6fa3f3e01,UID:b049bd56-2162-4b6c-8175-fa8712f394d0,ResourceVersion:24071928,Generation:0,CreationTimestamp:2020-02-12 13:15:33 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 96805034,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pktnz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pktnz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-pktnz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001dca330} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001dca3c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:15:33 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:15:41 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:15:41 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:15:33 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-12 13:15:33 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-12 13:15:41 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://550fea34cbc17e0a392eaf5d0a7a7d1afe09df77eb5d5d8f78f75610b4a1cc05}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Feb 12 13:15:45.146: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb 12 13:15:47.161: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:15:47.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-2146" for this suite.
Feb 12 13:16:25.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:16:25.370: INFO: namespace events-2146 deletion completed in 38.186043346s

• [SLOW TEST:52.310 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:16:25.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 12 13:16:25.563: INFO: Waiting up to 5m0s for pod "pod-60f16972-4cb1-4779-adca-a6b6bbcffd06" in namespace "emptydir-8580" to be "success or failure"
Feb 12 13:16:25.573: INFO: Pod "pod-60f16972-4cb1-4779-adca-a6b6bbcffd06": Phase="Pending", Reason="", readiness=false. Elapsed: 9.791268ms
Feb 12 13:16:27.581: INFO: Pod "pod-60f16972-4cb1-4779-adca-a6b6bbcffd06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018019865s
Feb 12 13:16:29.596: INFO: Pod "pod-60f16972-4cb1-4779-adca-a6b6bbcffd06": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032420377s
Feb 12 13:16:31.605: INFO: Pod "pod-60f16972-4cb1-4779-adca-a6b6bbcffd06": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042014768s
Feb 12 13:16:33.617: INFO: Pod "pod-60f16972-4cb1-4779-adca-a6b6bbcffd06": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053435591s
Feb 12 13:16:35.630: INFO: Pod "pod-60f16972-4cb1-4779-adca-a6b6bbcffd06": Phase="Running", Reason="", readiness=true. Elapsed: 10.066561827s
Feb 12 13:16:37.646: INFO: Pod "pod-60f16972-4cb1-4779-adca-a6b6bbcffd06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.082312333s
STEP: Saw pod success
Feb 12 13:16:37.646: INFO: Pod "pod-60f16972-4cb1-4779-adca-a6b6bbcffd06" satisfied condition "success or failure"
Feb 12 13:16:37.650: INFO: Trying to get logs from node iruya-node pod pod-60f16972-4cb1-4779-adca-a6b6bbcffd06 container test-container: 
STEP: delete the pod
Feb 12 13:16:37.811: INFO: Waiting for pod pod-60f16972-4cb1-4779-adca-a6b6bbcffd06 to disappear
Feb 12 13:16:37.818: INFO: Pod pod-60f16972-4cb1-4779-adca-a6b6bbcffd06 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:16:37.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8580" for this suite.
Feb 12 13:16:43.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:16:43.987: INFO: namespace emptydir-8580 deletion completed in 6.158892069s

• [SLOW TEST:18.617 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:16:43.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Feb 12 13:16:44.060: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:17:01.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5111" for this suite.
Feb 12 13:17:07.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:17:07.734: INFO: namespace pods-5111 deletion completed in 6.206802883s

• [SLOW TEST:23.745 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:17:07.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-95565cf8-bd23-4a3f-badc-9d6bdf5def76
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:17:21.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8449" for this suite.
Feb 12 13:17:43.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:17:44.076: INFO: namespace configmap-8449 deletion completed in 22.106164699s

• [SLOW TEST:36.342 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:17:44.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 12 13:17:53.507: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:17:53.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9999" for this suite.
Feb 12 13:17:59.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:17:59.760: INFO: namespace container-runtime-9999 deletion completed in 6.135654061s

• [SLOW TEST:15.683 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:17:59.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 12 13:17:59.833: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:18:20.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6940" for this suite.
Feb 12 13:18:42.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:18:42.975: INFO: namespace init-container-6940 deletion completed in 22.136559656s

• [SLOW TEST:43.215 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:18:42.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-tpnr
STEP: Creating a pod to test atomic-volume-subpath
Feb 12 13:18:43.063: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-tpnr" in namespace "subpath-9775" to be "success or failure"
Feb 12 13:18:43.073: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Pending", Reason="", readiness=false. Elapsed: 9.919039ms
Feb 12 13:18:45.084: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020904048s
Feb 12 13:18:47.090: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027026348s
Feb 12 13:18:49.096: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033147193s
Feb 12 13:18:51.103: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.039744497s
Feb 12 13:18:53.111: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Running", Reason="", readiness=true. Elapsed: 10.047778418s
Feb 12 13:18:55.119: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Running", Reason="", readiness=true. Elapsed: 12.05639394s
Feb 12 13:18:57.128: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Running", Reason="", readiness=true. Elapsed: 14.064893331s
Feb 12 13:18:59.138: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Running", Reason="", readiness=true. Elapsed: 16.074617518s
Feb 12 13:19:01.144: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Running", Reason="", readiness=true. Elapsed: 18.080782719s
Feb 12 13:19:03.155: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Running", Reason="", readiness=true. Elapsed: 20.09158794s
Feb 12 13:19:05.162: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Running", Reason="", readiness=true. Elapsed: 22.099122234s
Feb 12 13:19:07.172: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Running", Reason="", readiness=true. Elapsed: 24.109357337s
Feb 12 13:19:09.182: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Running", Reason="", readiness=true. Elapsed: 26.119141897s
Feb 12 13:19:11.193: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Running", Reason="", readiness=true. Elapsed: 28.130027118s
Feb 12 13:19:13.201: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Running", Reason="", readiness=true. Elapsed: 30.138218529s
Feb 12 13:19:15.211: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.147525699s
STEP: Saw pod success
Feb 12 13:19:15.211: INFO: Pod "pod-subpath-test-secret-tpnr" satisfied condition "success or failure"
Feb 12 13:19:15.216: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-tpnr container test-container-subpath-secret-tpnr: 
STEP: delete the pod
Feb 12 13:19:15.336: INFO: Waiting for pod pod-subpath-test-secret-tpnr to disappear
Feb 12 13:19:15.344: INFO: Pod pod-subpath-test-secret-tpnr no longer exists
STEP: Deleting pod pod-subpath-test-secret-tpnr
Feb 12 13:19:15.344: INFO: Deleting pod "pod-subpath-test-secret-tpnr" in namespace "subpath-9775"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:19:15.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9775" for this suite.
Feb 12 13:19:21.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:19:21.517: INFO: namespace subpath-9775 deletion completed in 6.164396418s

• [SLOW TEST:38.541 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:19:21.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Feb 12 13:19:21.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-919'
Feb 12 13:19:21.943: INFO: stderr: ""
Feb 12 13:19:21.944: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 12 13:19:21.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-919'
Feb 12 13:19:22.090: INFO: stderr: ""
Feb 12 13:19:22.090: INFO: stdout: "update-demo-nautilus-27sgz update-demo-nautilus-dn62b "
Feb 12 13:19:22.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27sgz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-919'
Feb 12 13:19:22.308: INFO: stderr: ""
Feb 12 13:19:22.308: INFO: stdout: ""
Feb 12 13:19:22.308: INFO: update-demo-nautilus-27sgz is created but not running
Feb 12 13:19:27.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-919'
Feb 12 13:19:27.429: INFO: stderr: ""
Feb 12 13:19:27.429: INFO: stdout: "update-demo-nautilus-27sgz update-demo-nautilus-dn62b "
Feb 12 13:19:27.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27sgz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-919'
Feb 12 13:19:28.956: INFO: stderr: ""
Feb 12 13:19:28.956: INFO: stdout: ""
Feb 12 13:19:28.956: INFO: update-demo-nautilus-27sgz is created but not running
Feb 12 13:19:33.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-919'
Feb 12 13:19:34.199: INFO: stderr: ""
Feb 12 13:19:34.199: INFO: stdout: "update-demo-nautilus-27sgz update-demo-nautilus-dn62b "
Feb 12 13:19:34.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27sgz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-919'
Feb 12 13:19:34.302: INFO: stderr: ""
Feb 12 13:19:34.303: INFO: stdout: "true"
Feb 12 13:19:34.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27sgz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-919'
Feb 12 13:19:34.489: INFO: stderr: ""
Feb 12 13:19:34.489: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 12 13:19:34.489: INFO: validating pod update-demo-nautilus-27sgz
Feb 12 13:19:34.511: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 12 13:19:34.512: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 12 13:19:34.512: INFO: update-demo-nautilus-27sgz is verified up and running
Feb 12 13:19:34.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dn62b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-919'
Feb 12 13:19:34.655: INFO: stderr: ""
Feb 12 13:19:34.655: INFO: stdout: "true"
Feb 12 13:19:34.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dn62b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-919'
Feb 12 13:19:34.777: INFO: stderr: ""
Feb 12 13:19:34.777: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 12 13:19:34.777: INFO: validating pod update-demo-nautilus-dn62b
Feb 12 13:19:34.782: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 12 13:19:34.782: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 12 13:19:34.782: INFO: update-demo-nautilus-dn62b is verified up and running
STEP: scaling down the replication controller
Feb 12 13:19:34.784: INFO: scanned /root for discovery docs: 
Feb 12 13:19:34.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-919'
Feb 12 13:19:35.953: INFO: stderr: ""
Feb 12 13:19:35.953: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 12 13:19:35.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-919'
Feb 12 13:19:36.139: INFO: stderr: ""
Feb 12 13:19:36.140: INFO: stdout: "update-demo-nautilus-27sgz update-demo-nautilus-dn62b "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 12 13:19:41.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-919'
Feb 12 13:19:41.281: INFO: stderr: ""
Feb 12 13:19:41.281: INFO: stdout: "update-demo-nautilus-27sgz update-demo-nautilus-dn62b "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 12 13:19:46.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-919'
Feb 12 13:19:46.528: INFO: stderr: ""
Feb 12 13:19:46.528: INFO: stdout: "update-demo-nautilus-27sgz update-demo-nautilus-dn62b "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 12 13:19:51.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-919'
Feb 12 13:19:51.715: INFO: stderr: ""
Feb 12 13:19:51.715: INFO: stdout: "update-demo-nautilus-dn62b "
Feb 12 13:19:51.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dn62b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-919'
Feb 12 13:19:51.847: INFO: stderr: ""
Feb 12 13:19:51.847: INFO: stdout: "true"
Feb 12 13:19:51.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dn62b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-919'
Feb 12 13:19:51.986: INFO: stderr: ""
Feb 12 13:19:51.986: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 12 13:19:51.986: INFO: validating pod update-demo-nautilus-dn62b
Feb 12 13:19:51.992: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 12 13:19:51.992: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 12 13:19:51.992: INFO: update-demo-nautilus-dn62b is verified up and running
STEP: scaling up the replication controller
Feb 12 13:19:51.994: INFO: scanned /root for discovery docs: 
Feb 12 13:19:51.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-919'
Feb 12 13:19:53.200: INFO: stderr: ""
Feb 12 13:19:53.200: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 12 13:19:53.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-919'
Feb 12 13:19:53.493: INFO: stderr: ""
Feb 12 13:19:53.493: INFO: stdout: "update-demo-nautilus-89mfq update-demo-nautilus-dn62b "
Feb 12 13:19:53.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-89mfq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-919'
Feb 12 13:19:53.599: INFO: stderr: ""
Feb 12 13:19:53.599: INFO: stdout: ""
Feb 12 13:19:53.599: INFO: update-demo-nautilus-89mfq is created but not running
Feb 12 13:19:58.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-919'
Feb 12 13:19:58.752: INFO: stderr: ""
Feb 12 13:19:58.752: INFO: stdout: "update-demo-nautilus-89mfq update-demo-nautilus-dn62b "
Feb 12 13:19:58.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-89mfq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-919'
Feb 12 13:19:58.906: INFO: stderr: ""
Feb 12 13:19:58.906: INFO: stdout: ""
Feb 12 13:19:58.907: INFO: update-demo-nautilus-89mfq is created but not running
Feb 12 13:20:03.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-919'
Feb 12 13:20:04.173: INFO: stderr: ""
Feb 12 13:20:04.173: INFO: stdout: "update-demo-nautilus-89mfq update-demo-nautilus-dn62b "
Feb 12 13:20:04.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-89mfq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-919'
Feb 12 13:20:04.270: INFO: stderr: ""
Feb 12 13:20:04.271: INFO: stdout: "true"
Feb 12 13:20:04.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-89mfq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-919'
Feb 12 13:20:04.365: INFO: stderr: ""
Feb 12 13:20:04.365: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 12 13:20:04.365: INFO: validating pod update-demo-nautilus-89mfq
Feb 12 13:20:04.377: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 12 13:20:04.377: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 12 13:20:04.377: INFO: update-demo-nautilus-89mfq is verified up and running
Feb 12 13:20:04.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dn62b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-919'
Feb 12 13:20:04.465: INFO: stderr: ""
Feb 12 13:20:04.465: INFO: stdout: "true"
Feb 12 13:20:04.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dn62b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-919'
Feb 12 13:20:04.599: INFO: stderr: ""
Feb 12 13:20:04.599: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 12 13:20:04.599: INFO: validating pod update-demo-nautilus-dn62b
Feb 12 13:20:04.609: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 12 13:20:04.609: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 12 13:20:04.609: INFO: update-demo-nautilus-dn62b is verified up and running
STEP: using delete to clean up resources
Feb 12 13:20:04.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-919'
Feb 12 13:20:04.749: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 12 13:20:04.749: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 12 13:20:04.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-919'
Feb 12 13:20:04.876: INFO: stderr: "No resources found.\n"
Feb 12 13:20:04.876: INFO: stdout: ""
Feb 12 13:20:04.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-919 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 12 13:20:05.022: INFO: stderr: ""
Feb 12 13:20:05.023: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:20:05.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-919" for this suite.
Feb 12 13:20:27.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:20:27.209: INFO: namespace kubectl-919 deletion completed in 22.144607857s

• [SLOW TEST:65.692 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:20:27.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 12 13:20:27.274: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e0e41fa5-fd5a-4818-9327-a70497c9e336" in namespace "projected-9737" to be "success or failure"
Feb 12 13:20:27.294: INFO: Pod "downwardapi-volume-e0e41fa5-fd5a-4818-9327-a70497c9e336": Phase="Pending", Reason="", readiness=false. Elapsed: 19.837452ms
Feb 12 13:20:29.303: INFO: Pod "downwardapi-volume-e0e41fa5-fd5a-4818-9327-a70497c9e336": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029164506s
Feb 12 13:20:31.313: INFO: Pod "downwardapi-volume-e0e41fa5-fd5a-4818-9327-a70497c9e336": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038899181s
Feb 12 13:20:33.322: INFO: Pod "downwardapi-volume-e0e41fa5-fd5a-4818-9327-a70497c9e336": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048439464s
Feb 12 13:20:35.330: INFO: Pod "downwardapi-volume-e0e41fa5-fd5a-4818-9327-a70497c9e336": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055893242s
Feb 12 13:20:37.340: INFO: Pod "downwardapi-volume-e0e41fa5-fd5a-4818-9327-a70497c9e336": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065695023s
STEP: Saw pod success
Feb 12 13:20:37.340: INFO: Pod "downwardapi-volume-e0e41fa5-fd5a-4818-9327-a70497c9e336" satisfied condition "success or failure"
Feb 12 13:20:37.344: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e0e41fa5-fd5a-4818-9327-a70497c9e336 container client-container: 
STEP: delete the pod
Feb 12 13:20:37.418: INFO: Waiting for pod downwardapi-volume-e0e41fa5-fd5a-4818-9327-a70497c9e336 to disappear
Feb 12 13:20:37.435: INFO: Pod downwardapi-volume-e0e41fa5-fd5a-4818-9327-a70497c9e336 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:20:37.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9737" for this suite.
Feb 12 13:20:45.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:20:45.759: INFO: namespace projected-9737 deletion completed in 8.315879791s

• [SLOW TEST:18.549 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
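The downwardAPI test above creates a pod whose projected volume exposes `limits.memory` as a file, then checks the container's output. A minimal sketch of that kind of manifest, assuming illustrative names (`podinfo`, mount path, image; only `client-container` and the 5m success-or-failure pattern come from the log):

```python
# Sketch of a pod manifest exposing a container's memory limit through a
# projected downwardAPI volume, mirroring what the e2e test exercises.
# Volume name, mount path, and image are illustrative placeholders.
def downward_api_pod(name, memory_limit="64Mi"):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",  # pod runs once, then Succeeded/Failed
            "containers": [{
                "name": "client-container",
                "image": "busybox",
                # Print the projected file so the test can read it from logs.
                "command": ["sh", "-c", "cat /etc/podinfo/memory_limit"],
                "resources": {"limits": {"memory": memory_limit}},
                "volumeMounts": [{"name": "podinfo",
                                  "mountPath": "/etc/podinfo"}],
            }],
            "volumes": [{
                "name": "podinfo",
                "projected": {
                    "sources": [{
                        "downwardAPI": {
                            "items": [{
                                "path": "memory_limit",
                                "resourceFieldRef": {
                                    "containerName": "client-container",
                                    "resource": "limits.memory",
                                },
                            }],
                        },
                    }],
                },
            }],
        },
    }

pod = downward_api_pod("downwardapi-volume-demo")
```

The test then waits up to 5m for the pod to reach `Succeeded` (the "success or failure" condition in the log) and compares the container log against the expected limit in bytes.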
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:20:45.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 12 13:20:46.018: INFO: Waiting up to 5m0s for pod "pod-6d290536-f01b-4e4b-ab43-e1988dd95b88" in namespace "emptydir-1557" to be "success or failure"
Feb 12 13:20:46.040: INFO: Pod "pod-6d290536-f01b-4e4b-ab43-e1988dd95b88": Phase="Pending", Reason="", readiness=false. Elapsed: 21.786878ms
Feb 12 13:20:48.050: INFO: Pod "pod-6d290536-f01b-4e4b-ab43-e1988dd95b88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031683712s
Feb 12 13:20:50.061: INFO: Pod "pod-6d290536-f01b-4e4b-ab43-e1988dd95b88": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042862921s
Feb 12 13:20:52.069: INFO: Pod "pod-6d290536-f01b-4e4b-ab43-e1988dd95b88": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050164536s
Feb 12 13:20:54.078: INFO: Pod "pod-6d290536-f01b-4e4b-ab43-e1988dd95b88": Phase="Running", Reason="", readiness=true. Elapsed: 8.059981852s
Feb 12 13:20:56.086: INFO: Pod "pod-6d290536-f01b-4e4b-ab43-e1988dd95b88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.067804946s
STEP: Saw pod success
Feb 12 13:20:56.086: INFO: Pod "pod-6d290536-f01b-4e4b-ab43-e1988dd95b88" satisfied condition "success or failure"
Feb 12 13:20:56.092: INFO: Trying to get logs from node iruya-node pod pod-6d290536-f01b-4e4b-ab43-e1988dd95b88 container test-container: 
STEP: delete the pod
Feb 12 13:20:56.207: INFO: Waiting for pod pod-6d290536-f01b-4e4b-ab43-e1988dd95b88 to disappear
Feb 12 13:20:56.215: INFO: Pod pod-6d290536-f01b-4e4b-ab43-e1988dd95b88 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:20:56.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1557" for this suite.
Feb 12 13:21:02.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:21:02.334: INFO: namespace emptydir-1557 deletion completed in 6.110955862s

• [SLOW TEST:16.575 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
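The `(non-root,0644,default)` emptyDir case above creates a pod that writes a 0644-mode file into an emptyDir volume on the node's default medium while running as a non-root user. A sketch of such a manifest, assuming illustrative details (UID, image, shell command; the real test uses a dedicated mounttest image rather than a shell):

```python
# Sketch of the emptyDir pod shape the (non-root,0644,default) case
# exercises: default-medium emptyDir, file created with mode 0644,
# non-root security context. UID, image, and command are illustrative.
def emptydir_pod(name, mode=0o644):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "securityContext": {"runAsUser": 1001},  # non-root
            "containers": [{
                "name": "test-container",
                "image": "busybox",
                # Write a file, set its mode, and report it back via stdout.
                "command": ["sh", "-c",
                            f"echo data > /test-volume/f"
                            f" && chmod {mode:o} /test-volume/f"
                            f" && stat -c '%a' /test-volume/f"],
                "volumeMounts": [{"name": "test-volume",
                                  "mountPath": "/test-volume"}],
            }],
            # Omitting "medium" selects the node's default (disk-backed)
            # medium, as opposed to emptyDir.medium: Memory (tmpfs).
            "volumes": [{"name": "test-volume", "emptyDir": {}}],
        },
    }

pod = emptydir_pod("pod-emptydir-demo")
```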
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:21:02.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb 12 13:21:02.456: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9485,SelfLink:/api/v1/namespaces/watch-9485/configmaps/e2e-watch-test-watch-closed,UID:5ad6d4b0-20eb-4978-aba0-e3825f01b62a,ResourceVersion:24072666,Generation:0,CreationTimestamp:2020-02-12 13:21:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 12 13:21:02.456: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9485,SelfLink:/api/v1/namespaces/watch-9485/configmaps/e2e-watch-test-watch-closed,UID:5ad6d4b0-20eb-4978-aba0-e3825f01b62a,ResourceVersion:24072668,Generation:0,CreationTimestamp:2020-02-12 13:21:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb 12 13:21:02.479: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9485,SelfLink:/api/v1/namespaces/watch-9485/configmaps/e2e-watch-test-watch-closed,UID:5ad6d4b0-20eb-4978-aba0-e3825f01b62a,ResourceVersion:24072669,Generation:0,CreationTimestamp:2020-02-12 13:21:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 12 13:21:02.480: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9485,SelfLink:/api/v1/namespaces/watch-9485/configmaps/e2e-watch-test-watch-closed,UID:5ad6d4b0-20eb-4978-aba0-e3825f01b62a,ResourceVersion:24072670,Generation:0,CreationTimestamp:2020-02-12 13:21:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:21:02.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9485" for this suite.
Feb 12 13:21:08.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:21:08.680: INFO: namespace watch-9485 deletion completed in 6.194676695s

• [SLOW TEST:6.346 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
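The Watchers test above demonstrates the standard resume pattern: remember the `resourceVersion` of the last event you observed, and when the watch closes, open a new watch starting from that version so no intervening changes are missed. A sketch of the control flow, with the event stream simulated in-process (a real client would pass `resourceVersion=...` to the watch API; the resourceVersion values are taken from the log):

```python
# Sketch of resuming a watch from the last observed resourceVersion.
# Events are simulated; the filter stands in for the API server's
# behavior of replaying changes after the given version.
def watch_from(events, resource_version):
    # A real implementation would open a new watch request with
    # resourceVersion=<last seen>; here we filter the simulated stream.
    return [e for e in events if int(e["resourceVersion"]) > int(resource_version)]

stream = [
    {"type": "ADDED",    "resourceVersion": "24072666", "mutation": 0},
    {"type": "MODIFIED", "resourceVersion": "24072668", "mutation": 1},
    {"type": "MODIFIED", "resourceVersion": "24072669", "mutation": 2},
    {"type": "DELETED",  "resourceVersion": "24072670", "mutation": 2},
]

# First watch sees two notifications (ADDED, MODIFIED), then is closed.
first = stream[:2]
last_rv = first[-1]["resourceVersion"]

# Second watch resumes from the last observed resourceVersion and
# receives exactly the changes made while the watch was closed.
second = watch_from(stream, last_rv)
```

This matches the log above: the restarted watch delivers the second MODIFIED (mutation: 2) and the DELETED event, nothing earlier and nothing duplicated.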
SSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:21:08.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-7fx4t in namespace proxy-7982
I0212 13:21:08.800925       8 runners.go:180] Created replication controller with name: proxy-service-7fx4t, namespace: proxy-7982, replica count: 1
I0212 13:21:09.851739       8 runners.go:180] proxy-service-7fx4t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 13:21:10.852119       8 runners.go:180] proxy-service-7fx4t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 13:21:11.852484       8 runners.go:180] proxy-service-7fx4t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 13:21:12.852959       8 runners.go:180] proxy-service-7fx4t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 13:21:13.853411       8 runners.go:180] proxy-service-7fx4t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 13:21:14.853742       8 runners.go:180] proxy-service-7fx4t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 13:21:15.854064       8 runners.go:180] proxy-service-7fx4t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 13:21:16.854789       8 runners.go:180] proxy-service-7fx4t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 13:21:17.855586       8 runners.go:180] proxy-service-7fx4t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0212 13:21:18.856046       8 runners.go:180] proxy-service-7fx4t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0212 13:21:19.856656       8 runners.go:180] proxy-service-7fx4t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0212 13:21:20.857246       8 runners.go:180] proxy-service-7fx4t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0212 13:21:21.857894       8 runners.go:180] proxy-service-7fx4t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0212 13:21:22.858675       8 runners.go:180] proxy-service-7fx4t Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 12 13:21:22.873: INFO: setup took 14.129266259s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
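The attempts below all hit apiserver proxy subresource paths of the form `/api/v1/namespaces/<ns>/{pods,services}/[<scheme>:]<name>[:<port>]/proxy/`. A sketch of how those paths are composed (namespace and resource names are taken from the log; the helper functions themselves are illustrative, not part of any client library):

```python
# Illustrative helpers composing apiserver proxy subresource URL paths,
# matching the path scheme exercised by the attempts in the log.
def pod_proxy_path(namespace, pod, scheme=None, port=None):
    # Optional scheme prefix ("http"/"https") and port suffix are encoded
    # into the resource name segment, e.g. "https:mypod:443".
    name = pod
    if scheme:
        name = f"{scheme}:{name}"
    if port is not None:
        name = f"{name}:{port}"
    return f"/api/v1/namespaces/{namespace}/pods/{name}/proxy/"

def service_proxy_path(namespace, service, scheme=None, port_name=None):
    # Services address a named port (e.g. "portname1", "tlsportname1")
    # instead of a numeric one.
    name = service
    if scheme:
        name = f"{scheme}:{name}"
    if port_name:
        name = f"{name}:{port_name}"
    return f"/api/v1/namespaces/{namespace}/services/{name}/proxy/"
```

For example, `pod_proxy_path("proxy-7982", "proxy-service-7fx4t-4mlbj", "http", 1080)` yields the same path as the first pod attempt logged below.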
Feb 12 13:21:22.958: INFO: (0) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname2/proxy/: bar (200; 84.413049ms)
Feb 12 13:21:22.959: INFO: (0) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... (200; 85.215119ms)
Feb 12 13:21:22.961: INFO: (0) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname1/proxy/: foo (200; 86.58373ms)
Feb 12 13:21:22.961: INFO: (0) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 87.469865ms)
Feb 12 13:21:22.965: INFO: (0) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 90.729219ms)
Feb 12 13:21:22.965: INFO: (0) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj/proxy/: test (200; 91.446911ms)
Feb 12 13:21:22.965: INFO: (0) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 91.351239ms)
Feb 12 13:21:22.965: INFO: (0) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname2/proxy/: bar (200; 91.601583ms)
Feb 12 13:21:22.966: INFO: (0) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname1/proxy/: foo (200; 91.516793ms)
Feb 12 13:21:22.966: INFO: (0) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 91.550161ms)
Feb 12 13:21:22.968: INFO: (0) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:1080/proxy/: test<... (200; 94.126632ms)
Feb 12 13:21:23.038: INFO: (0) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:462/proxy/: tls qux (200; 163.787829ms)
Feb 12 13:21:23.038: INFO: (0) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test<... (200; 41.470526ms)
Feb 12 13:21:23.080: INFO: (1) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 41.232707ms)
Feb 12 13:21:23.080: INFO: (1) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:462/proxy/: tls qux (200; 41.561818ms)
Feb 12 13:21:23.090: INFO: (1) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test (200; 51.18929ms)
Feb 12 13:21:23.090: INFO: (1) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 51.261313ms)
Feb 12 13:21:23.090: INFO: (1) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... (200; 51.457454ms)
Feb 12 13:21:23.097: INFO: (1) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 58.207111ms)
Feb 12 13:21:23.097: INFO: (1) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname1/proxy/: foo (200; 57.840966ms)
Feb 12 13:21:23.097: INFO: (1) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname1/proxy/: tls baz (200; 58.056955ms)
Feb 12 13:21:23.097: INFO: (1) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname2/proxy/: tls qux (200; 58.405681ms)
Feb 12 13:21:23.097: INFO: (1) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname1/proxy/: foo (200; 58.033682ms)
Feb 12 13:21:23.097: INFO: (1) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname2/proxy/: bar (200; 57.993516ms)
Feb 12 13:21:23.097: INFO: (1) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname2/proxy/: bar (200; 58.375341ms)
Feb 12 13:21:23.117: INFO: (2) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 19.688549ms)
Feb 12 13:21:23.117: INFO: (2) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj/proxy/: test (200; 19.595629ms)
Feb 12 13:21:23.118: INFO: (2) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 19.983311ms)
Feb 12 13:21:23.118: INFO: (2) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:462/proxy/: tls qux (200; 19.834641ms)
Feb 12 13:21:23.118: INFO: (2) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... (200; 20.722569ms)
Feb 12 13:21:23.118: INFO: (2) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test<... (200; 26.527834ms)
Feb 12 13:21:23.125: INFO: (2) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname1/proxy/: foo (200; 27.580139ms)
Feb 12 13:21:23.125: INFO: (2) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname1/proxy/: foo (200; 27.73865ms)
Feb 12 13:21:23.125: INFO: (2) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname2/proxy/: bar (200; 27.593908ms)
Feb 12 13:21:23.126: INFO: (2) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 28.577447ms)
Feb 12 13:21:23.143: INFO: (3) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 16.568448ms)
Feb 12 13:21:23.143: INFO: (3) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test<... (200; 17.095156ms)
Feb 12 13:21:23.146: INFO: (3) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 19.834571ms)
Feb 12 13:21:23.146: INFO: (3) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 19.985109ms)
Feb 12 13:21:23.146: INFO: (3) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 19.920678ms)
Feb 12 13:21:23.146: INFO: (3) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... (200; 20.269128ms)
Feb 12 13:21:23.146: INFO: (3) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 19.987024ms)
Feb 12 13:21:23.146: INFO: (3) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname2/proxy/: tls qux (200; 20.012919ms)
Feb 12 13:21:23.146: INFO: (3) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj/proxy/: test (200; 20.130113ms)
Feb 12 13:21:23.148: INFO: (3) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:462/proxy/: tls qux (200; 21.663098ms)
Feb 12 13:21:23.149: INFO: (3) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname2/proxy/: bar (200; 22.79416ms)
Feb 12 13:21:23.149: INFO: (3) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname1/proxy/: tls baz (200; 22.802703ms)
Feb 12 13:21:23.149: INFO: (3) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname1/proxy/: foo (200; 22.630257ms)
Feb 12 13:21:23.149: INFO: (3) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname2/proxy/: bar (200; 22.976595ms)
Feb 12 13:21:23.150: INFO: (3) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname1/proxy/: foo (200; 24.024632ms)
Feb 12 13:21:23.163: INFO: (4) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 12.130389ms)
Feb 12 13:21:23.163: INFO: (4) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... (200; 12.652403ms)
Feb 12 13:21:23.163: INFO: (4) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 12.607966ms)
Feb 12 13:21:23.164: INFO: (4) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj/proxy/: test (200; 13.012625ms)
Feb 12 13:21:23.164: INFO: (4) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:1080/proxy/: test<... (200; 13.222711ms)
Feb 12 13:21:23.164: INFO: (4) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 13.081505ms)
Feb 12 13:21:23.164: INFO: (4) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 13.345492ms)
Feb 12 13:21:23.166: INFO: (4) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:462/proxy/: tls qux (200; 14.943604ms)
Feb 12 13:21:23.166: INFO: (4) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 15.344819ms)
Feb 12 13:21:23.168: INFO: (4) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test<... (200; 9.977051ms)
Feb 12 13:21:23.180: INFO: (5) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... (200; 10.184007ms)
Feb 12 13:21:23.180: INFO: (5) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 10.485131ms)
Feb 12 13:21:23.181: INFO: (5) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname1/proxy/: foo (200; 10.737991ms)
Feb 12 13:21:23.181: INFO: (5) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 10.816837ms)
Feb 12 13:21:23.181: INFO: (5) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj/proxy/: test (200; 10.818813ms)
Feb 12 13:21:23.182: INFO: (5) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 12.319796ms)
Feb 12 13:21:23.183: INFO: (5) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 13.234461ms)
Feb 12 13:21:23.183: INFO: (5) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname2/proxy/: bar (200; 13.287135ms)
Feb 12 13:21:23.183: INFO: (5) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test (200; 9.545554ms)
Feb 12 13:21:23.196: INFO: (6) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 9.235663ms)
Feb 12 13:21:23.198: INFO: (6) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: ... (200; 11.398223ms)
Feb 12 13:21:23.198: INFO: (6) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:462/proxy/: tls qux (200; 11.630056ms)
Feb 12 13:21:23.200: INFO: (6) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:1080/proxy/: test<... (200; 13.353221ms)
Feb 12 13:21:23.200: INFO: (6) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname2/proxy/: tls qux (200; 13.870862ms)
Feb 12 13:21:23.201: INFO: (6) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname1/proxy/: foo (200; 14.021789ms)
Feb 12 13:21:23.201: INFO: (6) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname1/proxy/: tls baz (200; 14.257559ms)
Feb 12 13:21:23.201: INFO: (6) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname2/proxy/: bar (200; 13.982431ms)
Feb 12 13:21:23.201: INFO: (6) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname1/proxy/: foo (200; 14.34601ms)
Feb 12 13:21:23.202: INFO: (6) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname2/proxy/: bar (200; 15.349349ms)
Feb 12 13:21:23.211: INFO: (7) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj/proxy/: test (200; 8.915105ms)
Feb 12 13:21:23.211: INFO: (7) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 9.096251ms)
Feb 12 13:21:23.212: INFO: (7) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 10.3939ms)
Feb 12 13:21:23.214: INFO: (7) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:462/proxy/: tls qux (200; 12.255376ms)
Feb 12 13:21:23.215: INFO: (7) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:1080/proxy/: test<... (200; 12.600245ms)
Feb 12 13:21:23.215: INFO: (7) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname1/proxy/: foo (200; 13.166803ms)
Feb 12 13:21:23.215: INFO: (7) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: ... (200; 13.595272ms)
Feb 12 13:21:23.216: INFO: (7) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 13.933676ms)
Feb 12 13:21:23.219: INFO: (7) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname2/proxy/: bar (200; 16.833315ms)
Feb 12 13:21:23.219: INFO: (7) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname2/proxy/: bar (200; 17.195839ms)
Feb 12 13:21:23.221: INFO: (7) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname1/proxy/: foo (200; 18.963226ms)
Feb 12 13:21:23.222: INFO: (7) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname1/proxy/: tls baz (200; 19.594124ms)
Feb 12 13:21:23.222: INFO: (7) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 19.632762ms)
Feb 12 13:21:23.222: INFO: (7) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname2/proxy/: tls qux (200; 20.034521ms)
Feb 12 13:21:23.235: INFO: (8) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 12.299012ms)
Feb 12 13:21:23.238: INFO: (8) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 15.743016ms)
Feb 12 13:21:23.239: INFO: (8) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj/proxy/: test (200; 16.922122ms)
Feb 12 13:21:23.241: INFO: (8) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:462/proxy/: tls qux (200; 18.154857ms)
Feb 12 13:21:23.241: INFO: (8) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 18.455563ms)
Feb 12 13:21:23.241: INFO: (8) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: ... (200; 18.525844ms)
Feb 12 13:21:23.241: INFO: (8) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 18.787352ms)
Feb 12 13:21:23.244: INFO: (8) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname2/proxy/: bar (200; 21.743917ms)
Feb 12 13:21:23.245: INFO: (8) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname2/proxy/: tls qux (200; 22.141452ms)
Feb 12 13:21:23.245: INFO: (8) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:1080/proxy/: test<... (200; 22.408841ms)
Feb 12 13:21:23.246: INFO: (8) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname1/proxy/: foo (200; 23.052776ms)
Feb 12 13:21:23.247: INFO: (8) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname1/proxy/: tls baz (200; 24.253624ms)
Feb 12 13:21:23.247: INFO: (8) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 24.243926ms)
Feb 12 13:21:23.247: INFO: (8) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname1/proxy/: foo (200; 24.087563ms)
Feb 12 13:21:23.247: INFO: (8) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname2/proxy/: bar (200; 24.301597ms)
Feb 12 13:21:23.267: INFO: (9) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 20.064652ms)
Feb 12 13:21:23.267: INFO: (9) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname1/proxy/: foo (200; 20.085128ms)
Feb 12 13:21:23.268: INFO: (9) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 19.931298ms)
Feb 12 13:21:23.268: INFO: (9) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 20.832647ms)
Feb 12 13:21:23.269: INFO: (9) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:1080/proxy/: test<... (200; 21.019532ms)
Feb 12 13:21:23.269: INFO: (9) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... (200; 20.830717ms)
Feb 12 13:21:23.270: INFO: (9) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 22.505309ms)
Feb 12 13:21:23.270: INFO: (9) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname1/proxy/: tls baz (200; 22.269462ms)
Feb 12 13:21:23.271: INFO: (9) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname2/proxy/: tls qux (200; 23.337456ms)
Feb 12 13:21:23.271: INFO: (9) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:462/proxy/: tls qux (200; 23.329189ms)
Feb 12 13:21:23.271: INFO: (9) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 23.773332ms)
Feb 12 13:21:23.271: INFO: (9) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname1/proxy/: foo (200; 24.011385ms)
Feb 12 13:21:23.271: INFO: (9) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj/proxy/: test (200; 23.931949ms)
Feb 12 13:21:23.272: INFO: (9) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname2/proxy/: bar (200; 24.224549ms)
Feb 12 13:21:23.272: INFO: (9) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname2/proxy/: bar (200; 24.660504ms)
Feb 12 13:21:23.272: INFO: (9) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test (200; 13.351532ms)
Feb 12 13:21:23.288: INFO: (10) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 15.170195ms)
Feb 12 13:21:23.289: INFO: (10) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 16.518039ms)
Feb 12 13:21:23.290: INFO: (10) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname1/proxy/: tls baz (200; 17.57662ms)
Feb 12 13:21:23.290: INFO: (10) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname2/proxy/: tls qux (200; 17.235788ms)
Feb 12 13:21:23.291: INFO: (10) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname1/proxy/: foo (200; 17.651159ms)
Feb 12 13:21:23.291: INFO: (10) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname2/proxy/: bar (200; 18.110922ms)
Feb 12 13:21:23.292: INFO: (10) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... (200; 18.673746ms)
Feb 12 13:21:23.292: INFO: (10) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname2/proxy/: bar (200; 19.109903ms)
Feb 12 13:21:23.292: INFO: (10) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:1080/proxy/: test<... (200; 19.007639ms)
Feb 12 13:21:23.292: INFO: (10) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test (200; 13.979902ms)
Feb 12 13:21:23.309: INFO: (11) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 13.546141ms)
Feb 12 13:21:23.309: INFO: (11) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 13.8313ms)
Feb 12 13:21:23.314: INFO: (11) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test<... (200; 18.956171ms)
Feb 12 13:21:23.315: INFO: (11) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 19.168285ms)
Feb 12 13:21:23.315: INFO: (11) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname1/proxy/: foo (200; 19.400994ms)
Feb 12 13:21:23.315: INFO: (11) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname2/proxy/: bar (200; 19.847241ms)
Feb 12 13:21:23.315: INFO: (11) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 19.516438ms)
Feb 12 13:21:23.315: INFO: (11) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... (200; 19.590845ms)
Feb 12 13:21:23.316: INFO: (11) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname2/proxy/: bar (200; 20.652449ms)
Feb 12 13:21:23.316: INFO: (11) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname1/proxy/: foo (200; 20.865759ms)
Feb 12 13:21:23.317: INFO: (11) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname2/proxy/: tls qux (200; 21.547579ms)
Feb 12 13:21:23.318: INFO: (11) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname1/proxy/: tls baz (200; 21.996981ms)
Feb 12 13:21:23.332: INFO: (12) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:1080/proxy/: test<... (200; 14.344227ms)
Feb 12 13:21:23.333: INFO: (12) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 14.852734ms)
Feb 12 13:21:23.333: INFO: (12) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 14.880217ms)
Feb 12 13:21:23.334: INFO: (12) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... (200; 16.12148ms)
Feb 12 13:21:23.334: INFO: (12) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test (200; 16.471167ms)
Feb 12 13:21:23.336: INFO: (12) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 17.895805ms)
Feb 12 13:21:23.336: INFO: (12) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:462/proxy/: tls qux (200; 18.146696ms)
Feb 12 13:21:23.339: INFO: (12) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname2/proxy/: bar (200; 20.966505ms)
Feb 12 13:21:23.339: INFO: (12) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname2/proxy/: bar (200; 21.001656ms)
Feb 12 13:21:23.340: INFO: (12) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname2/proxy/: tls qux (200; 21.612141ms)
Feb 12 13:21:23.340: INFO: (12) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname1/proxy/: foo (200; 22.431171ms)
Feb 12 13:21:23.341: INFO: (12) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname1/proxy/: tls baz (200; 22.470484ms)
Feb 12 13:21:23.344: INFO: (12) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname1/proxy/: foo (200; 26.136588ms)
Feb 12 13:21:23.365: INFO: (13) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:1080/proxy/: test<... (200; 19.488384ms)
Feb 12 13:21:23.365: INFO: (13) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname2/proxy/: bar (200; 19.70173ms)
Feb 12 13:21:23.366: INFO: (13) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname2/proxy/: bar (200; 20.530135ms)
Feb 12 13:21:23.366: INFO: (13) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 21.524262ms)
Feb 12 13:21:23.366: INFO: (13) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname1/proxy/: foo (200; 21.881299ms)
Feb 12 13:21:23.368: INFO: (13) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname2/proxy/: tls qux (200; 22.669281ms)
Feb 12 13:21:23.368: INFO: (13) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:462/proxy/: tls qux (200; 22.745586ms)
Feb 12 13:21:23.369: INFO: (13) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 24.248851ms)
Feb 12 13:21:23.370: INFO: (13) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 25.04042ms)
Feb 12 13:21:23.370: INFO: (13) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname1/proxy/: tls baz (200; 25.665253ms)
Feb 12 13:21:23.370: INFO: (13) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test (200; 25.675233ms)
Feb 12 13:21:23.370: INFO: (13) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 25.59017ms)
Feb 12 13:21:23.370: INFO: (13) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 25.400596ms)
Feb 12 13:21:23.370: INFO: (13) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... (200; 25.469114ms)
Feb 12 13:21:23.380: INFO: (14) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj/proxy/: test (200; 9.735146ms)
Feb 12 13:21:23.380: INFO: (14) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 9.525161ms)
Feb 12 13:21:23.381: INFO: (14) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 9.960071ms)
Feb 12 13:21:23.381: INFO: (14) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 10.333896ms)
Feb 12 13:21:23.382: INFO: (14) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 10.726485ms)
Feb 12 13:21:23.382: INFO: (14) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 11.016053ms)
Feb 12 13:21:23.382: INFO: (14) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... (200; 11.221689ms)
Feb 12 13:21:23.382: INFO: (14) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test<... (200; 11.187611ms)
Feb 12 13:21:23.384: INFO: (14) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:462/proxy/: tls qux (200; 12.797384ms)
Feb 12 13:21:23.385: INFO: (14) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname1/proxy/: foo (200; 13.711374ms)
Feb 12 13:21:23.385: INFO: (14) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname1/proxy/: tls baz (200; 14.052491ms)
Feb 12 13:21:23.388: INFO: (14) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname2/proxy/: bar (200; 16.890487ms)
Feb 12 13:21:23.388: INFO: (14) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname1/proxy/: foo (200; 16.839215ms)
Feb 12 13:21:23.388: INFO: (14) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname2/proxy/: tls qux (200; 17.267706ms)
Feb 12 13:21:23.388: INFO: (14) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname2/proxy/: bar (200; 17.266544ms)
Feb 12 13:21:23.399: INFO: (15) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 10.809834ms)
Feb 12 13:21:23.400: INFO: (15) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj/proxy/: test (200; 11.459416ms)
Feb 12 13:21:23.401: INFO: (15) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname1/proxy/: foo (200; 12.191698ms)
Feb 12 13:21:23.401: INFO: (15) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 12.16259ms)
Feb 12 13:21:23.401: INFO: (15) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname2/proxy/: bar (200; 13.047804ms)
Feb 12 13:21:23.402: INFO: (15) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 13.327138ms)
Feb 12 13:21:23.402: INFO: (15) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname2/proxy/: tls qux (200; 13.198804ms)
Feb 12 13:21:23.402: INFO: (15) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test<... (200; 16.953861ms)
Feb 12 13:21:23.407: INFO: (15) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... (200; 18.747787ms)
Feb 12 13:21:23.407: INFO: (15) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:462/proxy/: tls qux (200; 19.055153ms)
Feb 12 13:21:23.407: INFO: (15) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname2/proxy/: bar (200; 18.922822ms)
Feb 12 13:21:23.411: INFO: (15) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname1/proxy/: foo (200; 22.428887ms)
Feb 12 13:21:23.411: INFO: (15) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 22.533258ms)
Feb 12 13:21:23.411: INFO: (15) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname1/proxy/: tls baz (200; 22.548274ms)
Feb 12 13:21:23.411: INFO: (15) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 22.569119ms)
Feb 12 13:21:23.430: INFO: (16) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 18.654526ms)
Feb 12 13:21:23.431: INFO: (16) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 19.118942ms)
Feb 12 13:21:23.431: INFO: (16) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... (200; 19.29589ms)
Feb 12 13:21:23.431: INFO: (16) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj/proxy/: test (200; 19.12447ms)
Feb 12 13:21:23.432: INFO: (16) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 20.137537ms)
Feb 12 13:21:23.432: INFO: (16) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:462/proxy/: tls qux (200; 20.514447ms)
Feb 12 13:21:23.434: INFO: (16) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname2/proxy/: tls qux (200; 22.70538ms)
Feb 12 13:21:23.443: INFO: (16) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test<... (200; 35.071419ms)
Feb 12 13:21:23.447: INFO: (16) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname1/proxy/: tls baz (200; 35.529425ms)
Feb 12 13:21:23.470: INFO: (17) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname1/proxy/: foo (200; 22.994159ms)
Feb 12 13:21:23.472: INFO: (17) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 24.23728ms)
Feb 12 13:21:23.472: INFO: (17) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname2/proxy/: bar (200; 24.425702ms)
Feb 12 13:21:23.472: INFO: (17) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... (200; 25.147243ms)
Feb 12 13:21:23.472: INFO: (17) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 24.835234ms)
Feb 12 13:21:23.472: INFO: (17) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj/proxy/: test (200; 24.959793ms)
Feb 12 13:21:23.473: INFO: (17) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname1/proxy/: foo (200; 26.016409ms)
Feb 12 13:21:23.473: INFO: (17) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test<... (200; 27.332396ms)
Feb 12 13:21:23.474: INFO: (17) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname2/proxy/: tls qux (200; 27.173731ms)
Feb 12 13:21:23.474: INFO: (17) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 27.242632ms)
Feb 12 13:21:23.474: INFO: (17) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:462/proxy/: tls qux (200; 27.149875ms)
Feb 12 13:21:23.477: INFO: (17) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 29.313135ms)
Feb 12 13:21:23.477: INFO: (17) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname2/proxy/: bar (200; 29.84559ms)
Feb 12 13:21:23.478: INFO: (17) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname1/proxy/: tls baz (200; 30.821235ms)
Feb 12 13:21:23.478: INFO: (17) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 31.14268ms)
Feb 12 13:21:23.509: INFO: (18) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj/proxy/: test (200; 31.053026ms)
Feb 12 13:21:23.510: INFO: (18) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 31.83658ms)
Feb 12 13:21:23.511: INFO: (18) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 32.038962ms)
Feb 12 13:21:23.511: INFO: (18) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname2/proxy/: tls qux (200; 32.673962ms)
Feb 12 13:21:23.511: INFO: (18) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:1080/proxy/: test<... (200; 33.071601ms)
Feb 12 13:21:23.511: INFO: (18) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 32.939108ms)
Feb 12 13:21:23.512: INFO: (18) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 33.084952ms)
Feb 12 13:21:23.512: INFO: (18) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... (200; 33.385858ms)
Feb 12 13:21:23.512: INFO: (18) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test<... (200; 17.769879ms)
Feb 12 13:21:23.541: INFO: (19) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test (200; 24.046258ms)
Feb 12 13:21:23.544: INFO: (19) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 25.009011ms)
Feb 12 13:21:23.545: INFO: (19) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:462/proxy/: tls qux (200; 25.120219ms)
Feb 12 13:21:23.545: INFO: (19) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... (200; 25.623354ms)
Feb 12 13:21:23.546: INFO: (19) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname1/proxy/: foo (200; 26.660011ms)
Feb 12 13:21:23.546: INFO: (19) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 27.152368ms)
Feb 12 13:21:23.547: INFO: (19) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 27.294536ms)
Feb 12 13:21:23.547: INFO: (19) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 27.570337ms)
STEP: deleting ReplicationController proxy-service-7fx4t in namespace proxy-7982, will wait for the garbage collector to delete the pods
Feb 12 13:21:23.615: INFO: Deleting ReplicationController proxy-service-7fx4t took: 10.60275ms
Feb 12 13:21:23.916: INFO: Terminating ReplicationController proxy-service-7fx4t pods took: 301.132236ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:21:36.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7982" for this suite.
Feb 12 13:21:42.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:21:42.777: INFO: namespace proxy-7982 deletion completed in 6.147856311s

• [SLOW TEST:34.097 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:21:42.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 12 13:21:42.923: INFO: Waiting up to 5m0s for pod "downward-api-ed48897a-234f-43f4-9362-0e0d5c94d84c" in namespace "downward-api-558" to be "success or failure"
Feb 12 13:21:42.944: INFO: Pod "downward-api-ed48897a-234f-43f4-9362-0e0d5c94d84c": Phase="Pending", Reason="", readiness=false. Elapsed: 21.211981ms
Feb 12 13:21:44.952: INFO: Pod "downward-api-ed48897a-234f-43f4-9362-0e0d5c94d84c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029370128s
Feb 12 13:21:46.961: INFO: Pod "downward-api-ed48897a-234f-43f4-9362-0e0d5c94d84c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038202206s
Feb 12 13:21:48.970: INFO: Pod "downward-api-ed48897a-234f-43f4-9362-0e0d5c94d84c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047236297s
Feb 12 13:21:50.976: INFO: Pod "downward-api-ed48897a-234f-43f4-9362-0e0d5c94d84c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052789519s
Feb 12 13:21:52.984: INFO: Pod "downward-api-ed48897a-234f-43f4-9362-0e0d5c94d84c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.061471275s
STEP: Saw pod success
Feb 12 13:21:52.985: INFO: Pod "downward-api-ed48897a-234f-43f4-9362-0e0d5c94d84c" satisfied condition "success or failure"
Feb 12 13:21:52.988: INFO: Trying to get logs from node iruya-node pod downward-api-ed48897a-234f-43f4-9362-0e0d5c94d84c container dapi-container: 
STEP: delete the pod
Feb 12 13:21:53.515: INFO: Waiting for pod downward-api-ed48897a-234f-43f4-9362-0e0d5c94d84c to disappear
Feb 12 13:21:53.532: INFO: Pod downward-api-ed48897a-234f-43f4-9362-0e0d5c94d84c no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:21:53.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-558" for this suite.
Feb 12 13:21:59.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:21:59.954: INFO: namespace downward-api-558 deletion completed in 6.400635991s

• [SLOW TEST:17.176 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:21:59.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 12 13:22:08.691: INFO: Successfully updated pod "labelsupdate759b199d-f4f9-4e8a-a943-0b80a6c4e3d8"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:22:12.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2060" for this suite.
Feb 12 13:22:34.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:22:35.003: INFO: namespace downward-api-2060 deletion completed in 22.18009327s

• [SLOW TEST:35.049 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:22:35.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 12 13:22:35.101: INFO: Waiting up to 5m0s for pod "downwardapi-volume-675de254-4bec-4991-aa71-6b9551d085e0" in namespace "downward-api-4916" to be "success or failure"
Feb 12 13:22:35.107: INFO: Pod "downwardapi-volume-675de254-4bec-4991-aa71-6b9551d085e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.781359ms
Feb 12 13:22:37.118: INFO: Pod "downwardapi-volume-675de254-4bec-4991-aa71-6b9551d085e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017591931s
Feb 12 13:22:39.129: INFO: Pod "downwardapi-volume-675de254-4bec-4991-aa71-6b9551d085e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02820845s
Feb 12 13:22:41.140: INFO: Pod "downwardapi-volume-675de254-4bec-4991-aa71-6b9551d085e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039646332s
Feb 12 13:22:43.148: INFO: Pod "downwardapi-volume-675de254-4bec-4991-aa71-6b9551d085e0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047324493s
Feb 12 13:22:45.165: INFO: Pod "downwardapi-volume-675de254-4bec-4991-aa71-6b9551d085e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.064201717s
STEP: Saw pod success
Feb 12 13:22:45.165: INFO: Pod "downwardapi-volume-675de254-4bec-4991-aa71-6b9551d085e0" satisfied condition "success or failure"
Feb 12 13:22:45.172: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-675de254-4bec-4991-aa71-6b9551d085e0 container client-container: 
STEP: delete the pod
Feb 12 13:22:45.351: INFO: Waiting for pod downwardapi-volume-675de254-4bec-4991-aa71-6b9551d085e0 to disappear
Feb 12 13:22:45.390: INFO: Pod downwardapi-volume-675de254-4bec-4991-aa71-6b9551d085e0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:22:45.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4916" for this suite.
Feb 12 13:22:51.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:22:51.725: INFO: namespace downward-api-4916 deletion completed in 6.327443869s

• [SLOW TEST:16.722 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:22:51.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 12 13:23:12.000: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 12 13:23:12.038: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 12 13:23:14.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 12 13:23:14.045: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 12 13:23:16.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 12 13:23:16.045: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 12 13:23:18.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 12 13:23:18.046: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 12 13:23:20.039: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 12 13:23:20.051: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 12 13:23:22.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 12 13:23:22.047: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 12 13:23:24.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 12 13:23:24.049: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 12 13:23:26.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 12 13:23:26.043: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 12 13:23:28.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 12 13:23:28.046: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 12 13:23:30.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 12 13:23:30.049: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 12 13:23:32.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 12 13:23:32.045: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 12 13:23:34.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 12 13:23:34.059: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 12 13:23:36.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 12 13:23:36.049: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 12 13:23:38.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 12 13:23:38.046: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:23:38.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7831" for this suite.
Feb 12 13:24:00.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:24:00.295: INFO: namespace container-lifecycle-hook-7831 deletion completed in 22.215434591s

• [SLOW TEST:68.569 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:24:00.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Feb 12 13:24:00.425: INFO: Waiting up to 5m0s for pod "client-containers-8cc9f839-d1b7-44a0-ba1b-e8407a80bc67" in namespace "containers-652" to be "success or failure"
Feb 12 13:24:00.448: INFO: Pod "client-containers-8cc9f839-d1b7-44a0-ba1b-e8407a80bc67": Phase="Pending", Reason="", readiness=false. Elapsed: 22.425099ms
Feb 12 13:24:02.456: INFO: Pod "client-containers-8cc9f839-d1b7-44a0-ba1b-e8407a80bc67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030859261s
Feb 12 13:24:04.467: INFO: Pod "client-containers-8cc9f839-d1b7-44a0-ba1b-e8407a80bc67": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0413931s
Feb 12 13:24:06.477: INFO: Pod "client-containers-8cc9f839-d1b7-44a0-ba1b-e8407a80bc67": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051525813s
Feb 12 13:24:08.492: INFO: Pod "client-containers-8cc9f839-d1b7-44a0-ba1b-e8407a80bc67": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066184723s
Feb 12 13:24:10.505: INFO: Pod "client-containers-8cc9f839-d1b7-44a0-ba1b-e8407a80bc67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.079625772s
STEP: Saw pod success
Feb 12 13:24:10.505: INFO: Pod "client-containers-8cc9f839-d1b7-44a0-ba1b-e8407a80bc67" satisfied condition "success or failure"
Feb 12 13:24:10.516: INFO: Trying to get logs from node iruya-node pod client-containers-8cc9f839-d1b7-44a0-ba1b-e8407a80bc67 container test-container: 
STEP: delete the pod
Feb 12 13:24:10.586: INFO: Waiting for pod client-containers-8cc9f839-d1b7-44a0-ba1b-e8407a80bc67 to disappear
Feb 12 13:24:10.618: INFO: Pod client-containers-8cc9f839-d1b7-44a0-ba1b-e8407a80bc67 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:24:10.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-652" for this suite.
Feb 12 13:24:16.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:24:16.757: INFO: namespace containers-652 deletion completed in 6.13085404s

• [SLOW TEST:16.461 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:24:16.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3552
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-3552
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-3552
Feb 12 13:24:16.897: INFO: Found 0 stateful pods, waiting for 1
Feb 12 13:24:26.912: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb 12 13:24:26.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3552 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 12 13:24:29.899: INFO: stderr: "I0212 13:24:29.382714    1336 log.go:172] (0xc000b86420) (0xc0004268c0) Create stream\nI0212 13:24:29.382951    1336 log.go:172] (0xc000b86420) (0xc0004268c0) Stream added, broadcasting: 1\nI0212 13:24:29.391835    1336 log.go:172] (0xc000b86420) Reply frame received for 1\nI0212 13:24:29.391931    1336 log.go:172] (0xc000b86420) (0xc0007140a0) Create stream\nI0212 13:24:29.391949    1336 log.go:172] (0xc000b86420) (0xc0007140a0) Stream added, broadcasting: 3\nI0212 13:24:29.394183    1336 log.go:172] (0xc000b86420) Reply frame received for 3\nI0212 13:24:29.394299    1336 log.go:172] (0xc000b86420) (0xc000714140) Create stream\nI0212 13:24:29.394317    1336 log.go:172] (0xc000b86420) (0xc000714140) Stream added, broadcasting: 5\nI0212 13:24:29.398010    1336 log.go:172] (0xc000b86420) Reply frame received for 5\nI0212 13:24:29.586842    1336 log.go:172] (0xc000b86420) Data frame received for 5\nI0212 13:24:29.586923    1336 log.go:172] (0xc000714140) (5) Data frame handling\nI0212 13:24:29.586958    1336 log.go:172] (0xc000714140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0212 13:24:29.697916    1336 log.go:172] (0xc000b86420) Data frame received for 3\nI0212 13:24:29.698421    1336 log.go:172] (0xc0007140a0) (3) Data frame handling\nI0212 13:24:29.698541    1336 log.go:172] (0xc0007140a0) (3) Data frame sent\nI0212 13:24:29.874832    1336 log.go:172] (0xc000b86420) (0xc0007140a0) Stream removed, broadcasting: 3\nI0212 13:24:29.875303    1336 log.go:172] (0xc000b86420) Data frame received for 1\nI0212 13:24:29.875852    1336 log.go:172] (0xc000b86420) (0xc000714140) Stream removed, broadcasting: 5\nI0212 13:24:29.876543    1336 log.go:172] (0xc0004268c0) (1) Data frame handling\nI0212 13:24:29.876917    1336 log.go:172] (0xc0004268c0) (1) Data frame sent\nI0212 13:24:29.876972    1336 log.go:172] (0xc000b86420) (0xc0004268c0) Stream removed, broadcasting: 1\nI0212 13:24:29.877074    1336 log.go:172] (0xc000b86420) Go away received\nI0212 13:24:29.878734    1336 log.go:172] (0xc000b86420) (0xc0004268c0) Stream removed, broadcasting: 1\nI0212 13:24:29.878788    1336 log.go:172] (0xc000b86420) (0xc0007140a0) Stream removed, broadcasting: 3\nI0212 13:24:29.878802    1336 log.go:172] (0xc000b86420) (0xc000714140) Stream removed, broadcasting: 5\n"
Feb 12 13:24:29.899: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 12 13:24:29.899: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

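Moving `index.html` out of nginx's web root is how the test makes the pod unhealthy: the StatefulSet's pods carry an HTTP readiness probe against that file, which begins failing once the file is gone, without the container ever restarting. A sketch of such a probe (the path matches the `mv` above; the timing fields are assumptions, not read from this run):

```yaml
readinessProbe:
  httpGet:
    path: /index.html   # returns 404 once the file is moved to /tmp
    port: 80
  periodSeconds: 1
  failureThreshold: 1
```

Moving the file back restores readiness, which is why the test later runs the reverse `mv /tmp/index.html /usr/share/nginx/html/` before allowing scaling to proceed.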
Feb 12 13:24:29.912: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 12 13:24:39.922: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 12 13:24:39.922: INFO: Waiting for statefulset status.replicas updated to 0
Feb 12 13:24:39.956: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999467s
Feb 12 13:24:40.967: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.983518652s
Feb 12 13:24:41.975: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.971996939s
Feb 12 13:24:42.984: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.963680321s
Feb 12 13:24:43.993: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.955369968s
Feb 12 13:24:45.002: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.946329435s
Feb 12 13:24:46.032: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.937413167s
Feb 12 13:24:47.074: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.907020525s
Feb 12 13:24:48.079: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.86539977s
Feb 12 13:24:49.145: INFO: Verifying statefulset ss doesn't scale past 1 for another 860.012853ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3552
Feb 12 13:24:50.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3552 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:24:51.081: INFO: stderr: "I0212 13:24:50.449022    1369 log.go:172] (0xc000a3a2c0) (0xc0008e85a0) Create stream\nI0212 13:24:50.449555    1369 log.go:172] (0xc000a3a2c0) (0xc0008e85a0) Stream added, broadcasting: 1\nI0212 13:24:50.457031    1369 log.go:172] (0xc000a3a2c0) Reply frame received for 1\nI0212 13:24:50.457089    1369 log.go:172] (0xc000a3a2c0) (0xc000970000) Create stream\nI0212 13:24:50.457099    1369 log.go:172] (0xc000a3a2c0) (0xc000970000) Stream added, broadcasting: 3\nI0212 13:24:50.459887    1369 log.go:172] (0xc000a3a2c0) Reply frame received for 3\nI0212 13:24:50.460091    1369 log.go:172] (0xc000a3a2c0) (0xc0009700a0) Create stream\nI0212 13:24:50.460102    1369 log.go:172] (0xc000a3a2c0) (0xc0009700a0) Stream added, broadcasting: 5\nI0212 13:24:50.464075    1369 log.go:172] (0xc000a3a2c0) Reply frame received for 5\nI0212 13:24:50.870358    1369 log.go:172] (0xc000a3a2c0) Data frame received for 3\nI0212 13:24:50.871115    1369 log.go:172] (0xc000a3a2c0) Data frame received for 5\nI0212 13:24:50.871283    1369 log.go:172] (0xc0009700a0) (5) Data frame handling\nI0212 13:24:50.871738    1369 log.go:172] (0xc0009700a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0212 13:24:50.872243    1369 log.go:172] (0xc000970000) (3) Data frame handling\nI0212 13:24:50.873267    1369 log.go:172] (0xc000970000) (3) Data frame sent\nI0212 13:24:51.064346    1369 log.go:172] (0xc000a3a2c0) Data frame received for 1\nI0212 13:24:51.064497    1369 log.go:172] (0xc000a3a2c0) (0xc0009700a0) Stream removed, broadcasting: 5\nI0212 13:24:51.064575    1369 log.go:172] (0xc0008e85a0) (1) Data frame handling\nI0212 13:24:51.064602    1369 log.go:172] (0xc0008e85a0) (1) Data frame sent\nI0212 13:24:51.064628    1369 log.go:172] (0xc000a3a2c0) (0xc000970000) Stream removed, broadcasting: 3\nI0212 13:24:51.064670    1369 log.go:172] (0xc000a3a2c0) (0xc0008e85a0) Stream removed, broadcasting: 1\nI0212 13:24:51.064691    1369 log.go:172] (0xc000a3a2c0) Go away received\nI0212 13:24:51.065764    1369 log.go:172] (0xc000a3a2c0) (0xc0008e85a0) Stream removed, broadcasting: 1\nI0212 13:24:51.065785    1369 log.go:172] (0xc000a3a2c0) (0xc000970000) Stream removed, broadcasting: 3\nI0212 13:24:51.065795    1369 log.go:172] (0xc000a3a2c0) (0xc0009700a0) Stream removed, broadcasting: 5\n"
Feb 12 13:24:51.082: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 12 13:24:51.082: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 12 13:24:51.128: INFO: Found 1 stateful pods, waiting for 3
Feb 12 13:25:01.140: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 13:25:01.140: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 13:25:01.140: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 12 13:25:11.140: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 13:25:11.140: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 13:25:11.140: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb 12 13:25:11.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3552 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 12 13:25:11.685: INFO: stderr: "I0212 13:25:11.392015    1390 log.go:172] (0xc00012a8f0) (0xc0005c0b40) Create stream\nI0212 13:25:11.392285    1390 log.go:172] (0xc00012a8f0) (0xc0005c0b40) Stream added, broadcasting: 1\nI0212 13:25:11.397785    1390 log.go:172] (0xc00012a8f0) Reply frame received for 1\nI0212 13:25:11.397818    1390 log.go:172] (0xc00012a8f0) (0xc0005c0be0) Create stream\nI0212 13:25:11.397825    1390 log.go:172] (0xc00012a8f0) (0xc0005c0be0) Stream added, broadcasting: 3\nI0212 13:25:11.399631    1390 log.go:172] (0xc00012a8f0) Reply frame received for 3\nI0212 13:25:11.401798    1390 log.go:172] (0xc00012a8f0) (0xc000768000) Create stream\nI0212 13:25:11.402083    1390 log.go:172] (0xc00012a8f0) (0xc000768000) Stream added, broadcasting: 5\nI0212 13:25:11.408082    1390 log.go:172] (0xc00012a8f0) Reply frame received for 5\nI0212 13:25:11.545670    1390 log.go:172] (0xc00012a8f0) Data frame received for 5\nI0212 13:25:11.545743    1390 log.go:172] (0xc000768000) (5) Data frame handling\nI0212 13:25:11.545769    1390 log.go:172] (0xc000768000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0212 13:25:11.556162    1390 log.go:172] (0xc00012a8f0) Data frame received for 3\nI0212 13:25:11.556177    1390 log.go:172] (0xc0005c0be0) (3) Data frame handling\nI0212 13:25:11.556192    1390 log.go:172] (0xc0005c0be0) (3) Data frame sent\nI0212 13:25:11.671736    1390 log.go:172] (0xc00012a8f0) Data frame received for 1\nI0212 13:25:11.671784    1390 log.go:172] (0xc0005c0b40) (1) Data frame handling\nI0212 13:25:11.671814    1390 log.go:172] (0xc0005c0b40) (1) Data frame sent\nI0212 13:25:11.672071    1390 log.go:172] (0xc00012a8f0) (0xc0005c0b40) Stream removed, broadcasting: 1\nI0212 13:25:11.673116    1390 log.go:172] (0xc00012a8f0) (0xc0005c0be0) Stream removed, broadcasting: 3\nI0212 13:25:11.673197    1390 log.go:172] (0xc00012a8f0) (0xc000768000) Stream removed, broadcasting: 5\nI0212 13:25:11.673239    1390 log.go:172] (0xc00012a8f0) (0xc0005c0b40) Stream removed, broadcasting: 1\nI0212 13:25:11.673247    1390 log.go:172] (0xc00012a8f0) (0xc0005c0be0) Stream removed, broadcasting: 3\nI0212 13:25:11.673252    1390 log.go:172] (0xc00012a8f0) (0xc000768000) Stream removed, broadcasting: 5\n"
Feb 12 13:25:11.685: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 12 13:25:11.685: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 12 13:25:11.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3552 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 12 13:25:12.253: INFO: stderr: "I0212 13:25:11.878301    1414 log.go:172] (0xc000830370) (0xc0008686e0) Create stream\nI0212 13:25:11.878829    1414 log.go:172] (0xc000830370) (0xc0008686e0) Stream added, broadcasting: 1\nI0212 13:25:11.885733    1414 log.go:172] (0xc000830370) Reply frame received for 1\nI0212 13:25:11.885778    1414 log.go:172] (0xc000830370) (0xc00065a1e0) Create stream\nI0212 13:25:11.885792    1414 log.go:172] (0xc000830370) (0xc00065a1e0) Stream added, broadcasting: 3\nI0212 13:25:11.886843    1414 log.go:172] (0xc000830370) Reply frame received for 3\nI0212 13:25:11.886875    1414 log.go:172] (0xc000830370) (0xc00065a280) Create stream\nI0212 13:25:11.886887    1414 log.go:172] (0xc000830370) (0xc00065a280) Stream added, broadcasting: 5\nI0212 13:25:11.887658    1414 log.go:172] (0xc000830370) Reply frame received for 5\nI0212 13:25:12.012563    1414 log.go:172] (0xc000830370) Data frame received for 5\nI0212 13:25:12.012649    1414 log.go:172] (0xc00065a280) (5) Data frame handling\nI0212 13:25:12.012666    1414 log.go:172] (0xc00065a280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0212 13:25:12.166468    1414 log.go:172] (0xc000830370) Data frame received for 3\nI0212 13:25:12.166623    1414 log.go:172] (0xc00065a1e0) (3) Data frame handling\nI0212 13:25:12.166665    1414 log.go:172] (0xc00065a1e0) (3) Data frame sent\nI0212 13:25:12.243804    1414 log.go:172] (0xc000830370) (0xc00065a280) Stream removed, broadcasting: 5\nI0212 13:25:12.243845    1414 log.go:172] (0xc000830370) Data frame received for 1\nI0212 13:25:12.243876    1414 log.go:172] (0xc0008686e0) (1) Data frame handling\nI0212 13:25:12.243889    1414 log.go:172] (0xc000830370) (0xc00065a1e0) Stream removed, broadcasting: 3\nI0212 13:25:12.243970    1414 log.go:172] (0xc0008686e0) (1) Data frame sent\nI0212 13:25:12.243994    1414 log.go:172] (0xc000830370) (0xc0008686e0) Stream removed, broadcasting: 1\nI0212 13:25:12.244081    1414 log.go:172] (0xc000830370) Go away received\nI0212 13:25:12.244792    1414 log.go:172] (0xc000830370) (0xc0008686e0) Stream removed, broadcasting: 1\nI0212 13:25:12.244804    1414 log.go:172] (0xc000830370) (0xc00065a1e0) Stream removed, broadcasting: 3\nI0212 13:25:12.244812    1414 log.go:172] (0xc000830370) (0xc00065a280) Stream removed, broadcasting: 5\n"
Feb 12 13:25:12.253: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 12 13:25:12.253: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 12 13:25:12.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3552 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 12 13:25:12.863: INFO: stderr: "I0212 13:25:12.448544    1434 log.go:172] (0xc0009b6420) (0xc00044c6e0) Create stream\nI0212 13:25:12.448933    1434 log.go:172] (0xc0009b6420) (0xc00044c6e0) Stream added, broadcasting: 1\nI0212 13:25:12.457428    1434 log.go:172] (0xc0009b6420) Reply frame received for 1\nI0212 13:25:12.461260    1434 log.go:172] (0xc0009b6420) (0xc00044c780) Create stream\nI0212 13:25:12.461688    1434 log.go:172] (0xc0009b6420) (0xc00044c780) Stream added, broadcasting: 3\nI0212 13:25:12.471140    1434 log.go:172] (0xc0009b6420) Reply frame received for 3\nI0212 13:25:12.471227    1434 log.go:172] (0xc0009b6420) (0xc00044c000) Create stream\nI0212 13:25:12.471241    1434 log.go:172] (0xc0009b6420) (0xc00044c000) Stream added, broadcasting: 5\nI0212 13:25:12.473479    1434 log.go:172] (0xc0009b6420) Reply frame received for 5\nI0212 13:25:12.672297    1434 log.go:172] (0xc0009b6420) Data frame received for 5\nI0212 13:25:12.672437    1434 log.go:172] (0xc00044c000) (5) Data frame handling\nI0212 13:25:12.672530    1434 log.go:172] (0xc00044c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0212 13:25:12.720936    1434 log.go:172] (0xc0009b6420) Data frame received for 3\nI0212 13:25:12.721029    1434 log.go:172] (0xc00044c780) (3) Data frame handling\nI0212 13:25:12.721056    1434 log.go:172] (0xc00044c780) (3) Data frame sent\nI0212 13:25:12.845759    1434 log.go:172] (0xc0009b6420) Data frame received for 1\nI0212 13:25:12.845878    1434 log.go:172] (0xc00044c6e0) (1) Data frame handling\nI0212 13:25:12.845901    1434 log.go:172] (0xc00044c6e0) (1) Data frame sent\nI0212 13:25:12.845941    1434 log.go:172] (0xc0009b6420) (0xc00044c6e0) Stream removed, broadcasting: 1\nI0212 13:25:12.846388    1434 log.go:172] (0xc0009b6420) (0xc00044c780) Stream removed, broadcasting: 3\nI0212 13:25:12.846699    1434 log.go:172] (0xc0009b6420) (0xc00044c000) Stream removed, broadcasting: 5\nI0212 13:25:12.846768    1434 log.go:172] (0xc0009b6420) Go away received\nI0212 13:25:12.848192    1434 log.go:172] (0xc0009b6420) (0xc00044c6e0) Stream removed, broadcasting: 1\nI0212 13:25:12.848231    1434 log.go:172] (0xc0009b6420) (0xc00044c780) Stream removed, broadcasting: 3\nI0212 13:25:12.848245    1434 log.go:172] (0xc0009b6420) (0xc00044c000) Stream removed, broadcasting: 5\n"
Feb 12 13:25:12.863: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 12 13:25:12.863: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 12 13:25:12.863: INFO: Waiting for statefulset status.replicas updated to 0
Feb 12 13:25:12.871: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Feb 12 13:25:22.898: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 12 13:25:22.898: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 12 13:25:22.898: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 12 13:25:22.925: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999316s
Feb 12 13:25:23.942: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.987230423s
Feb 12 13:25:24.960: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.970458014s
Feb 12 13:25:26.072: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.952091315s
Feb 12 13:25:27.084: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.840322114s
Feb 12 13:25:28.092: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.827883457s
Feb 12 13:25:29.104: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.82055514s
Feb 12 13:25:30.116: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.808402549s
Feb 12 13:25:31.131: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.796339301s
Feb 12 13:25:32.138: INFO: Verifying statefulset ss doesn't scale past 3 for another 780.892907ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-3552
Feb 12 13:25:33.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3552 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:25:33.817: INFO: stderr: "I0212 13:25:33.423943    1454 log.go:172] (0xc0009e4420) (0xc0003006e0) Create stream\nI0212 13:25:33.424201    1454 log.go:172] (0xc0009e4420) (0xc0003006e0) Stream added, broadcasting: 1\nI0212 13:25:33.431849    1454 log.go:172] (0xc0009e4420) Reply frame received for 1\nI0212 13:25:33.431911    1454 log.go:172] (0xc0009e4420) (0xc000814000) Create stream\nI0212 13:25:33.431920    1454 log.go:172] (0xc0009e4420) (0xc000814000) Stream added, broadcasting: 3\nI0212 13:25:33.435531    1454 log.go:172] (0xc0009e4420) Reply frame received for 3\nI0212 13:25:33.435667    1454 log.go:172] (0xc0009e4420) (0xc000300780) Create stream\nI0212 13:25:33.435697    1454 log.go:172] (0xc0009e4420) (0xc000300780) Stream added, broadcasting: 5\nI0212 13:25:33.439232    1454 log.go:172] (0xc0009e4420) Reply frame received for 5\nI0212 13:25:33.595018    1454 log.go:172] (0xc0009e4420) Data frame received for 5\nI0212 13:25:33.595218    1454 log.go:172] (0xc000300780) (5) Data frame handling\nI0212 13:25:33.595293    1454 log.go:172] (0xc000300780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0212 13:25:33.597068    1454 log.go:172] (0xc0009e4420) Data frame received for 3\nI0212 13:25:33.597092    1454 log.go:172] (0xc000814000) (3) Data frame handling\nI0212 13:25:33.597129    1454 log.go:172] (0xc000814000) (3) Data frame sent\nI0212 13:25:33.804067    1454 log.go:172] (0xc0009e4420) Data frame received for 1\nI0212 13:25:33.804434    1454 log.go:172] (0xc0009e4420) (0xc000814000) Stream removed, broadcasting: 3\nI0212 13:25:33.804617    1454 log.go:172] (0xc0003006e0) (1) Data frame handling\nI0212 13:25:33.804690    1454 log.go:172] (0xc0003006e0) (1) Data frame sent\nI0212 13:25:33.805042    1454 log.go:172] (0xc0009e4420) (0xc000300780) Stream removed, broadcasting: 5\nI0212 13:25:33.805455    1454 log.go:172] (0xc0009e4420) (0xc0003006e0) Stream removed, broadcasting: 1\nI0212 13:25:33.805524    1454 log.go:172] (0xc0009e4420) Go away received\nI0212 13:25:33.807418    1454 log.go:172] (0xc0009e4420) (0xc0003006e0) Stream removed, broadcasting: 1\nI0212 13:25:33.807449    1454 log.go:172] (0xc0009e4420) (0xc000814000) Stream removed, broadcasting: 3\nI0212 13:25:33.807489    1454 log.go:172] (0xc0009e4420) (0xc000300780) Stream removed, broadcasting: 5\n"
Feb 12 13:25:33.817: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 12 13:25:33.818: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 12 13:25:33.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3552 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:25:34.385: INFO: stderr: "I0212 13:25:34.121071    1474 log.go:172] (0xc000896f20) (0xc00091ed20) Create stream\nI0212 13:25:34.121773    1474 log.go:172] (0xc000896f20) (0xc00091ed20) Stream added, broadcasting: 1\nI0212 13:25:34.142040    1474 log.go:172] (0xc000896f20) Reply frame received for 1\nI0212 13:25:34.142482    1474 log.go:172] (0xc000896f20) (0xc00091e000) Create stream\nI0212 13:25:34.142527    1474 log.go:172] (0xc000896f20) (0xc00091e000) Stream added, broadcasting: 3\nI0212 13:25:34.144080    1474 log.go:172] (0xc000896f20) Reply frame received for 3\nI0212 13:25:34.144182    1474 log.go:172] (0xc000896f20) (0xc000864000) Create stream\nI0212 13:25:34.144222    1474 log.go:172] (0xc000896f20) (0xc000864000) Stream added, broadcasting: 5\nI0212 13:25:34.146763    1474 log.go:172] (0xc000896f20) Reply frame received for 5\nI0212 13:25:34.267838    1474 log.go:172] (0xc000896f20) Data frame received for 3\nI0212 13:25:34.268279    1474 log.go:172] (0xc00091e000) (3) Data frame handling\nI0212 13:25:34.268388    1474 log.go:172] (0xc00091e000) (3) Data frame sent\nI0212 13:25:34.268515    1474 log.go:172] (0xc000896f20) Data frame received for 5\nI0212 13:25:34.268575    1474 log.go:172] (0xc000864000) (5) Data frame handling\nI0212 13:25:34.268629    1474 log.go:172] (0xc000864000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0212 13:25:34.377110    1474 log.go:172] (0xc000896f20) Data frame received for 1\nI0212 13:25:34.377331    1474 log.go:172] (0xc000896f20) (0xc000864000) Stream removed, broadcasting: 5\nI0212 13:25:34.377420    1474 log.go:172] (0xc00091ed20) (1) Data frame handling\nI0212 13:25:34.377469    1474 log.go:172] (0xc00091ed20) (1) Data frame sent\nI0212 13:25:34.377554    1474 log.go:172] (0xc000896f20) (0xc00091e000) Stream removed, broadcasting: 3\nI0212 13:25:34.377579    1474 log.go:172] (0xc000896f20) (0xc00091ed20) Stream removed, broadcasting: 1\nI0212 13:25:34.377597    1474 log.go:172] (0xc000896f20) Go away received\nI0212 13:25:34.379040    1474 log.go:172] (0xc000896f20) (0xc00091ed20) Stream removed, broadcasting: 1\nI0212 13:25:34.379058    1474 log.go:172] (0xc000896f20) (0xc00091e000) Stream removed, broadcasting: 3\nI0212 13:25:34.379065    1474 log.go:172] (0xc000896f20) (0xc000864000) Stream removed, broadcasting: 5\n"
Feb 12 13:25:34.385: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 12 13:25:34.386: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 12 13:25:34.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3552 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:25:34.871: INFO: stderr: "I0212 13:25:34.568628    1490 log.go:172] (0xc000104dc0) (0xc0005e4780) Create stream\nI0212 13:25:34.569032    1490 log.go:172] (0xc000104dc0) (0xc0005e4780) Stream added, broadcasting: 1\nI0212 13:25:34.577555    1490 log.go:172] (0xc000104dc0) Reply frame received for 1\nI0212 13:25:34.577714    1490 log.go:172] (0xc000104dc0) (0xc0005e4820) Create stream\nI0212 13:25:34.577728    1490 log.go:172] (0xc000104dc0) (0xc0005e4820) Stream added, broadcasting: 3\nI0212 13:25:34.580259    1490 log.go:172] (0xc000104dc0) Reply frame received for 3\nI0212 13:25:34.580303    1490 log.go:172] (0xc000104dc0) (0xc0008a4000) Create stream\nI0212 13:25:34.580312    1490 log.go:172] (0xc000104dc0) (0xc0008a4000) Stream added, broadcasting: 5\nI0212 13:25:34.582633    1490 log.go:172] (0xc000104dc0) Reply frame received for 5\nI0212 13:25:34.764418    1490 log.go:172] (0xc000104dc0) Data frame received for 3\nI0212 13:25:34.764598    1490 log.go:172] (0xc0005e4820) (3) Data frame handling\nI0212 13:25:34.764625    1490 log.go:172] (0xc0005e4820) (3) Data frame sent\nI0212 13:25:34.764673    1490 log.go:172] (0xc000104dc0) Data frame received for 5\nI0212 13:25:34.764702    1490 log.go:172] (0xc0008a4000) (5) Data frame handling\nI0212 13:25:34.764732    1490 log.go:172] (0xc0008a4000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0212 13:25:34.861208    1490 log.go:172] (0xc000104dc0) Data frame received for 1\nI0212 13:25:34.861353    1490 log.go:172] (0xc000104dc0) (0xc0008a4000) Stream removed, broadcasting: 5\nI0212 13:25:34.861422    1490 log.go:172] (0xc0005e4780) (1) Data frame handling\nI0212 13:25:34.861438    1490 log.go:172] (0xc0005e4780) (1) Data frame sent\nI0212 13:25:34.861536    1490 log.go:172] (0xc000104dc0) (0xc0005e4820) Stream removed, broadcasting: 3\nI0212 13:25:34.861584    1490 log.go:172] (0xc000104dc0) (0xc0005e4780) Stream removed, broadcasting: 1\nI0212 13:25:34.861603    1490 log.go:172] (0xc000104dc0) Go away received\nI0212 13:25:34.863201    1490 log.go:172] (0xc000104dc0) (0xc0005e4780) Stream removed, broadcasting: 1\nI0212 13:25:34.863360    1490 log.go:172] (0xc000104dc0) (0xc0005e4820) Stream removed, broadcasting: 3\nI0212 13:25:34.863375    1490 log.go:172] (0xc000104dc0) (0xc0008a4000) Stream removed, broadcasting: 5\n"
Feb 12 13:25:34.871: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 12 13:25:34.871: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 12 13:25:34.871: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
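The ordered scale-up (ss-0, ss-1, ss-2) and reverse-order scale-down verified above are the behavior of the default `OrderedReady` pod management policy. A minimal StatefulSet sketch matching the selector labels, service name, and image seen in this run (other field values are assumptions, not read from the test source):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  podManagementPolicy: OrderedReady   # default: create pods 0..N-1 in order, delete in reverse
  replicas: 3
  selector:
    matchLabels:
      foo: bar
      baz: blah
  template:
    metadata:
      labels:
        foo: bar
        baz: blah
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

Under `OrderedReady`, a pod that is not Ready halts further scaling in either direction, which is exactly what the repeated "doesn't scale past" countdowns above are verifying.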
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 12 13:26:04.925: INFO: Deleting all statefulset in ns statefulset-3552
Feb 12 13:26:04.931: INFO: Scaling statefulset ss to 0
Feb 12 13:26:04.942: INFO: Waiting for statefulset status.replicas updated to 0
Feb 12 13:26:04.945: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:26:04.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3552" for this suite.
Feb 12 13:26:11.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:26:11.295: INFO: namespace statefulset-3552 deletion completed in 6.315038978s

• [SLOW TEST:114.538 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:26:11.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 12 13:26:11.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-227'
Feb 12 13:26:11.538: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 12 13:26:11.539: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Feb 12 13:26:11.550: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Feb 12 13:26:11.600: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb 12 13:26:11.664: INFO: scanned /root for discovery docs: 
Feb 12 13:26:11.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-227'
Feb 12 13:26:35.253: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 12 13:26:35.253: INFO: stdout: "Created e2e-test-nginx-rc-e3b4497141e0dcd041472f342ea80a5f\nScaling up e2e-test-nginx-rc-e3b4497141e0dcd041472f342ea80a5f from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-e3b4497141e0dcd041472f342ea80a5f up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-e3b4497141e0dcd041472f342ea80a5f to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Feb 12 13:26:35.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-227'
Feb 12 13:26:35.383: INFO: stderr: ""
Feb 12 13:26:35.383: INFO: stdout: "e2e-test-nginx-rc-6nds5 e2e-test-nginx-rc-e3b4497141e0dcd041472f342ea80a5f-c28vd "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb 12 13:26:40.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-227'
Feb 12 13:26:40.618: INFO: stderr: ""
Feb 12 13:26:40.618: INFO: stdout: "e2e-test-nginx-rc-e3b4497141e0dcd041472f342ea80a5f-c28vd "
Feb 12 13:26:40.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-e3b4497141e0dcd041472f342ea80a5f-c28vd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-227'
Feb 12 13:26:40.765: INFO: stderr: ""
Feb 12 13:26:40.765: INFO: stdout: "true"
Feb 12 13:26:40.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-e3b4497141e0dcd041472f342ea80a5f-c28vd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-227'
Feb 12 13:26:40.888: INFO: stderr: ""
Feb 12 13:26:40.888: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Feb 12 13:26:40.888: INFO: e2e-test-nginx-rc-e3b4497141e0dcd041472f342ea80a5f-c28vd is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Feb 12 13:26:40.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-227'
Feb 12 13:26:41.041: INFO: stderr: ""
Feb 12 13:26:41.042: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:26:41.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-227" for this suite.
Feb 12 13:27:03.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:27:03.167: INFO: namespace kubectl-227 deletion completed in 22.111630507s

• [SLOW TEST:51.871 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:27:03.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 12 13:27:13.877: INFO: Successfully updated pod "labelsupdate603740fc-3883-4c69-acc1-6e4d27cb2ae5"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:27:16.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6548" for this suite.
Feb 12 13:27:38.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:27:38.195: INFO: namespace projected-6548 deletion completed in 22.164199116s

• [SLOW TEST:35.028 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:27:38.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-e207788b-2fd8-4536-b8c1-3c9857f202fa
STEP: Creating a pod to test consume secrets
Feb 12 13:27:39.190: INFO: Waiting up to 5m0s for pod "pod-secrets-d1cf0e4f-8b9e-4703-a80e-61b005ce4f58" in namespace "secrets-616" to be "success or failure"
Feb 12 13:27:39.233: INFO: Pod "pod-secrets-d1cf0e4f-8b9e-4703-a80e-61b005ce4f58": Phase="Pending", Reason="", readiness=false. Elapsed: 43.425132ms
Feb 12 13:27:41.355: INFO: Pod "pod-secrets-d1cf0e4f-8b9e-4703-a80e-61b005ce4f58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165320172s
Feb 12 13:27:43.370: INFO: Pod "pod-secrets-d1cf0e4f-8b9e-4703-a80e-61b005ce4f58": Phase="Pending", Reason="", readiness=false. Elapsed: 4.179983762s
Feb 12 13:27:45.380: INFO: Pod "pod-secrets-d1cf0e4f-8b9e-4703-a80e-61b005ce4f58": Phase="Pending", Reason="", readiness=false. Elapsed: 6.190169755s
Feb 12 13:27:47.396: INFO: Pod "pod-secrets-d1cf0e4f-8b9e-4703-a80e-61b005ce4f58": Phase="Running", Reason="", readiness=true. Elapsed: 8.205984794s
Feb 12 13:27:49.405: INFO: Pod "pod-secrets-d1cf0e4f-8b9e-4703-a80e-61b005ce4f58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.215454247s
STEP: Saw pod success
Feb 12 13:27:49.405: INFO: Pod "pod-secrets-d1cf0e4f-8b9e-4703-a80e-61b005ce4f58" satisfied condition "success or failure"
Feb 12 13:27:49.418: INFO: Trying to get logs from node iruya-node pod pod-secrets-d1cf0e4f-8b9e-4703-a80e-61b005ce4f58 container secret-volume-test: 
STEP: delete the pod
Feb 12 13:27:49.517: INFO: Waiting for pod pod-secrets-d1cf0e4f-8b9e-4703-a80e-61b005ce4f58 to disappear
Feb 12 13:27:49.527: INFO: Pod pod-secrets-d1cf0e4f-8b9e-4703-a80e-61b005ce4f58 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:27:49.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-616" for this suite.
Feb 12 13:27:55.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:27:55.715: INFO: namespace secrets-616 deletion completed in 6.180965928s

• [SLOW TEST:17.520 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:27:55.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb 12 13:27:55.863: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1678,SelfLink:/api/v1/namespaces/watch-1678/configmaps/e2e-watch-test-configmap-a,UID:78d2c396-4677-4671-aa87-9ef40aa4cb81,ResourceVersion:24073731,Generation:0,CreationTimestamp:2020-02-12 13:27:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 12 13:27:55.864: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1678,SelfLink:/api/v1/namespaces/watch-1678/configmaps/e2e-watch-test-configmap-a,UID:78d2c396-4677-4671-aa87-9ef40aa4cb81,ResourceVersion:24073731,Generation:0,CreationTimestamp:2020-02-12 13:27:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb 12 13:28:05.885: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1678,SelfLink:/api/v1/namespaces/watch-1678/configmaps/e2e-watch-test-configmap-a,UID:78d2c396-4677-4671-aa87-9ef40aa4cb81,ResourceVersion:24073745,Generation:0,CreationTimestamp:2020-02-12 13:27:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 12 13:28:05.886: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1678,SelfLink:/api/v1/namespaces/watch-1678/configmaps/e2e-watch-test-configmap-a,UID:78d2c396-4677-4671-aa87-9ef40aa4cb81,ResourceVersion:24073745,Generation:0,CreationTimestamp:2020-02-12 13:27:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb 12 13:28:15.918: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1678,SelfLink:/api/v1/namespaces/watch-1678/configmaps/e2e-watch-test-configmap-a,UID:78d2c396-4677-4671-aa87-9ef40aa4cb81,ResourceVersion:24073758,Generation:0,CreationTimestamp:2020-02-12 13:27:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 12 13:28:15.919: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1678,SelfLink:/api/v1/namespaces/watch-1678/configmaps/e2e-watch-test-configmap-a,UID:78d2c396-4677-4671-aa87-9ef40aa4cb81,ResourceVersion:24073758,Generation:0,CreationTimestamp:2020-02-12 13:27:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb 12 13:28:25.937: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1678,SelfLink:/api/v1/namespaces/watch-1678/configmaps/e2e-watch-test-configmap-a,UID:78d2c396-4677-4671-aa87-9ef40aa4cb81,ResourceVersion:24073772,Generation:0,CreationTimestamp:2020-02-12 13:27:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 12 13:28:25.938: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1678,SelfLink:/api/v1/namespaces/watch-1678/configmaps/e2e-watch-test-configmap-a,UID:78d2c396-4677-4671-aa87-9ef40aa4cb81,ResourceVersion:24073772,Generation:0,CreationTimestamp:2020-02-12 13:27:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb 12 13:28:35.958: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1678,SelfLink:/api/v1/namespaces/watch-1678/configmaps/e2e-watch-test-configmap-b,UID:e20ccb37-1c8c-4cec-b284-827087874934,ResourceVersion:24073786,Generation:0,CreationTimestamp:2020-02-12 13:28:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 12 13:28:35.958: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1678,SelfLink:/api/v1/namespaces/watch-1678/configmaps/e2e-watch-test-configmap-b,UID:e20ccb37-1c8c-4cec-b284-827087874934,ResourceVersion:24073786,Generation:0,CreationTimestamp:2020-02-12 13:28:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb 12 13:28:45.974: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1678,SelfLink:/api/v1/namespaces/watch-1678/configmaps/e2e-watch-test-configmap-b,UID:e20ccb37-1c8c-4cec-b284-827087874934,ResourceVersion:24073800,Generation:0,CreationTimestamp:2020-02-12 13:28:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 12 13:28:45.974: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1678,SelfLink:/api/v1/namespaces/watch-1678/configmaps/e2e-watch-test-configmap-b,UID:e20ccb37-1c8c-4cec-b284-827087874934,ResourceVersion:24073800,Generation:0,CreationTimestamp:2020-02-12 13:28:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:28:55.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1678" for this suite.
Feb 12 13:29:02.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:29:02.294: INFO: namespace watch-1678 deletion completed in 6.288575426s

• [SLOW TEST:66.578 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:29:02.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-674dcefa-c6da-4aad-9aec-9607b0a15dd1
STEP: Creating a pod to test consume configMaps
Feb 12 13:29:02.430: INFO: Waiting up to 5m0s for pod "pod-configmaps-b14b6226-d8ae-4de7-ba73-61fa18c16bfd" in namespace "configmap-8960" to be "success or failure"
Feb 12 13:29:02.440: INFO: Pod "pod-configmaps-b14b6226-d8ae-4de7-ba73-61fa18c16bfd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.948935ms
Feb 12 13:29:04.450: INFO: Pod "pod-configmaps-b14b6226-d8ae-4de7-ba73-61fa18c16bfd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020096491s
Feb 12 13:29:06.498: INFO: Pod "pod-configmaps-b14b6226-d8ae-4de7-ba73-61fa18c16bfd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067707357s
Feb 12 13:29:08.516: INFO: Pod "pod-configmaps-b14b6226-d8ae-4de7-ba73-61fa18c16bfd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086037996s
Feb 12 13:29:10.538: INFO: Pod "pod-configmaps-b14b6226-d8ae-4de7-ba73-61fa18c16bfd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.107496867s
STEP: Saw pod success
Feb 12 13:29:10.538: INFO: Pod "pod-configmaps-b14b6226-d8ae-4de7-ba73-61fa18c16bfd" satisfied condition "success or failure"
Feb 12 13:29:10.544: INFO: Trying to get logs from node iruya-node pod pod-configmaps-b14b6226-d8ae-4de7-ba73-61fa18c16bfd container configmap-volume-test: 
STEP: delete the pod
Feb 12 13:29:10.622: INFO: Waiting for pod pod-configmaps-b14b6226-d8ae-4de7-ba73-61fa18c16bfd to disappear
Feb 12 13:29:10.687: INFO: Pod pod-configmaps-b14b6226-d8ae-4de7-ba73-61fa18c16bfd no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:29:10.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8960" for this suite.
Feb 12 13:29:16.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:29:16.890: INFO: namespace configmap-8960 deletion completed in 6.195119397s

• [SLOW TEST:14.596 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:29:16.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-4b8ca7ed-ecd4-4ade-9219-b4c7b85b7736
STEP: Creating a pod to test consume secrets
Feb 12 13:29:17.130: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c07dc89d-d56b-4e3f-81f4-61b724066c2e" in namespace "projected-2548" to be "success or failure"
Feb 12 13:29:17.183: INFO: Pod "pod-projected-secrets-c07dc89d-d56b-4e3f-81f4-61b724066c2e": Phase="Pending", Reason="", readiness=false. Elapsed: 53.040593ms
Feb 12 13:29:19.192: INFO: Pod "pod-projected-secrets-c07dc89d-d56b-4e3f-81f4-61b724066c2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062078652s
Feb 12 13:29:21.202: INFO: Pod "pod-projected-secrets-c07dc89d-d56b-4e3f-81f4-61b724066c2e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071440807s
Feb 12 13:29:23.217: INFO: Pod "pod-projected-secrets-c07dc89d-d56b-4e3f-81f4-61b724066c2e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087048684s
Feb 12 13:29:25.228: INFO: Pod "pod-projected-secrets-c07dc89d-d56b-4e3f-81f4-61b724066c2e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.097893155s
Feb 12 13:29:27.237: INFO: Pod "pod-projected-secrets-c07dc89d-d56b-4e3f-81f4-61b724066c2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.106992779s
STEP: Saw pod success
Feb 12 13:29:27.237: INFO: Pod "pod-projected-secrets-c07dc89d-d56b-4e3f-81f4-61b724066c2e" satisfied condition "success or failure"
Feb 12 13:29:27.242: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-c07dc89d-d56b-4e3f-81f4-61b724066c2e container projected-secret-volume-test: 
STEP: delete the pod
Feb 12 13:29:27.365: INFO: Waiting for pod pod-projected-secrets-c07dc89d-d56b-4e3f-81f4-61b724066c2e to disappear
Feb 12 13:29:27.388: INFO: Pod pod-projected-secrets-c07dc89d-d56b-4e3f-81f4-61b724066c2e no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:29:27.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2548" for this suite.
Feb 12 13:29:33.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:29:33.563: INFO: namespace projected-2548 deletion completed in 6.167299692s

• [SLOW TEST:16.673 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:29:33.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 12 13:29:33.709: INFO: Waiting up to 5m0s for pod "pod-72266e99-9127-4eea-abce-6012e85a6a16" in namespace "emptydir-2717" to be "success or failure"
Feb 12 13:29:33.719: INFO: Pod "pod-72266e99-9127-4eea-abce-6012e85a6a16": Phase="Pending", Reason="", readiness=false. Elapsed: 10.088065ms
Feb 12 13:29:35.727: INFO: Pod "pod-72266e99-9127-4eea-abce-6012e85a6a16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017918868s
Feb 12 13:29:37.776: INFO: Pod "pod-72266e99-9127-4eea-abce-6012e85a6a16": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06692319s
Feb 12 13:29:39.794: INFO: Pod "pod-72266e99-9127-4eea-abce-6012e85a6a16": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085452507s
Feb 12 13:29:41.804: INFO: Pod "pod-72266e99-9127-4eea-abce-6012e85a6a16": Phase="Running", Reason="", readiness=true. Elapsed: 8.095348058s
Feb 12 13:29:43.816: INFO: Pod "pod-72266e99-9127-4eea-abce-6012e85a6a16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.10665636s
STEP: Saw pod success
Feb 12 13:29:43.816: INFO: Pod "pod-72266e99-9127-4eea-abce-6012e85a6a16" satisfied condition "success or failure"
Feb 12 13:29:43.819: INFO: Trying to get logs from node iruya-node pod pod-72266e99-9127-4eea-abce-6012e85a6a16 container test-container: 
STEP: delete the pod
Feb 12 13:29:43.921: INFO: Waiting for pod pod-72266e99-9127-4eea-abce-6012e85a6a16 to disappear
Feb 12 13:29:43.937: INFO: Pod pod-72266e99-9127-4eea-abce-6012e85a6a16 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:29:43.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2717" for this suite.
Feb 12 13:29:50.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:29:50.097: INFO: namespace emptydir-2717 deletion completed in 6.15438384s

• [SLOW TEST:16.534 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:29:50.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-cd225ba9-0d4e-4c45-be32-9f924c7a2f8a
STEP: Creating a pod to test consume secrets
Feb 12 13:29:50.260: INFO: Waiting up to 5m0s for pod "pod-secrets-f0d4aaf3-c216-4967-8a61-50bd07e29cc6" in namespace "secrets-3220" to be "success or failure"
Feb 12 13:29:50.347: INFO: Pod "pod-secrets-f0d4aaf3-c216-4967-8a61-50bd07e29cc6": Phase="Pending", Reason="", readiness=false. Elapsed: 87.303074ms
Feb 12 13:29:52.354: INFO: Pod "pod-secrets-f0d4aaf3-c216-4967-8a61-50bd07e29cc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094185232s
Feb 12 13:29:54.361: INFO: Pod "pod-secrets-f0d4aaf3-c216-4967-8a61-50bd07e29cc6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10127209s
Feb 12 13:29:56.375: INFO: Pod "pod-secrets-f0d4aaf3-c216-4967-8a61-50bd07e29cc6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114794392s
Feb 12 13:29:58.385: INFO: Pod "pod-secrets-f0d4aaf3-c216-4967-8a61-50bd07e29cc6": Phase="Running", Reason="", readiness=true. Elapsed: 8.124634293s
Feb 12 13:30:00.393: INFO: Pod "pod-secrets-f0d4aaf3-c216-4967-8a61-50bd07e29cc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.132911431s
STEP: Saw pod success
Feb 12 13:30:00.393: INFO: Pod "pod-secrets-f0d4aaf3-c216-4967-8a61-50bd07e29cc6" satisfied condition "success or failure"
Feb 12 13:30:00.398: INFO: Trying to get logs from node iruya-node pod pod-secrets-f0d4aaf3-c216-4967-8a61-50bd07e29cc6 container secret-volume-test: 
STEP: delete the pod
Feb 12 13:30:00.530: INFO: Waiting for pod pod-secrets-f0d4aaf3-c216-4967-8a61-50bd07e29cc6 to disappear
Feb 12 13:30:00.540: INFO: Pod pod-secrets-f0d4aaf3-c216-4967-8a61-50bd07e29cc6 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:30:00.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3220" for this suite.
Feb 12 13:30:06.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:30:06.717: INFO: namespace secrets-3220 deletion completed in 6.170720267s

• [SLOW TEST:16.619 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
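The secrets test above mounts a Secret into a pod with a key-to-path mapping, then waits up to 5m0s for the pod to reach "success or failure". The real test builds its pod via the Go e2e framework; the following is only a minimal Python sketch of the manifest shape involved, with hypothetical names (`busybox` image, `data-1` key, `new-path-data-1` path) standing in for the test's actual values:

```python
import json

def secret_volume_pod(secret_name, pod_name):
    """Build a pod manifest that mounts one secret key at a remapped path.

    The "items" list is what "in volume with mappings" refers to: without
    it, every key is projected under its own name; with it, only the
    listed keys appear, at the paths given.
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": pod_name},
        "spec": {
            # pod must terminate so the test can observe Phase=Succeeded
            "restartPolicy": "Never",
            "containers": [{
                "name": "secret-volume-test",
                "image": "busybox",
                "command": ["cat", "/etc/secret-volume/new-path-data-1"],
                "volumeMounts": [{
                    "name": "secret-volume",
                    "mountPath": "/etc/secret-volume",
                }],
            }],
            "volumes": [{
                "name": "secret-volume",
                "secret": {
                    "secretName": secret_name,
                    # key-to-path mapping under test
                    "items": [{"key": "data-1", "path": "new-path-data-1"}],
                },
            }],
        },
    }

manifest = secret_volume_pod("secret-test-example", "pod-secrets-example")
print(json.dumps(manifest["spec"]["volumes"][0], indent=2))
```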
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:30:06.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 12 13:30:06.912: INFO: Waiting up to 5m0s for pod "pod-239d8429-f6a8-47b6-a0e8-9c916a936a3c" in namespace "emptydir-4899" to be "success or failure"
Feb 12 13:30:06.953: INFO: Pod "pod-239d8429-f6a8-47b6-a0e8-9c916a936a3c": Phase="Pending", Reason="", readiness=false. Elapsed: 40.176518ms
Feb 12 13:30:08.978: INFO: Pod "pod-239d8429-f6a8-47b6-a0e8-9c916a936a3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065328035s
Feb 12 13:30:10.987: INFO: Pod "pod-239d8429-f6a8-47b6-a0e8-9c916a936a3c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073732815s
Feb 12 13:30:13.045: INFO: Pod "pod-239d8429-f6a8-47b6-a0e8-9c916a936a3c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.132477406s
Feb 12 13:30:15.093: INFO: Pod "pod-239d8429-f6a8-47b6-a0e8-9c916a936a3c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.180204557s
Feb 12 13:30:17.137: INFO: Pod "pod-239d8429-f6a8-47b6-a0e8-9c916a936a3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.224476203s
STEP: Saw pod success
Feb 12 13:30:17.138: INFO: Pod "pod-239d8429-f6a8-47b6-a0e8-9c916a936a3c" satisfied condition "success or failure"
Feb 12 13:30:17.141: INFO: Trying to get logs from node iruya-node pod pod-239d8429-f6a8-47b6-a0e8-9c916a936a3c container test-container: 
STEP: delete the pod
Feb 12 13:30:17.214: INFO: Waiting for pod pod-239d8429-f6a8-47b6-a0e8-9c916a936a3c to disappear
Feb 12 13:30:17.232: INFO: Pod pod-239d8429-f6a8-47b6-a0e8-9c916a936a3c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:30:17.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4899" for this suite.
Feb 12 13:30:23.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:30:23.583: INFO: namespace emptydir-4899 deletion completed in 6.258801609s

• [SLOW TEST:16.866 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
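The EmptyDir test name encodes three parameters: run as a non-root user, expect file mode 0666, and use the default (node-disk-backed) medium. A rough Python sketch of the corresponding pod shape, assuming a `busybox` stand-in for the framework's mounttest image and a hypothetical UID of 1001:

```python
def emptydir_mode_pod(pod_name, mode=0o666):
    """Pod that writes a file on an emptyDir volume and reports its mode.

    "(non-root,0666,default)" maps to: runAsUser set to a non-root UID,
    a 0666 file-mode check, and an emptyDir with no "medium" field
    (omitting it selects the default disk-backed medium; "Memory" would
    select tmpfs instead).
    """
    octal = oct(mode)[2:]  # 0o666 -> "666" for chmod/stat comparison
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": pod_name},
        "spec": {
            "restartPolicy": "Never",
            "securityContext": {"runAsUser": 1001},  # the "non-root" part
            "containers": [{
                "name": "test-container",
                "image": "busybox",
                "command": ["sh", "-c",
                            "touch /test-volume/f && "
                            f"chmod {octal} /test-volume/f && "
                            "stat -c %a /test-volume/f"],
                "volumeMounts": [{"name": "test-volume",
                                  "mountPath": "/test-volume"}],
            }],
            # empty dict == default medium
            "volumes": [{"name": "test-volume", "emptyDir": {}}],
        },
    }
```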
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:30:23.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0212 13:30:27.287694       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 12 13:30:27.287: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:30:27.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4055" for this suite.
Feb 12 13:30:33.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:30:33.809: INFO: namespace gc-4055 deletion completed in 6.515526947s

• [SLOW TEST:10.225 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
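The garbage-collector test deletes a Deployment without orphaning and then polls ("expected 0 rs, got 1 rs") until the owned ReplicaSet and its pods disappear. The cascade works through `ownerReferences`: each ReplicaSet names its Deployment as owner, each pod names its ReplicaSet. A toy model of that cascade (not the real controller, which is event-driven and graph-based):

```python
def collect_garbage(objects):
    """Repeatedly delete any object whose owners have all been deleted.

    `objects` maps uid -> {"ownerReferences": [owner_uids]}. Objects with
    no ownerReferences are roots and are never collected here. Mutates
    and returns `objects`; repeats until no more deletions occur, which
    is the fixpoint the test's "wait for all rs to be garbage collected"
    step is polling for.
    """
    changed = True
    while changed:
        changed = False
        live = set(objects)  # snapshot of this pass
        for uid, obj in list(objects.items()):
            owners = obj.get("ownerReferences", [])
            if owners and not any(o in live for o in owners):
                del objects[uid]  # all owners gone -> cascade delete
                changed = True
    return objects

# Deployment already deleted; its ReplicaSet and pods remain momentarily.
cluster = {
    "rs":   {"ownerReferences": ["deploy"]},   # owner "deploy" no longer exists
    "pod1": {"ownerReferences": ["rs"]},
    "pod2": {"ownerReferences": ["rs"]},
}
collect_garbage(cluster)
print(cluster)  # everything cascades away
```

Note the intermediate state the log captures ("got 1 rs", "got 2 pods") corresponds to passes of this loop before the fixpoint is reached.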
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:30:33.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-edc4bd4d-3cf3-4e92-be8f-bc3281fbd80e
STEP: Creating a pod to test consume secrets
Feb 12 13:30:33.946: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1b2619c1-4325-4450-8c1f-699bc4ece1b1" in namespace "projected-4793" to be "success or failure"
Feb 12 13:30:33.968: INFO: Pod "pod-projected-secrets-1b2619c1-4325-4450-8c1f-699bc4ece1b1": Phase="Pending", Reason="", readiness=false. Elapsed: 21.935132ms
Feb 12 13:30:35.983: INFO: Pod "pod-projected-secrets-1b2619c1-4325-4450-8c1f-699bc4ece1b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036426905s
Feb 12 13:30:37.992: INFO: Pod "pod-projected-secrets-1b2619c1-4325-4450-8c1f-699bc4ece1b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045439972s
Feb 12 13:30:39.999: INFO: Pod "pod-projected-secrets-1b2619c1-4325-4450-8c1f-699bc4ece1b1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052948211s
Feb 12 13:30:42.013: INFO: Pod "pod-projected-secrets-1b2619c1-4325-4450-8c1f-699bc4ece1b1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066845054s
Feb 12 13:30:44.028: INFO: Pod "pod-projected-secrets-1b2619c1-4325-4450-8c1f-699bc4ece1b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.081081724s
STEP: Saw pod success
Feb 12 13:30:44.028: INFO: Pod "pod-projected-secrets-1b2619c1-4325-4450-8c1f-699bc4ece1b1" satisfied condition "success or failure"
Feb 12 13:30:44.035: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-1b2619c1-4325-4450-8c1f-699bc4ece1b1 container projected-secret-volume-test: 
STEP: delete the pod
Feb 12 13:30:44.111: INFO: Waiting for pod pod-projected-secrets-1b2619c1-4325-4450-8c1f-699bc4ece1b1 to disappear
Feb 12 13:30:44.257: INFO: Pod pod-projected-secrets-1b2619c1-4325-4450-8c1f-699bc4ece1b1 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:30:44.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4793" for this suite.
Feb 12 13:30:50.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:30:50.506: INFO: namespace projected-4793 deletion completed in 6.233098611s

• [SLOW TEST:16.697 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
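The projected-secret test is the `defaultMode` variant: a `projected` volume aggregates one or more sources (here a single secret) and applies one file mode to everything it projects. A minimal sketch of that manifest shape, with hypothetical names and a 0400 mode chosen for illustration:

```python
def projected_secret_pod(pod_name, secret_name, default_mode=0o400):
    """Pod mounting a projected volume with an explicit defaultMode.

    defaultMode applies to every file the projected volume creates
    unless an individual item overrides it with its own "mode". In the
    JSON API the value is serialized as a decimal integer (0o400 -> 256).
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": pod_name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "projected-secret-volume-test",
                "image": "busybox",
                # hypothetical mode check on a projected file
                "command": ["sh", "-c", "stat -c %a /etc/projected/data-1"],
                "volumeMounts": [{
                    "name": "projected-volume",
                    "mountPath": "/etc/projected",
                    "readOnly": True,
                }],
            }],
            "volumes": [{
                "name": "projected-volume",
                "projected": {
                    "defaultMode": default_mode,
                    "sources": [{"secret": {"name": secret_name}}],
                },
            }],
        },
    }
```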
SSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:30:50.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Feb 12 13:30:50.652: INFO: Waiting up to 5m0s for pod "client-containers-62902ad6-16ed-4f34-b7cc-bbe7e8068f3f" in namespace "containers-307" to be "success or failure"
Feb 12 13:30:50.664: INFO: Pod "client-containers-62902ad6-16ed-4f34-b7cc-bbe7e8068f3f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.125109ms
Feb 12 13:30:52.674: INFO: Pod "client-containers-62902ad6-16ed-4f34-b7cc-bbe7e8068f3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022090292s
Feb 12 13:30:54.689: INFO: Pod "client-containers-62902ad6-16ed-4f34-b7cc-bbe7e8068f3f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037161861s
Feb 12 13:30:56.704: INFO: Pod "client-containers-62902ad6-16ed-4f34-b7cc-bbe7e8068f3f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051907191s
Feb 12 13:30:58.713: INFO: Pod "client-containers-62902ad6-16ed-4f34-b7cc-bbe7e8068f3f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061211522s
Feb 12 13:31:00.721: INFO: Pod "client-containers-62902ad6-16ed-4f34-b7cc-bbe7e8068f3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069137461s
STEP: Saw pod success
Feb 12 13:31:00.721: INFO: Pod "client-containers-62902ad6-16ed-4f34-b7cc-bbe7e8068f3f" satisfied condition "success or failure"
Feb 12 13:31:00.726: INFO: Trying to get logs from node iruya-node pod client-containers-62902ad6-16ed-4f34-b7cc-bbe7e8068f3f container test-container: 
STEP: delete the pod
Feb 12 13:31:00.788: INFO: Waiting for pod client-containers-62902ad6-16ed-4f34-b7cc-bbe7e8068f3f to disappear
Feb 12 13:31:00.798: INFO: Pod client-containers-62902ad6-16ed-4f34-b7cc-bbe7e8068f3f no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:31:00.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-307" for this suite.
Feb 12 13:31:06.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:31:07.005: INFO: namespace containers-307 deletion completed in 6.197107837s

• [SLOW TEST:16.498 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
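The Docker Containers test exercises the documented mapping between pod fields and image defaults: `command` overrides the image ENTRYPOINT, `args` overrides the image CMD, and CMD is ignored as soon as `command` is set. Those rules can be captured in a small function (argv values below are illustrative):

```python
def effective_invocation(image_entrypoint, image_cmd, command=None, args=None):
    """Return the argv a container actually runs.

    Implements the Kubernetes command/args composition rules:
      - neither set:        ENTRYPOINT + CMD
      - only args set:      ENTRYPOINT + args
      - only command set:   command          (image CMD is ignored)
      - both set:           command + args
    """
    if command is None:
        entry = image_entrypoint
        tail = args if args is not None else image_cmd
    else:
        entry = command
        tail = args if args is not None else []  # CMD dropped once command is set
    return entry + tail

# "override the image's default command (docker entrypoint)":
print(effective_invocation(["/entrypoint.sh"], ["serve"],
                           command=["/bin/echo", "overridden"]))
```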
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:31:07.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 12 13:31:07.200: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb 12 13:31:12.208: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 12 13:31:14.217: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb 12 13:31:16.227: INFO: Creating deployment "test-rollover-deployment"
Feb 12 13:31:16.251: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb 12 13:31:18.265: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb 12 13:31:18.275: INFO: Ensure that both replica sets have 1 created replica
Feb 12 13:31:18.282: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb 12 13:31:18.291: INFO: Updating deployment test-rollover-deployment
Feb 12 13:31:18.291: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Feb 12 13:31:20.317: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb 12 13:31:20.348: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb 12 13:31:20.354: INFO: all replica sets need to contain the pod-template-hash label
Feb 12 13:31:20.354: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111078, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 13:31:22.373: INFO: all replica sets need to contain the pod-template-hash label
Feb 12 13:31:22.373: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111078, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 13:31:24.662: INFO: all replica sets need to contain the pod-template-hash label
Feb 12 13:31:24.662: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111078, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 13:31:26.374: INFO: all replica sets need to contain the pod-template-hash label
Feb 12 13:31:26.375: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111078, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 13:31:28.372: INFO: all replica sets need to contain the pod-template-hash label
Feb 12 13:31:28.373: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111087, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 13:31:30.373: INFO: all replica sets need to contain the pod-template-hash label
Feb 12 13:31:30.373: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111087, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 13:31:32.374: INFO: all replica sets need to contain the pod-template-hash label
Feb 12 13:31:32.374: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111087, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 13:31:34.372: INFO: all replica sets need to contain the pod-template-hash label
Feb 12 13:31:34.373: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111087, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 13:31:36.368: INFO: all replica sets need to contain the pod-template-hash label
Feb 12 13:31:36.369: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111087, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 13:31:38.391: INFO: 
Feb 12 13:31:38.391: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111098, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 13:31:40.486: INFO: 
Feb 12 13:31:40.486: INFO: Ensure that both old replica sets have no replicas
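The poll loop above repeats the deployment status dump until the rollover completes. Reading the dumps: the test waits for every replica to be updated to the new template, available, and for no unavailable replicas to remain. A sketch of that completeness predicate, using field names taken from the `v1.DeploymentStatus` dumps in this log (this approximates, not reproduces, the framework's actual check, which also verifies the pod-template-hash label and revision):

```python
def rollover_complete(status, desired_replicas=1):
    """True once a rollover has fully converged on the new ReplicaSet."""
    return (status["UpdatedReplicas"] == desired_replicas
            and status["Replicas"] == desired_replicas
            and status["AvailableReplicas"] == desired_replicas
            and status["UnavailableReplicas"] == 0)

# Mid-rollover snapshot from the 13:31:20 dump above -> not complete:
mid = {"Replicas": 2, "UpdatedReplicas": 1,
       "AvailableReplicas": 1, "UnavailableReplicas": 1}
# Final state from the post-test dump -> complete:
done = {"Replicas": 1, "UpdatedReplicas": 1,
        "AvailableReplicas": 1, "UnavailableReplicas": 0}
print(rollover_complete(mid), rollover_complete(done))
```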
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 12 13:31:40.515: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-8605,SelfLink:/apis/apps/v1/namespaces/deployment-8605/deployments/test-rollover-deployment,UID:b72a898a-fd1e-45d9-ba2a-dcbd43705bf4,ResourceVersion:24074308,Generation:2,CreationTimestamp:2020-02-12 13:31:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-12 13:31:16 +0000 UTC 2020-02-12 13:31:16 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-12 13:31:38 +0000 UTC 2020-02-12 13:31:16 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb 12 13:31:40.527: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-8605,SelfLink:/apis/apps/v1/namespaces/deployment-8605/replicasets/test-rollover-deployment-854595fc44,UID:a45f9884-ff03-4ba8-b304-5b338f4c4508,ResourceVersion:24074297,Generation:2,CreationTimestamp:2020-02-12 13:31:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b72a898a-fd1e-45d9-ba2a-dcbd43705bf4 0xc0027b28c7 0xc0027b28c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 12 13:31:40.527: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Feb 12 13:31:40.527: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-8605,SelfLink:/apis/apps/v1/namespaces/deployment-8605/replicasets/test-rollover-controller,UID:8a376524-1c7e-48ec-ab9b-96c8792c3420,ResourceVersion:24074307,Generation:2,CreationTimestamp:2020-02-12 13:31:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b72a898a-fd1e-45d9-ba2a-dcbd43705bf4 0xc0027b27f7 0xc0027b27f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 12 13:31:40.527: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-8605,SelfLink:/apis/apps/v1/namespaces/deployment-8605/replicasets/test-rollover-deployment-9b8b997cf,UID:1620bc1d-b07b-42ae-b784-c69f9994cf68,ResourceVersion:24074263,Generation:2,CreationTimestamp:2020-02-12 13:31:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b72a898a-fd1e-45d9-ba2a-dcbd43705bf4 0xc0027b2990 0xc0027b2991}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 12 13:31:40.533: INFO: Pod "test-rollover-deployment-854595fc44-grhns" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-grhns,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-8605,SelfLink:/api/v1/namespaces/deployment-8605/pods/test-rollover-deployment-854595fc44-grhns,UID:f7373f0e-e2f7-4e9a-85cf-24ae6892ce5e,ResourceVersion:24074282,Generation:0,CreationTimestamp:2020-02-12 13:31:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 a45f9884-ff03-4ba8-b304-5b338f4c4508 0xc0027b3587 0xc0027b3588}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qmnc8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qmnc8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-qmnc8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027b35f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027b3610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:31:18 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:31:27 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:31:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:31:18 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-02-12 13:31:18 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-12 13:31:26 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://e11dd11f43296079b16f9ffd073ff0ed25f445f14bd1618f17622d635d401f3c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:31:40.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8605" for this suite.
Feb 12 13:31:49.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:31:49.879: INFO: namespace deployment-8605 deletion completed in 9.341420258s

• [SLOW TEST:42.874 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:31:49.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb 12 13:31:50.064: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-7030,SelfLink:/api/v1/namespaces/watch-7030/configmaps/e2e-watch-test-resource-version,UID:ff60632f-d441-4b66-9e04-521ae04d5c32,ResourceVersion:24074366,Generation:0,CreationTimestamp:2020-02-12 13:31:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 12 13:31:50.064: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-7030,SelfLink:/api/v1/namespaces/watch-7030/configmaps/e2e-watch-test-resource-version,UID:ff60632f-d441-4b66-9e04-521ae04d5c32,ResourceVersion:24074367,Generation:0,CreationTimestamp:2020-02-12 13:31:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:31:50.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7030" for this suite.
Feb 12 13:31:56.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:31:56.312: INFO: namespace watch-7030 deletion completed in 6.170711911s

• [SLOW TEST:6.432 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:31:56.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 12 13:31:56.417: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2337e22b-7455-420e-9231-0df47c1cfb29" in namespace "downward-api-3882" to be "success or failure"
Feb 12 13:31:56.438: INFO: Pod "downwardapi-volume-2337e22b-7455-420e-9231-0df47c1cfb29": Phase="Pending", Reason="", readiness=false. Elapsed: 20.887475ms
Feb 12 13:31:58.450: INFO: Pod "downwardapi-volume-2337e22b-7455-420e-9231-0df47c1cfb29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033187001s
Feb 12 13:32:00.460: INFO: Pod "downwardapi-volume-2337e22b-7455-420e-9231-0df47c1cfb29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043426878s
Feb 12 13:32:02.474: INFO: Pod "downwardapi-volume-2337e22b-7455-420e-9231-0df47c1cfb29": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056927938s
Feb 12 13:32:04.484: INFO: Pod "downwardapi-volume-2337e22b-7455-420e-9231-0df47c1cfb29": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066893495s
Feb 12 13:32:06.503: INFO: Pod "downwardapi-volume-2337e22b-7455-420e-9231-0df47c1cfb29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.086312656s
STEP: Saw pod success
Feb 12 13:32:06.504: INFO: Pod "downwardapi-volume-2337e22b-7455-420e-9231-0df47c1cfb29" satisfied condition "success or failure"
Feb 12 13:32:06.516: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-2337e22b-7455-420e-9231-0df47c1cfb29 container client-container: 
STEP: delete the pod
Feb 12 13:32:06.566: INFO: Waiting for pod downwardapi-volume-2337e22b-7455-420e-9231-0df47c1cfb29 to disappear
Feb 12 13:32:06.629: INFO: Pod downwardapi-volume-2337e22b-7455-420e-9231-0df47c1cfb29 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:32:06.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3882" for this suite.
Feb 12 13:32:12.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:32:12.847: INFO: namespace downward-api-3882 deletion completed in 6.20920721s

• [SLOW TEST:16.535 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:32:12.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-5ebb989a-405f-4598-9b46-8bb486c06e71
STEP: Creating a pod to test consume configMaps
Feb 12 13:32:13.006: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b8a57cf9-d490-4231-ba80-9f97b895a50e" in namespace "projected-56" to be "success or failure"
Feb 12 13:32:13.016: INFO: Pod "pod-projected-configmaps-b8a57cf9-d490-4231-ba80-9f97b895a50e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.289221ms
Feb 12 13:32:15.021: INFO: Pod "pod-projected-configmaps-b8a57cf9-d490-4231-ba80-9f97b895a50e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014718622s
Feb 12 13:32:17.040: INFO: Pod "pod-projected-configmaps-b8a57cf9-d490-4231-ba80-9f97b895a50e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033106314s
Feb 12 13:32:19.047: INFO: Pod "pod-projected-configmaps-b8a57cf9-d490-4231-ba80-9f97b895a50e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040546734s
Feb 12 13:32:21.149: INFO: Pod "pod-projected-configmaps-b8a57cf9-d490-4231-ba80-9f97b895a50e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.142670805s
Feb 12 13:32:23.157: INFO: Pod "pod-projected-configmaps-b8a57cf9-d490-4231-ba80-9f97b895a50e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.150494907s
STEP: Saw pod success
Feb 12 13:32:23.157: INFO: Pod "pod-projected-configmaps-b8a57cf9-d490-4231-ba80-9f97b895a50e" satisfied condition "success or failure"
Feb 12 13:32:23.165: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-b8a57cf9-d490-4231-ba80-9f97b895a50e container projected-configmap-volume-test: 
STEP: delete the pod
Feb 12 13:32:23.293: INFO: Waiting for pod pod-projected-configmaps-b8a57cf9-d490-4231-ba80-9f97b895a50e to disappear
Feb 12 13:32:23.296: INFO: Pod pod-projected-configmaps-b8a57cf9-d490-4231-ba80-9f97b895a50e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:32:23.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-56" for this suite.
Feb 12 13:32:29.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:32:29.523: INFO: namespace projected-56 deletion completed in 6.22254688s

• [SLOW TEST:16.675 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:32:29.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Feb 12 13:32:29.627: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb 12 13:32:29.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1651'
Feb 12 13:32:30.372: INFO: stderr: ""
Feb 12 13:32:30.373: INFO: stdout: "service/redis-slave created\n"
Feb 12 13:32:30.373: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb 12 13:32:30.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1651'
Feb 12 13:32:30.911: INFO: stderr: ""
Feb 12 13:32:30.912: INFO: stdout: "service/redis-master created\n"
Feb 12 13:32:30.913: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb 12 13:32:30.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1651'
Feb 12 13:32:31.548: INFO: stderr: ""
Feb 12 13:32:31.548: INFO: stdout: "service/frontend created\n"
Feb 12 13:32:31.548: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb 12 13:32:31.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1651'
Feb 12 13:32:32.053: INFO: stderr: ""
Feb 12 13:32:32.053: INFO: stdout: "deployment.apps/frontend created\n"
Feb 12 13:32:32.053: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 12 13:32:32.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1651'
Feb 12 13:32:32.720: INFO: stderr: ""
Feb 12 13:32:32.720: INFO: stdout: "deployment.apps/redis-master created\n"
Feb 12 13:32:32.721: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb 12 13:32:32.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1651'
Feb 12 13:32:33.917: INFO: stderr: ""
Feb 12 13:32:33.917: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Feb 12 13:32:33.917: INFO: Waiting for all frontend pods to be Running.
Feb 12 13:32:58.969: INFO: Waiting for frontend to serve content.
Feb 12 13:32:59.036: INFO: Trying to add a new entry to the guestbook.
Feb 12 13:32:59.088: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Feb 12 13:32:59.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1651'
Feb 12 13:32:59.321: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 12 13:32:59.321: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb 12 13:32:59.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1651'
Feb 12 13:32:59.475: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 12 13:32:59.476: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 12 13:32:59.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1651'
Feb 12 13:32:59.657: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 12 13:32:59.657: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 12 13:32:59.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1651'
Feb 12 13:32:59.767: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 12 13:32:59.767: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 12 13:32:59.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1651'
Feb 12 13:32:59.910: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 12 13:32:59.910: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 12 13:32:59.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1651'
Feb 12 13:33:00.211: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 12 13:33:00.211: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:33:00.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1651" for this suite.
Feb 12 13:33:40.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:33:40.924: INFO: namespace kubectl-1651 deletion completed in 40.690433986s

• [SLOW TEST:71.400 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:33:40.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Feb 12 13:33:41.558: INFO: created pod pod-service-account-defaultsa
Feb 12 13:33:41.559: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb 12 13:33:41.573: INFO: created pod pod-service-account-mountsa
Feb 12 13:33:41.574: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb 12 13:33:41.707: INFO: created pod pod-service-account-nomountsa
Feb 12 13:33:41.707: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb 12 13:33:41.723: INFO: created pod pod-service-account-defaultsa-mountspec
Feb 12 13:33:41.723: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb 12 13:33:41.859: INFO: created pod pod-service-account-mountsa-mountspec
Feb 12 13:33:41.859: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb 12 13:33:41.930: INFO: created pod pod-service-account-nomountsa-mountspec
Feb 12 13:33:41.930: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb 12 13:33:42.114: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb 12 13:33:42.114: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb 12 13:33:42.143: INFO: created pod pod-service-account-mountsa-nomountspec
Feb 12 13:33:42.143: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb 12 13:33:42.208: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb 12 13:33:42.208: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:33:42.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1991" for this suite.
Feb 12 13:34:22.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:34:23.134: INFO: namespace svcaccounts-1991 deletion completed in 39.479932903s

• [SLOW TEST:42.209 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
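Editor's note: the automount matrix this spec walks through (ServiceAccount-level setting vs. pod-spec override, with the pod spec taking precedence — compare `pod-service-account-nomountsa-mountspec: true` vs. `pod-service-account-mountsa-nomountspec: false` above) can be reproduced with manifests along these lines. Names are illustrative, not the ones the test generates:

```yaml
# ServiceAccount that opts out of token automount for pods that use it.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa                    # hypothetical name
automountServiceAccountToken: false
---
# The pod-level field overrides the ServiceAccount: this pod DOES get a
# token volume even though its ServiceAccount opted out.
apiVersion: v1
kind: Pod
metadata:
  name: nomountsa-mountspec           # hypothetical name
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: true
  containers:
  - name: main
    image: busybox                    # any image works for this demonstration
    command: ["sleep", "3600"]
```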
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:34:23.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-61d8b92a-93b3-43bd-a70e-6c6522f84e7b in namespace container-probe-140
Feb 12 13:34:33.380: INFO: Started pod liveness-61d8b92a-93b3-43bd-a70e-6c6522f84e7b in namespace container-probe-140
STEP: checking the pod's current state and verifying that restartCount is present
Feb 12 13:34:33.384: INFO: Initial restart count of pod liveness-61d8b92a-93b3-43bd-a70e-6c6522f84e7b is 0
Feb 12 13:34:57.632: INFO: Restart count of pod container-probe-140/liveness-61d8b92a-93b3-43bd-a70e-6c6522f84e7b is now 1 (24.248719127s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:34:57.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-140" for this suite.
Feb 12 13:35:03.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:35:03.956: INFO: namespace container-probe-140 deletion completed in 6.210527241s

• [SLOW TEST:40.822 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
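Editor's note: a pod equivalent to the `liveness-…` pod created above — restarted once its `/healthz` endpoint starts failing, which is why the restart count goes 0 → 1 after ~24s — looks roughly like this. The image and timings follow the standard Kubernetes liveness-probe example and are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http                 # hypothetical name
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness        # serves /healthz OK briefly, then returns 500s
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3          # wait before the first probe
      periodSeconds: 3                # probe every 3s
      failureThreshold: 1             # one failure triggers a restart
```

When the probe fails, the kubelet kills the container and restarts it per the pod's `restartPolicy`, incrementing `restartCount` exactly as the log records.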
SSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:35:03.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Feb 12 13:35:12.128: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Feb 12 13:35:27.317: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:35:27.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6053" for this suite.
Feb 12 13:35:33.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:35:33.523: INFO: namespace pods-6053 deletion completed in 6.187402543s

• [SLOW TEST:29.567 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
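Editor's note: the graceful deletion verified here ("kubelet observed the termination notice") is governed by the pod's grace period, which can be set in the spec or overridden per deletion request. A sketch, with hypothetical names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: graceful-pod                  # hypothetical name
spec:
  # On delete, the kubelet sends SIGTERM, waits up to this many seconds,
  # then sends SIGKILL. 30 is the API default.
  terminationGracePeriodSeconds: 30
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
```

A single deletion can override the spec value, e.g. `kubectl delete pod graceful-pod --grace-period=5`.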
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:35:33.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0212 13:36:03.757348       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 12 13:36:03.757: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:36:03.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6181" for this suite.
Feb 12 13:36:09.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:36:09.931: INFO: namespace gc-6181 deletion completed in 6.168460117s

• [SLOW TEST:36.408 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
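Editor's note: the orphaning behavior tested above is requested through `deleteOptions` on the delete call. With kubectl the equivalent is `--cascade=false` on the v1.15-era client used here (`--cascade=orphan` on v1.20+); against the raw API it is a request body along these lines (field values per the meta/v1 `DeleteOptions` type):

```yaml
# Body for DELETE /apis/apps/v1/namespaces/<ns>/deployments/<name>
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan   # delete the Deployment but leave the ReplicaSet (and Pods) behind
```

The garbage collector then strips the owner reference from the ReplicaSet instead of deleting it, which is exactly what the 30-second wait above is checking.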
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:36:09.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-b7b1eff1-a32d-4276-b8a8-557f44a57c8f
STEP: Creating a pod to test consume configMaps
Feb 12 13:36:11.375: INFO: Waiting up to 5m0s for pod "pod-configmaps-458d5682-af2c-4200-8efd-938bb238fc25" in namespace "configmap-6881" to be "success or failure"
Feb 12 13:36:11.483: INFO: Pod "pod-configmaps-458d5682-af2c-4200-8efd-938bb238fc25": Phase="Pending", Reason="", readiness=false. Elapsed: 107.929012ms
Feb 12 13:36:13.527: INFO: Pod "pod-configmaps-458d5682-af2c-4200-8efd-938bb238fc25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152237907s
Feb 12 13:36:15.536: INFO: Pod "pod-configmaps-458d5682-af2c-4200-8efd-938bb238fc25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161546116s
Feb 12 13:36:17.545: INFO: Pod "pod-configmaps-458d5682-af2c-4200-8efd-938bb238fc25": Phase="Pending", Reason="", readiness=false. Elapsed: 6.169899329s
Feb 12 13:36:19.558: INFO: Pod "pod-configmaps-458d5682-af2c-4200-8efd-938bb238fc25": Phase="Pending", Reason="", readiness=false. Elapsed: 8.183541613s
Feb 12 13:36:21.566: INFO: Pod "pod-configmaps-458d5682-af2c-4200-8efd-938bb238fc25": Phase="Running", Reason="", readiness=true. Elapsed: 10.191305755s
Feb 12 13:36:23.576: INFO: Pod "pod-configmaps-458d5682-af2c-4200-8efd-938bb238fc25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.201268027s
STEP: Saw pod success
Feb 12 13:36:23.576: INFO: Pod "pod-configmaps-458d5682-af2c-4200-8efd-938bb238fc25" satisfied condition "success or failure"
Feb 12 13:36:23.582: INFO: Trying to get logs from node iruya-node pod pod-configmaps-458d5682-af2c-4200-8efd-938bb238fc25 container configmap-volume-test: 
STEP: delete the pod
Feb 12 13:36:23.731: INFO: Waiting for pod pod-configmaps-458d5682-af2c-4200-8efd-938bb238fc25 to disappear
Feb 12 13:36:23.739: INFO: Pod pod-configmaps-458d5682-af2c-4200-8efd-938bb238fc25 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:36:23.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6881" for this suite.
Feb 12 13:36:29.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:36:29.922: INFO: namespace configmap-6881 deletion completed in 6.173043715s

• [SLOW TEST:19.990 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
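Editor's note: the non-root ConfigMap consumption exercised here combines a pod-level `securityContext` with a ConfigMap volume; a minimal sketch with hypothetical names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cm-nonroot                    # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                   # run as a non-root UID, as the test does
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/cm/data-1"]  # hypothetical key name
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: my-config                 # hypothetical ConfigMap name
```

The pod reaching `Succeeded` (as in the log) confirms the non-root user could read the projected file.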
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:36:29.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Feb 12 13:36:30.070: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Feb 12 13:36:30.598: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Feb 12 13:36:33.156: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111390, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111390, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111390, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111390, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 13:36:35.211: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111390, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111390, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111390, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111390, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 13:36:37.163: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111390, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111390, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111390, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111390, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 13:36:39.166: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111390, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111390, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111390, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111390, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 13:36:42.244: INFO: Waited 1.05414277s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:36:42.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-7885" for this suite.
Feb 12 13:36:49.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:36:49.207: INFO: namespace aggregator-7885 deletion completed in 6.353973555s

• [SLOW TEST:19.285 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
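Editor's note: "Registering the sample API server" means creating an `APIService` object that tells the aggregation layer to proxy one API group/version to a Service fronting the extension server's Deployment. Group, names, and namespace below are illustrative:

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com   # must be <version>.<group>
spec:
  group: wardle.example.com
  version: v1alpha1
  service:
    name: sample-api                  # Service in front of sample-apiserver-deployment
    namespace: default
  caBundle: <base64 CA>               # CA that signed the extension server's serving cert
  groupPriorityMinimum: 2000
  versionPriority: 200
```

The `MinimumReplicasUnavailable` / `ReplicaSetUpdated` conditions polled in the log are just the sample server's Deployment rolling out before the aggregator can route to it.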
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:36:49.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-839dc37a-3a84-4a06-a8b2-71ca6a7b0f1a
STEP: Creating a pod to test consume secrets
Feb 12 13:36:49.300: INFO: Waiting up to 5m0s for pod "pod-secrets-b72f8974-410b-4360-94e9-06fcbcb93e58" in namespace "secrets-5858" to be "success or failure"
Feb 12 13:36:49.310: INFO: Pod "pod-secrets-b72f8974-410b-4360-94e9-06fcbcb93e58": Phase="Pending", Reason="", readiness=false. Elapsed: 10.05665ms
Feb 12 13:36:51.317: INFO: Pod "pod-secrets-b72f8974-410b-4360-94e9-06fcbcb93e58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016669806s
Feb 12 13:36:53.325: INFO: Pod "pod-secrets-b72f8974-410b-4360-94e9-06fcbcb93e58": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0253474s
Feb 12 13:36:55.332: INFO: Pod "pod-secrets-b72f8974-410b-4360-94e9-06fcbcb93e58": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032618758s
Feb 12 13:36:57.345: INFO: Pod "pod-secrets-b72f8974-410b-4360-94e9-06fcbcb93e58": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045550022s
Feb 12 13:36:59.353: INFO: Pod "pod-secrets-b72f8974-410b-4360-94e9-06fcbcb93e58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.053071924s
STEP: Saw pod success
Feb 12 13:36:59.353: INFO: Pod "pod-secrets-b72f8974-410b-4360-94e9-06fcbcb93e58" satisfied condition "success or failure"
Feb 12 13:36:59.358: INFO: Trying to get logs from node iruya-node pod pod-secrets-b72f8974-410b-4360-94e9-06fcbcb93e58 container secret-volume-test: 
STEP: delete the pod
Feb 12 13:36:59.425: INFO: Waiting for pod pod-secrets-b72f8974-410b-4360-94e9-06fcbcb93e58 to disappear
Feb 12 13:36:59.435: INFO: Pod pod-secrets-b72f8974-410b-4360-94e9-06fcbcb93e58 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:36:59.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5858" for this suite.
Feb 12 13:37:05.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:37:05.657: INFO: namespace secrets-5858 deletion completed in 6.212891139s

• [SLOW TEST:16.450 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
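Editor's note: the `defaultMode` behavior tested here sets the file permission bits on every key projected from the Secret; a minimal sketch with hypothetical names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-defaultmode            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["ls", "-l", "/etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: my-secret           # hypothetical Secret name
      defaultMode: 0400               # files readable only by the owner
```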
S
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:37:05.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 12 13:37:05.795: INFO: Waiting up to 5m0s for pod "downward-api-d5eab80e-0bc1-43f0-ba68-7039b880eb34" in namespace "downward-api-3465" to be "success or failure"
Feb 12 13:37:05.816: INFO: Pod "downward-api-d5eab80e-0bc1-43f0-ba68-7039b880eb34": Phase="Pending", Reason="", readiness=false. Elapsed: 21.049202ms
Feb 12 13:37:07.833: INFO: Pod "downward-api-d5eab80e-0bc1-43f0-ba68-7039b880eb34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037246895s
Feb 12 13:37:09.865: INFO: Pod "downward-api-d5eab80e-0bc1-43f0-ba68-7039b880eb34": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069233724s
Feb 12 13:37:11.881: INFO: Pod "downward-api-d5eab80e-0bc1-43f0-ba68-7039b880eb34": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085684301s
Feb 12 13:37:13.901: INFO: Pod "downward-api-d5eab80e-0bc1-43f0-ba68-7039b880eb34": Phase="Pending", Reason="", readiness=false. Elapsed: 8.105356501s
Feb 12 13:37:15.909: INFO: Pod "downward-api-d5eab80e-0bc1-43f0-ba68-7039b880eb34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.113139702s
STEP: Saw pod success
Feb 12 13:37:15.909: INFO: Pod "downward-api-d5eab80e-0bc1-43f0-ba68-7039b880eb34" satisfied condition "success or failure"
Feb 12 13:37:15.913: INFO: Trying to get logs from node iruya-node pod downward-api-d5eab80e-0bc1-43f0-ba68-7039b880eb34 container dapi-container: 
STEP: delete the pod
Feb 12 13:37:15.985: INFO: Waiting for pod downward-api-d5eab80e-0bc1-43f0-ba68-7039b880eb34 to disappear
Feb 12 13:37:15.991: INFO: Pod downward-api-d5eab80e-0bc1-43f0-ba68-7039b880eb34 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:37:15.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3465" for this suite.
Feb 12 13:37:22.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:37:22.172: INFO: namespace downward-api-3465 deletion completed in 6.174525189s

• [SLOW TEST:16.515 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
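Editor's note: exposing the host IP as an env var, as this spec does, uses a downward-API `fieldRef` on `status.hostIP`. A minimal sketch with hypothetical names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dapi-host-ip                  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP    # the node IP the pod landed on
```

The test then reads the container log (the `Trying to get logs` line above) and checks the printed value against the node's address.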
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:37:22.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-2a3ed00d-b5c7-4905-ab4d-89e4c53a856d
STEP: Creating a pod to test consume configMaps
Feb 12 13:37:22.357: INFO: Waiting up to 5m0s for pod "pod-configmaps-9e52ee81-c0b3-46d4-82e2-996494392b8f" in namespace "configmap-7367" to be "success or failure"
Feb 12 13:37:22.394: INFO: Pod "pod-configmaps-9e52ee81-c0b3-46d4-82e2-996494392b8f": Phase="Pending", Reason="", readiness=false. Elapsed: 36.723413ms
Feb 12 13:37:24.401: INFO: Pod "pod-configmaps-9e52ee81-c0b3-46d4-82e2-996494392b8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043888175s
Feb 12 13:37:26.426: INFO: Pod "pod-configmaps-9e52ee81-c0b3-46d4-82e2-996494392b8f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069052924s
Feb 12 13:37:28.431: INFO: Pod "pod-configmaps-9e52ee81-c0b3-46d4-82e2-996494392b8f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073924072s
Feb 12 13:37:30.440: INFO: Pod "pod-configmaps-9e52ee81-c0b3-46d4-82e2-996494392b8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.082974225s
STEP: Saw pod success
Feb 12 13:37:30.440: INFO: Pod "pod-configmaps-9e52ee81-c0b3-46d4-82e2-996494392b8f" satisfied condition "success or failure"
Feb 12 13:37:30.446: INFO: Trying to get logs from node iruya-node pod pod-configmaps-9e52ee81-c0b3-46d4-82e2-996494392b8f container configmap-volume-test: 
STEP: delete the pod
Feb 12 13:37:30.584: INFO: Waiting for pod pod-configmaps-9e52ee81-c0b3-46d4-82e2-996494392b8f to disappear
Feb 12 13:37:30.593: INFO: Pod pod-configmaps-9e52ee81-c0b3-46d4-82e2-996494392b8f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:37:30.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7367" for this suite.
Feb 12 13:37:36.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:37:36.734: INFO: namespace configmap-7367 deletion completed in 6.133802708s

• [SLOW TEST:14.561 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
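Editor's note: "with mappings" refers to the `items` list on a ConfigMap volume, which projects selected keys to chosen relative paths instead of one file per key at the mount root. Only the volume stanza differs from the earlier ConfigMap test; key and path names here are hypothetical:

```yaml
volumes:
- name: cm
  configMap:
    name: my-config                   # hypothetical ConfigMap name
    items:
    - key: data-1                     # key in the ConfigMap
      path: path/to/data-2            # exposed at <mountPath>/path/to/data-2
```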
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:37:36.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 12 13:37:48.061: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:37:48.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9298" for this suite.
Feb 12 13:37:54.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:37:54.415: INFO: namespace container-runtime-9298 deletion completed in 6.1484855s

• [SLOW TEST:17.681 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
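Editor's note: the `Expected: &{OK} to match Container's Termination Message: OK` line above is the test reading the message the container wrote to its termination-message file. A pod exercising the same policy, with hypothetical names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-msg               # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    # Write "OK" to the termination message file and exit 0.
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log      # the default path
    # Fall back to the tail of the container log only when the file is
    # empty AND the container exited with an error; here the file wins.
    terminationMessagePolicy: FallbackToLogsOnError
```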
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:37:54.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 12 13:37:54.618: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"e4feee08-4392-471e-9361-91479838a997", Controller:(*bool)(0xc0015d663a), BlockOwnerDeletion:(*bool)(0xc0015d663b)}}
Feb 12 13:37:54.638: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"7903ff4a-faed-403e-ab5a-672dea838f3e", Controller:(*bool)(0xc002b8e73a), BlockOwnerDeletion:(*bool)(0xc002b8e73b)}}
Feb 12 13:37:54.775: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"2bad35f1-0325-470d-93e6-ff0683bac86b", Controller:(*bool)(0xc0015d67fa), BlockOwnerDeletion:(*bool)(0xc0015d67fb)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:37:59.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7290" for this suite.
Feb 12 13:38:05.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:38:06.059: INFO: namespace gc-7290 deletion completed in 6.188794907s

• [SLOW TEST:11.644 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
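The OwnerReferences logged above form a deliberate cycle (pod1 → pod3 → pod2 → pod1), and the spec verifies that the garbage collector is not blocked by it. The cycle can be seen with a simple pointer-chasing check (a hypothetical helper for illustration; the real GC walks `OwnerReferences` in object metadata):

```python
def has_owner_cycle(owners):
    """Return True if the owner graph contains a cycle.

    `owners` maps an object name to the name of its owner, or None
    for an object with no owner reference.
    """
    for start in owners:
        seen = set()
        node = start
        while node is not None:
            if node in seen:
                return True  # revisited a node: dependency circle
            seen.add(node)
            node = owners.get(node)
    return False

# The three pods logged above form exactly such a circle:
pods = {"pod1": "pod3", "pod2": "pod1", "pod3": "pod2"}
```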
SS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:38:06.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb 12 13:38:06.195: INFO: Waiting up to 5m0s for pod "pod-468e4b5c-15e7-4945-b6ed-764633c57898" in namespace "emptydir-7639" to be "success or failure"
Feb 12 13:38:06.222: INFO: Pod "pod-468e4b5c-15e7-4945-b6ed-764633c57898": Phase="Pending", Reason="", readiness=false. Elapsed: 25.995659ms
Feb 12 13:38:08.232: INFO: Pod "pod-468e4b5c-15e7-4945-b6ed-764633c57898": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036202417s
Feb 12 13:38:10.242: INFO: Pod "pod-468e4b5c-15e7-4945-b6ed-764633c57898": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045835698s
Feb 12 13:38:12.260: INFO: Pod "pod-468e4b5c-15e7-4945-b6ed-764633c57898": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06431553s
Feb 12 13:38:14.273: INFO: Pod "pod-468e4b5c-15e7-4945-b6ed-764633c57898": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077819943s
Feb 12 13:38:16.282: INFO: Pod "pod-468e4b5c-15e7-4945-b6ed-764633c57898": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.086228302s
STEP: Saw pod success
Feb 12 13:38:16.282: INFO: Pod "pod-468e4b5c-15e7-4945-b6ed-764633c57898" satisfied condition "success or failure"
Feb 12 13:38:16.288: INFO: Trying to get logs from node iruya-node pod pod-468e4b5c-15e7-4945-b6ed-764633c57898 container test-container: 
STEP: delete the pod
Feb 12 13:38:16.546: INFO: Waiting for pod pod-468e4b5c-15e7-4945-b6ed-764633c57898 to disappear
Feb 12 13:38:16.566: INFO: Pod pod-468e4b5c-15e7-4945-b6ed-764633c57898 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:38:16.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7639" for this suite.
Feb 12 13:38:22.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:38:22.877: INFO: namespace emptydir-7639 deletion completed in 6.302622207s

• [SLOW TEST:16.818 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
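The repeated `Phase="Pending" ... Elapsed: ...` lines above come from the framework's poll-until-terminal loop ("Waiting up to 5m0s for pod ... to be 'success or failure'"). A sketch of that pattern, in spirit only — names, signature, and the injectable `sleep` hook are invented for this example:

```python
import time

def wait_for_phase(get_phase, timeout_s=300, interval_s=2.0, sleep=time.sleep):
    """Poll get_phase() until it reports a terminal pod phase or we time out.

    Mirrors the e2e framework's polling loop in spirit: check the phase,
    sleep a short interval, and repeat until Succeeded/Failed or timeout.
    """
    waited = 0.0
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        if waited >= timeout_s:
            raise TimeoutError(f"pod still {phase} after {timeout_s}s")
        sleep(interval_s)
        waited += interval_s
```

In the run above the pod stayed Pending for five polls (~10s) before reaching Succeeded, comfortably inside the 5m budget.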
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:38:22.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-64e92925-fce4-4779-bf7e-bea91e9484fe
STEP: Creating a pod to test consume secrets
Feb 12 13:38:23.007: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8aeba1a1-9d3e-44a3-806f-f287f947381b" in namespace "projected-5588" to be "success or failure"
Feb 12 13:38:23.027: INFO: Pod "pod-projected-secrets-8aeba1a1-9d3e-44a3-806f-f287f947381b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.989357ms
Feb 12 13:38:25.297: INFO: Pod "pod-projected-secrets-8aeba1a1-9d3e-44a3-806f-f287f947381b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290125494s
Feb 12 13:38:27.307: INFO: Pod "pod-projected-secrets-8aeba1a1-9d3e-44a3-806f-f287f947381b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.299256575s
Feb 12 13:38:29.315: INFO: Pod "pod-projected-secrets-8aeba1a1-9d3e-44a3-806f-f287f947381b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.30748499s
Feb 12 13:38:31.323: INFO: Pod "pod-projected-secrets-8aeba1a1-9d3e-44a3-806f-f287f947381b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.31599672s
Feb 12 13:38:33.332: INFO: Pod "pod-projected-secrets-8aeba1a1-9d3e-44a3-806f-f287f947381b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.324359049s
STEP: Saw pod success
Feb 12 13:38:33.332: INFO: Pod "pod-projected-secrets-8aeba1a1-9d3e-44a3-806f-f287f947381b" satisfied condition "success or failure"
Feb 12 13:38:33.335: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-8aeba1a1-9d3e-44a3-806f-f287f947381b container projected-secret-volume-test: 
STEP: delete the pod
Feb 12 13:38:33.410: INFO: Waiting for pod pod-projected-secrets-8aeba1a1-9d3e-44a3-806f-f287f947381b to disappear
Feb 12 13:38:33.450: INFO: Pod pod-projected-secrets-8aeba1a1-9d3e-44a3-806f-f287f947381b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:38:33.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5588" for this suite.
Feb 12 13:38:39.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:38:39.627: INFO: namespace projected-5588 deletion completed in 6.15832185s

• [SLOW TEST:16.749 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:38:39.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-946965b2-caf2-400b-8968-55b41f81fa61
STEP: Creating a pod to test consume secrets
Feb 12 13:38:39.771: INFO: Waiting up to 5m0s for pod "pod-secrets-3461e739-5407-4b43-ad4b-61e14f068190" in namespace "secrets-6750" to be "success or failure"
Feb 12 13:38:39.778: INFO: Pod "pod-secrets-3461e739-5407-4b43-ad4b-61e14f068190": Phase="Pending", Reason="", readiness=false. Elapsed: 6.626194ms
Feb 12 13:38:41.798: INFO: Pod "pod-secrets-3461e739-5407-4b43-ad4b-61e14f068190": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026713956s
Feb 12 13:38:43.815: INFO: Pod "pod-secrets-3461e739-5407-4b43-ad4b-61e14f068190": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044030261s
Feb 12 13:38:45.824: INFO: Pod "pod-secrets-3461e739-5407-4b43-ad4b-61e14f068190": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053128921s
Feb 12 13:38:47.832: INFO: Pod "pod-secrets-3461e739-5407-4b43-ad4b-61e14f068190": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060550379s
Feb 12 13:38:49.839: INFO: Pod "pod-secrets-3461e739-5407-4b43-ad4b-61e14f068190": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.067479517s
STEP: Saw pod success
Feb 12 13:38:49.839: INFO: Pod "pod-secrets-3461e739-5407-4b43-ad4b-61e14f068190" satisfied condition "success or failure"
Feb 12 13:38:49.841: INFO: Trying to get logs from node iruya-node pod pod-secrets-3461e739-5407-4b43-ad4b-61e14f068190 container secret-volume-test: 
STEP: delete the pod
Feb 12 13:38:50.296: INFO: Waiting for pod pod-secrets-3461e739-5407-4b43-ad4b-61e14f068190 to disappear
Feb 12 13:38:50.404: INFO: Pod pod-secrets-3461e739-5407-4b43-ad4b-61e14f068190 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:38:50.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6750" for this suite.
Feb 12 13:38:56.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:38:56.644: INFO: namespace secrets-6750 deletion completed in 6.230376439s

• [SLOW TEST:17.017 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:38:56.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-6206/secret-test-df04230b-add1-4842-bfbb-b51e0506648f
STEP: Creating a pod to test consume secrets
Feb 12 13:38:56.803: INFO: Waiting up to 5m0s for pod "pod-configmaps-ecdcaf10-6089-4ebf-8d12-691e055b47e1" in namespace "secrets-6206" to be "success or failure"
Feb 12 13:38:56.810: INFO: Pod "pod-configmaps-ecdcaf10-6089-4ebf-8d12-691e055b47e1": Phase="Pending", Reason="", readiness=false. Elapsed: 7.341014ms
Feb 12 13:38:58.817: INFO: Pod "pod-configmaps-ecdcaf10-6089-4ebf-8d12-691e055b47e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013515998s
Feb 12 13:39:00.825: INFO: Pod "pod-configmaps-ecdcaf10-6089-4ebf-8d12-691e055b47e1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022337306s
Feb 12 13:39:02.867: INFO: Pod "pod-configmaps-ecdcaf10-6089-4ebf-8d12-691e055b47e1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063500617s
Feb 12 13:39:04.876: INFO: Pod "pod-configmaps-ecdcaf10-6089-4ebf-8d12-691e055b47e1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073396494s
Feb 12 13:39:06.886: INFO: Pod "pod-configmaps-ecdcaf10-6089-4ebf-8d12-691e055b47e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.082764403s
STEP: Saw pod success
Feb 12 13:39:06.886: INFO: Pod "pod-configmaps-ecdcaf10-6089-4ebf-8d12-691e055b47e1" satisfied condition "success or failure"
Feb 12 13:39:06.889: INFO: Trying to get logs from node iruya-node pod pod-configmaps-ecdcaf10-6089-4ebf-8d12-691e055b47e1 container env-test: 
STEP: delete the pod
Feb 12 13:39:07.012: INFO: Waiting for pod pod-configmaps-ecdcaf10-6089-4ebf-8d12-691e055b47e1 to disappear
Feb 12 13:39:07.032: INFO: Pod pod-configmaps-ecdcaf10-6089-4ebf-8d12-691e055b47e1 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:39:07.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6206" for this suite.
Feb 12 13:39:13.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:39:13.153: INFO: namespace secrets-6206 deletion completed in 6.11522247s

• [SLOW TEST:16.508 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:39:13.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Feb 12 13:39:13.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb 12 13:39:15.502: INFO: stderr: ""
Feb 12 13:39:15.502: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:39:15.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4276" for this suite.
Feb 12 13:39:21.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:39:21.659: INFO: namespace kubectl-4276 deletion completed in 6.148835711s

• [SLOW TEST:8.506 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
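The `cluster-info` stdout captured above is full of SGR color escapes (`\x1b[0;32m` etc.), which is why the raw log looks noisy; the check itself only needs the plain text ("Kubernetes master is running at ..."). Stripping those escapes is a one-regex job (the pattern below matches SGR color sequences only, which is all kubectl emits here):

```python
import re

# Matches SGR sequences such as \x1b[0;32m and the reset \x1b[0m.
ANSI_SGR = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(s):
    """Remove ANSI color escape sequences from kubectl output."""
    return ANSI_SGR.sub("", s)

raw = ("\x1b[0;32mKubernetes master\x1b[0m is running at "
       "\x1b[0;33mhttps://172.24.4.57:6443\x1b[0m")
```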
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:39:21.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-a443f14a-28a5-4dbb-9f38-586fcfb09d15
STEP: Creating a pod to test consume configMaps
Feb 12 13:39:21.806: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f132c624-65ba-496b-bf18-021b570229e0" in namespace "projected-53" to be "success or failure"
Feb 12 13:39:21.822: INFO: Pod "pod-projected-configmaps-f132c624-65ba-496b-bf18-021b570229e0": Phase="Pending", Reason="", readiness=false. Elapsed: 15.169605ms
Feb 12 13:39:23.839: INFO: Pod "pod-projected-configmaps-f132c624-65ba-496b-bf18-021b570229e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032837657s
Feb 12 13:39:25.853: INFO: Pod "pod-projected-configmaps-f132c624-65ba-496b-bf18-021b570229e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046246932s
Feb 12 13:39:27.865: INFO: Pod "pod-projected-configmaps-f132c624-65ba-496b-bf18-021b570229e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058087195s
Feb 12 13:39:29.874: INFO: Pod "pod-projected-configmaps-f132c624-65ba-496b-bf18-021b570229e0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067094613s
Feb 12 13:39:31.883: INFO: Pod "pod-projected-configmaps-f132c624-65ba-496b-bf18-021b570229e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.076915115s
STEP: Saw pod success
Feb 12 13:39:31.884: INFO: Pod "pod-projected-configmaps-f132c624-65ba-496b-bf18-021b570229e0" satisfied condition "success or failure"
Feb 12 13:39:31.886: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-f132c624-65ba-496b-bf18-021b570229e0 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 12 13:39:31.936: INFO: Waiting for pod pod-projected-configmaps-f132c624-65ba-496b-bf18-021b570229e0 to disappear
Feb 12 13:39:31.948: INFO: Pod pod-projected-configmaps-f132c624-65ba-496b-bf18-021b570229e0 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:39:31.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-53" for this suite.
Feb 12 13:39:38.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:39:38.230: INFO: namespace projected-53 deletion completed in 6.261531286s

• [SLOW TEST:16.571 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:39:38.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Feb 12 13:39:38.388: INFO: Waiting up to 5m0s for pod "client-containers-076e9e3a-cbab-431b-9b86-5e3dcfbaee01" in namespace "containers-3330" to be "success or failure"
Feb 12 13:39:38.408: INFO: Pod "client-containers-076e9e3a-cbab-431b-9b86-5e3dcfbaee01": Phase="Pending", Reason="", readiness=false. Elapsed: 20.406503ms
Feb 12 13:39:40.424: INFO: Pod "client-containers-076e9e3a-cbab-431b-9b86-5e3dcfbaee01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036141728s
Feb 12 13:39:42.431: INFO: Pod "client-containers-076e9e3a-cbab-431b-9b86-5e3dcfbaee01": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043756219s
Feb 12 13:39:44.446: INFO: Pod "client-containers-076e9e3a-cbab-431b-9b86-5e3dcfbaee01": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058217787s
Feb 12 13:39:46.462: INFO: Pod "client-containers-076e9e3a-cbab-431b-9b86-5e3dcfbaee01": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073992837s
Feb 12 13:39:48.472: INFO: Pod "client-containers-076e9e3a-cbab-431b-9b86-5e3dcfbaee01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.084249598s
STEP: Saw pod success
Feb 12 13:39:48.472: INFO: Pod "client-containers-076e9e3a-cbab-431b-9b86-5e3dcfbaee01" satisfied condition "success or failure"
Feb 12 13:39:48.477: INFO: Trying to get logs from node iruya-node pod client-containers-076e9e3a-cbab-431b-9b86-5e3dcfbaee01 container test-container: 
STEP: delete the pod
Feb 12 13:39:48.623: INFO: Waiting for pod client-containers-076e9e3a-cbab-431b-9b86-5e3dcfbaee01 to disappear
Feb 12 13:39:48.628: INFO: Pod client-containers-076e9e3a-cbab-431b-9b86-5e3dcfbaee01 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:39:48.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3330" for this suite.
Feb 12 13:39:54.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:39:54.829: INFO: namespace containers-3330 deletion completed in 6.195288562s

• [SLOW TEST:16.599 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:39:54.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:40:06.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7120" for this suite.
Feb 12 13:40:28.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:40:28.269: INFO: namespace replication-controller-7120 deletion completed in 22.224605815s

• [SLOW TEST:33.440 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:40:28.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:41:28.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2458" for this suite.
Feb 12 13:41:50.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:41:50.518: INFO: namespace container-probe-2458 deletion completed in 22.12566744s

• [SLOW TEST:82.249 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
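The spec above hinges on the distinction between probe types: a failing *readiness* probe keeps the pod out of the Ready state but never restarts the container; restarts come only from a failing *liveness* probe. A minimal sketch of that rule, on an invented container-status dict (illustration only, not kubelet code):

```python
def apply_probe_failure(kind, status):
    """Apply the effect of one failed probe to a container status dict.

    Illustrative sketch: readiness failures only clear readiness,
    while liveness failures also bump the restart count.
    """
    if kind == "readiness":
        status["ready"] = False
    elif kind == "liveness":
        status["ready"] = False
        status["restart_count"] += 1
    return status
```

This is why the test can assert "never ready and never restart" even after a full minute of probe failures.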
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:41:50.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 12 13:41:50.705: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 12 13:41:50.758: INFO: Waiting for terminating namespaces to be deleted...
Feb 12 13:41:50.761: INFO: 
Logging pods the kubelet thinks is on node iruya-node before test
Feb 12 13:41:50.784: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb 12 13:41:50.785: INFO: 	Container weave ready: true, restart count 0
Feb 12 13:41:50.785: INFO: 	Container weave-npc ready: true, restart count 0
Feb 12 13:41:50.785: INFO: kube-bench-j7kcs from default started at 2020-02-11 06:42:30 +0000 UTC (1 container statuses recorded)
Feb 12 13:41:50.785: INFO: 	Container kube-bench ready: false, restart count 0
Feb 12 13:41:50.785: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb 12 13:41:50.785: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 12 13:41:50.785: INFO: 
Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Feb 12 13:41:50.886: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb 12 13:41:50.886: INFO: 	Container etcd ready: true, restart count 0
Feb 12 13:41:50.886: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb 12 13:41:50.886: INFO: 	Container weave ready: true, restart count 0
Feb 12 13:41:50.886: INFO: 	Container weave-npc ready: true, restart count 0
Feb 12 13:41:50.886: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 12 13:41:50.886: INFO: 	Container coredns ready: true, restart count 0
Feb 12 13:41:50.886: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb 12 13:41:50.886: INFO: 	Container kube-controller-manager ready: true, restart count 21
Feb 12 13:41:50.886: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb 12 13:41:50.886: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 12 13:41:50.886: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb 12 13:41:50.886: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb 12 13:41:50.886: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb 12 13:41:50.886: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb 12 13:41:50.886: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 12 13:41:50.886: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15f2ab6ea958f335], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:41:51.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4809" for this suite.
Feb 12 13:41:57.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:41:58.075: INFO: namespace sched-pred-4809 deletion completed in 6.125897306s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.554 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:41:58.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5689
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 12 13:41:58.235: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 12 13:42:36.497: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-5689 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 13:42:36.497: INFO: >>> kubeConfig: /root/.kube/config
I0212 13:42:36.584544       8 log.go:172] (0xc000ccf600) (0xc0025de140) Create stream
I0212 13:42:36.584723       8 log.go:172] (0xc000ccf600) (0xc0025de140) Stream added, broadcasting: 1
I0212 13:42:36.603817       8 log.go:172] (0xc000ccf600) Reply frame received for 1
I0212 13:42:36.603942       8 log.go:172] (0xc000ccf600) (0xc0025de1e0) Create stream
I0212 13:42:36.603954       8 log.go:172] (0xc000ccf600) (0xc0025de1e0) Stream added, broadcasting: 3
I0212 13:42:36.628440       8 log.go:172] (0xc000ccf600) Reply frame received for 3
I0212 13:42:36.628481       8 log.go:172] (0xc000ccf600) (0xc0025de280) Create stream
I0212 13:42:36.628495       8 log.go:172] (0xc000ccf600) (0xc0025de280) Stream added, broadcasting: 5
I0212 13:42:36.630896       8 log.go:172] (0xc000ccf600) Reply frame received for 5
I0212 13:42:36.844850       8 log.go:172] (0xc000ccf600) Data frame received for 3
I0212 13:42:36.844918       8 log.go:172] (0xc0025de1e0) (3) Data frame handling
I0212 13:42:36.844945       8 log.go:172] (0xc0025de1e0) (3) Data frame sent
I0212 13:42:37.015690       8 log.go:172] (0xc000ccf600) (0xc0025de1e0) Stream removed, broadcasting: 3
I0212 13:42:37.015920       8 log.go:172] (0xc000ccf600) Data frame received for 1
I0212 13:42:37.016094       8 log.go:172] (0xc000ccf600) (0xc0025de280) Stream removed, broadcasting: 5
I0212 13:42:37.016154       8 log.go:172] (0xc0025de140) (1) Data frame handling
I0212 13:42:37.016196       8 log.go:172] (0xc0025de140) (1) Data frame sent
I0212 13:42:37.016209       8 log.go:172] (0xc000ccf600) (0xc0025de140) Stream removed, broadcasting: 1
I0212 13:42:37.016257       8 log.go:172] (0xc000ccf600) Go away received
I0212 13:42:37.016670       8 log.go:172] (0xc000ccf600) (0xc0025de140) Stream removed, broadcasting: 1
I0212 13:42:37.016728       8 log.go:172] (0xc000ccf600) (0xc0025de1e0) Stream removed, broadcasting: 3
I0212 13:42:37.016744       8 log.go:172] (0xc000ccf600) (0xc0025de280) Stream removed, broadcasting: 5
Feb 12 13:42:37.016: INFO: Waiting for endpoints: map[]
Feb 12 13:42:37.029: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-5689 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 13:42:37.029: INFO: >>> kubeConfig: /root/.kube/config
I0212 13:42:37.099627       8 log.go:172] (0xc0028182c0) (0xc0016694a0) Create stream
I0212 13:42:37.099679       8 log.go:172] (0xc0028182c0) (0xc0016694a0) Stream added, broadcasting: 1
I0212 13:42:37.107926       8 log.go:172] (0xc0028182c0) Reply frame received for 1
I0212 13:42:37.108023       8 log.go:172] (0xc0028182c0) (0xc0025de320) Create stream
I0212 13:42:37.108037       8 log.go:172] (0xc0028182c0) (0xc0025de320) Stream added, broadcasting: 3
I0212 13:42:37.109646       8 log.go:172] (0xc0028182c0) Reply frame received for 3
I0212 13:42:37.109666       8 log.go:172] (0xc0028182c0) (0xc001e2fa40) Create stream
I0212 13:42:37.109671       8 log.go:172] (0xc0028182c0) (0xc001e2fa40) Stream added, broadcasting: 5
I0212 13:42:37.110815       8 log.go:172] (0xc0028182c0) Reply frame received for 5
I0212 13:42:37.220834       8 log.go:172] (0xc0028182c0) Data frame received for 3
I0212 13:42:37.220990       8 log.go:172] (0xc0025de320) (3) Data frame handling
I0212 13:42:37.221049       8 log.go:172] (0xc0025de320) (3) Data frame sent
I0212 13:42:37.378371       8 log.go:172] (0xc0028182c0) (0xc0025de320) Stream removed, broadcasting: 3
I0212 13:42:37.378650       8 log.go:172] (0xc0028182c0) Data frame received for 1
I0212 13:42:37.378694       8 log.go:172] (0xc0016694a0) (1) Data frame handling
I0212 13:42:37.378724       8 log.go:172] (0xc0016694a0) (1) Data frame sent
I0212 13:42:37.378757       8 log.go:172] (0xc0028182c0) (0xc0016694a0) Stream removed, broadcasting: 1
I0212 13:42:37.378898       8 log.go:172] (0xc0028182c0) (0xc001e2fa40) Stream removed, broadcasting: 5
I0212 13:42:37.378961       8 log.go:172] (0xc0028182c0) Go away received
I0212 13:42:37.379029       8 log.go:172] (0xc0028182c0) (0xc0016694a0) Stream removed, broadcasting: 1
I0212 13:42:37.379047       8 log.go:172] (0xc0028182c0) (0xc0025de320) Stream removed, broadcasting: 3
I0212 13:42:37.379061       8 log.go:172] (0xc0028182c0) (0xc001e2fa40) Stream removed, broadcasting: 5
Feb 12 13:42:37.379: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:42:37.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5689" for this suite.
Feb 12 13:43:01.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:43:01.590: INFO: namespace pod-network-test-5689 deletion completed in 24.20064005s

• [SLOW TEST:63.515 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:43:01.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 12 13:43:10.840: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:43:10.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3457" for this suite.
Feb 12 13:43:18.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:43:19.070: INFO: namespace container-runtime-3457 deletion completed in 8.143446577s

• [SLOW TEST:17.480 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:43:19.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 12 13:43:19.205: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ffedab21-cf46-4ec9-b628-8b0c4a05743a" in namespace "projected-3598" to be "success or failure"
Feb 12 13:43:19.214: INFO: Pod "downwardapi-volume-ffedab21-cf46-4ec9-b628-8b0c4a05743a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.244646ms
Feb 12 13:43:21.223: INFO: Pod "downwardapi-volume-ffedab21-cf46-4ec9-b628-8b0c4a05743a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017786776s
Feb 12 13:43:23.228: INFO: Pod "downwardapi-volume-ffedab21-cf46-4ec9-b628-8b0c4a05743a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022925437s
Feb 12 13:43:25.237: INFO: Pod "downwardapi-volume-ffedab21-cf46-4ec9-b628-8b0c4a05743a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031853136s
Feb 12 13:43:27.245: INFO: Pod "downwardapi-volume-ffedab21-cf46-4ec9-b628-8b0c4a05743a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.040152209s
Feb 12 13:43:29.252: INFO: Pod "downwardapi-volume-ffedab21-cf46-4ec9-b628-8b0c4a05743a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.047335049s
STEP: Saw pod success
Feb 12 13:43:29.252: INFO: Pod "downwardapi-volume-ffedab21-cf46-4ec9-b628-8b0c4a05743a" satisfied condition "success or failure"
Feb 12 13:43:29.258: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ffedab21-cf46-4ec9-b628-8b0c4a05743a container client-container: 
STEP: delete the pod
Feb 12 13:43:29.428: INFO: Waiting for pod downwardapi-volume-ffedab21-cf46-4ec9-b628-8b0c4a05743a to disappear
Feb 12 13:43:29.437: INFO: Pod downwardapi-volume-ffedab21-cf46-4ec9-b628-8b0c4a05743a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:43:29.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3598" for this suite.
Feb 12 13:43:35.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:43:35.667: INFO: namespace projected-3598 deletion completed in 6.221724949s

• [SLOW TEST:16.596 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:43:35.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 12 13:43:59.921: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 12 13:43:59.934: INFO: Pod pod-with-prestop-http-hook still exists
Feb 12 13:44:01.934: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 12 13:44:01.950: INFO: Pod pod-with-prestop-http-hook still exists
Feb 12 13:44:03.934: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 12 13:44:03.975: INFO: Pod pod-with-prestop-http-hook still exists
Feb 12 13:44:05.934: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 12 13:44:05.944: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:44:05.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6812" for this suite.
Feb 12 13:44:28.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:44:28.136: INFO: namespace container-lifecycle-hook-6812 deletion completed in 22.146788953s

• [SLOW TEST:52.469 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:44:28.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 12 13:44:28.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb 12 13:44:28.389: INFO: stderr: ""
Feb 12 13:44:28.389: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:44:28.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-396" for this suite.
Feb 12 13:44:34.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:44:34.561: INFO: namespace kubectl-396 deletion completed in 6.161243777s

• [SLOW TEST:6.425 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:44:34.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 12 13:44:34.635: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:44:49.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2806" for this suite.
Feb 12 13:44:55.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:44:55.664: INFO: namespace init-container-2806 deletion completed in 6.201743272s

• [SLOW TEST:21.103 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:44:55.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0212 13:45:06.171457       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 12 13:45:06.171: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:45:06.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6955" for this suite.
Feb 12 13:45:12.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:45:12.646: INFO: namespace gc-6955 deletion completed in 6.467689331s

• [SLOW TEST:16.981 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:45:12.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-59d06856-ce50-4446-9df5-4d13ac161eab
STEP: Creating a pod to test consume secrets
Feb 12 13:45:12.803: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cd9f57c2-89c4-41a2-9703-b7d7c7d07f29" in namespace "projected-5668" to be "success or failure"
Feb 12 13:45:12.885: INFO: Pod "pod-projected-secrets-cd9f57c2-89c4-41a2-9703-b7d7c7d07f29": Phase="Pending", Reason="", readiness=false. Elapsed: 81.921256ms
Feb 12 13:45:14.898: INFO: Pod "pod-projected-secrets-cd9f57c2-89c4-41a2-9703-b7d7c7d07f29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095000803s
Feb 12 13:45:16.908: INFO: Pod "pod-projected-secrets-cd9f57c2-89c4-41a2-9703-b7d7c7d07f29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104328887s
Feb 12 13:45:18.918: INFO: Pod "pod-projected-secrets-cd9f57c2-89c4-41a2-9703-b7d7c7d07f29": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114167149s
Feb 12 13:45:20.926: INFO: Pod "pod-projected-secrets-cd9f57c2-89c4-41a2-9703-b7d7c7d07f29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.122729628s
STEP: Saw pod success
Feb 12 13:45:20.926: INFO: Pod "pod-projected-secrets-cd9f57c2-89c4-41a2-9703-b7d7c7d07f29" satisfied condition "success or failure"
Feb 12 13:45:20.929: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-cd9f57c2-89c4-41a2-9703-b7d7c7d07f29 container secret-volume-test: 
STEP: delete the pod
Feb 12 13:45:21.027: INFO: Waiting for pod pod-projected-secrets-cd9f57c2-89c4-41a2-9703-b7d7c7d07f29 to disappear
Feb 12 13:45:21.147: INFO: Pod pod-projected-secrets-cd9f57c2-89c4-41a2-9703-b7d7c7d07f29 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:45:21.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5668" for this suite.
Feb 12 13:45:27.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:45:27.407: INFO: namespace projected-5668 deletion completed in 6.250170524s

• [SLOW TEST:14.759 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:45:27.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 12 13:45:27.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-5662'
Feb 12 13:45:27.692: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 12 13:45:27.692: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb 12 13:45:27.739: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-n95vm]
Feb 12 13:45:27.739: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-n95vm" in namespace "kubectl-5662" to be "running and ready"
Feb 12 13:45:27.811: INFO: Pod "e2e-test-nginx-rc-n95vm": Phase="Pending", Reason="", readiness=false. Elapsed: 72.400403ms
Feb 12 13:45:29.881: INFO: Pod "e2e-test-nginx-rc-n95vm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142040287s
Feb 12 13:45:31.918: INFO: Pod "e2e-test-nginx-rc-n95vm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178909752s
Feb 12 13:45:33.936: INFO: Pod "e2e-test-nginx-rc-n95vm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.196856304s
Feb 12 13:45:35.945: INFO: Pod "e2e-test-nginx-rc-n95vm": Phase="Running", Reason="", readiness=true. Elapsed: 8.206367343s
Feb 12 13:45:35.946: INFO: Pod "e2e-test-nginx-rc-n95vm" satisfied condition "running and ready"
Feb 12 13:45:35.946: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-n95vm]
Feb 12 13:45:35.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-5662'
Feb 12 13:45:36.144: INFO: stderr: ""
Feb 12 13:45:36.144: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Feb 12 13:45:36.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-5662'
Feb 12 13:45:36.253: INFO: stderr: ""
Feb 12 13:45:36.254: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:45:36.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5662" for this suite.
Feb 12 13:46:00.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:46:00.408: INFO: namespace kubectl-5662 deletion completed in 24.151161308s

• [SLOW TEST:33.002 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:46:00.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-835819a4-445a-42a7-984b-6674928af6a9
STEP: Creating secret with name s-test-opt-upd-f4b0ad67-7ccf-4468-a90c-e109cec892c6
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-835819a4-445a-42a7-984b-6674928af6a9
STEP: Updating secret s-test-opt-upd-f4b0ad67-7ccf-4468-a90c-e109cec892c6
STEP: Creating secret with name s-test-opt-create-4e3095a0-a928-4004-996d-bfe679d45d2f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:47:22.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6833" for this suite.
Feb 12 13:47:44.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:47:44.330: INFO: namespace projected-6833 deletion completed in 22.168556536s

• [SLOW TEST:103.921 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:47:44.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-bb523f1b-efa1-4a38-81dd-63a7b8bde530
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-bb523f1b-efa1-4a38-81dd-63a7b8bde530
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:49:08.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6995" for this suite.
Feb 12 13:49:30.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:49:30.667: INFO: namespace configmap-6995 deletion completed in 22.151992904s

• [SLOW TEST:106.337 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:49:30.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 12 13:49:30.787: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4fe1e6d5-c09c-4953-9ece-187dae11203c" in namespace "projected-7699" to be "success or failure"
Feb 12 13:49:30.810: INFO: Pod "downwardapi-volume-4fe1e6d5-c09c-4953-9ece-187dae11203c": Phase="Pending", Reason="", readiness=false. Elapsed: 23.599521ms
Feb 12 13:49:32.824: INFO: Pod "downwardapi-volume-4fe1e6d5-c09c-4953-9ece-187dae11203c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036744047s
Feb 12 13:49:34.831: INFO: Pod "downwardapi-volume-4fe1e6d5-c09c-4953-9ece-187dae11203c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044347367s
Feb 12 13:49:36.843: INFO: Pod "downwardapi-volume-4fe1e6d5-c09c-4953-9ece-187dae11203c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056000429s
Feb 12 13:49:38.864: INFO: Pod "downwardapi-volume-4fe1e6d5-c09c-4953-9ece-187dae11203c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077433159s
Feb 12 13:49:40.893: INFO: Pod "downwardapi-volume-4fe1e6d5-c09c-4953-9ece-187dae11203c": Phase="Running", Reason="", readiness=true. Elapsed: 10.10615198s
Feb 12 13:49:43.048: INFO: Pod "downwardapi-volume-4fe1e6d5-c09c-4953-9ece-187dae11203c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.261104281s
STEP: Saw pod success
Feb 12 13:49:43.048: INFO: Pod "downwardapi-volume-4fe1e6d5-c09c-4953-9ece-187dae11203c" satisfied condition "success or failure"
Feb 12 13:49:43.059: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-4fe1e6d5-c09c-4953-9ece-187dae11203c container client-container: 
STEP: delete the pod
Feb 12 13:49:43.536: INFO: Waiting for pod downwardapi-volume-4fe1e6d5-c09c-4953-9ece-187dae11203c to disappear
Feb 12 13:49:43.542: INFO: Pod downwardapi-volume-4fe1e6d5-c09c-4953-9ece-187dae11203c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:49:43.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7699" for this suite.
Feb 12 13:49:49.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:49:49.663: INFO: namespace projected-7699 deletion completed in 6.115518762s

• [SLOW TEST:18.995 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:49:49.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-mf7c
STEP: Creating a pod to test atomic-volume-subpath
Feb 12 13:49:49.808: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-mf7c" in namespace "subpath-6062" to be "success or failure"
Feb 12 13:49:49.842: INFO: Pod "pod-subpath-test-configmap-mf7c": Phase="Pending", Reason="", readiness=false. Elapsed: 34.279785ms
Feb 12 13:49:51.853: INFO: Pod "pod-subpath-test-configmap-mf7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045220212s
Feb 12 13:49:53.877: INFO: Pod "pod-subpath-test-configmap-mf7c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068988568s
Feb 12 13:49:55.891: INFO: Pod "pod-subpath-test-configmap-mf7c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083725817s
Feb 12 13:49:57.919: INFO: Pod "pod-subpath-test-configmap-mf7c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.110834491s
Feb 12 13:49:59.927: INFO: Pod "pod-subpath-test-configmap-mf7c": Phase="Running", Reason="", readiness=true. Elapsed: 10.11888839s
Feb 12 13:50:01.936: INFO: Pod "pod-subpath-test-configmap-mf7c": Phase="Running", Reason="", readiness=true. Elapsed: 12.128044111s
Feb 12 13:50:03.950: INFO: Pod "pod-subpath-test-configmap-mf7c": Phase="Running", Reason="", readiness=true. Elapsed: 14.142075195s
Feb 12 13:50:05.959: INFO: Pod "pod-subpath-test-configmap-mf7c": Phase="Running", Reason="", readiness=true. Elapsed: 16.151156691s
Feb 12 13:50:07.966: INFO: Pod "pod-subpath-test-configmap-mf7c": Phase="Running", Reason="", readiness=true. Elapsed: 18.158049845s
Feb 12 13:50:09.974: INFO: Pod "pod-subpath-test-configmap-mf7c": Phase="Running", Reason="", readiness=true. Elapsed: 20.165920994s
Feb 12 13:50:11.983: INFO: Pod "pod-subpath-test-configmap-mf7c": Phase="Running", Reason="", readiness=true. Elapsed: 22.174969821s
Feb 12 13:50:13.990: INFO: Pod "pod-subpath-test-configmap-mf7c": Phase="Running", Reason="", readiness=true. Elapsed: 24.182532604s
Feb 12 13:50:15.997: INFO: Pod "pod-subpath-test-configmap-mf7c": Phase="Running", Reason="", readiness=true. Elapsed: 26.189585481s
Feb 12 13:50:18.006: INFO: Pod "pod-subpath-test-configmap-mf7c": Phase="Running", Reason="", readiness=true. Elapsed: 28.197934515s
Feb 12 13:50:20.013: INFO: Pod "pod-subpath-test-configmap-mf7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.204849795s
STEP: Saw pod success
Feb 12 13:50:20.013: INFO: Pod "pod-subpath-test-configmap-mf7c" satisfied condition "success or failure"
Feb 12 13:50:20.017: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-mf7c container test-container-subpath-configmap-mf7c: 
STEP: delete the pod
Feb 12 13:50:20.376: INFO: Waiting for pod pod-subpath-test-configmap-mf7c to disappear
Feb 12 13:50:20.394: INFO: Pod pod-subpath-test-configmap-mf7c no longer exists
STEP: Deleting pod pod-subpath-test-configmap-mf7c
Feb 12 13:50:20.394: INFO: Deleting pod "pod-subpath-test-configmap-mf7c" in namespace "subpath-6062"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:50:20.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6062" for this suite.
Feb 12 13:50:26.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:50:26.575: INFO: namespace subpath-6062 deletion completed in 6.168194891s

• [SLOW TEST:36.912 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:50:26.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-9f59ebdb-e73f-4d93-949a-e46fd206b9d1
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:50:26.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1657" for this suite.
Feb 12 13:50:32.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:50:32.897: INFO: namespace secrets-1657 deletion completed in 6.197401015s

• [SLOW TEST:6.322 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:50:32.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 12 13:50:33.033: INFO: Waiting up to 5m0s for pod "downward-api-bb0ccaab-b009-4b36-901a-522a75b8f540" in namespace "downward-api-369" to be "success or failure"
Feb 12 13:50:33.046: INFO: Pod "downward-api-bb0ccaab-b009-4b36-901a-522a75b8f540": Phase="Pending", Reason="", readiness=false. Elapsed: 13.059184ms
Feb 12 13:50:35.060: INFO: Pod "downward-api-bb0ccaab-b009-4b36-901a-522a75b8f540": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026473351s
Feb 12 13:50:37.068: INFO: Pod "downward-api-bb0ccaab-b009-4b36-901a-522a75b8f540": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035173123s
Feb 12 13:50:39.086: INFO: Pod "downward-api-bb0ccaab-b009-4b36-901a-522a75b8f540": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052625009s
Feb 12 13:50:41.094: INFO: Pod "downward-api-bb0ccaab-b009-4b36-901a-522a75b8f540": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060935297s
STEP: Saw pod success
Feb 12 13:50:41.094: INFO: Pod "downward-api-bb0ccaab-b009-4b36-901a-522a75b8f540" satisfied condition "success or failure"
Feb 12 13:50:41.098: INFO: Trying to get logs from node iruya-node pod downward-api-bb0ccaab-b009-4b36-901a-522a75b8f540 container dapi-container: 
STEP: delete the pod
Feb 12 13:50:41.199: INFO: Waiting for pod downward-api-bb0ccaab-b009-4b36-901a-522a75b8f540 to disappear
Feb 12 13:50:41.207: INFO: Pod downward-api-bb0ccaab-b009-4b36-901a-522a75b8f540 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:50:41.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-369" for this suite.
Feb 12 13:50:47.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:50:47.383: INFO: namespace downward-api-369 deletion completed in 6.170217211s

• [SLOW TEST:14.485 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:50:47.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 12 13:50:47.616: INFO: Waiting up to 5m0s for pod "downward-api-8c858265-d791-4aea-9b93-6d059d906702" in namespace "downward-api-5938" to be "success or failure"
Feb 12 13:50:47.753: INFO: Pod "downward-api-8c858265-d791-4aea-9b93-6d059d906702": Phase="Pending", Reason="", readiness=false. Elapsed: 136.957409ms
Feb 12 13:50:49.761: INFO: Pod "downward-api-8c858265-d791-4aea-9b93-6d059d906702": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14477196s
Feb 12 13:50:51.769: INFO: Pod "downward-api-8c858265-d791-4aea-9b93-6d059d906702": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152260719s
Feb 12 13:50:53.781: INFO: Pod "downward-api-8c858265-d791-4aea-9b93-6d059d906702": Phase="Pending", Reason="", readiness=false. Elapsed: 6.164504031s
Feb 12 13:50:55.799: INFO: Pod "downward-api-8c858265-d791-4aea-9b93-6d059d906702": Phase="Pending", Reason="", readiness=false. Elapsed: 8.182756223s
Feb 12 13:50:57.809: INFO: Pod "downward-api-8c858265-d791-4aea-9b93-6d059d906702": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.192959602s
STEP: Saw pod success
Feb 12 13:50:57.810: INFO: Pod "downward-api-8c858265-d791-4aea-9b93-6d059d906702" satisfied condition "success or failure"
Feb 12 13:50:57.818: INFO: Trying to get logs from node iruya-node pod downward-api-8c858265-d791-4aea-9b93-6d059d906702 container dapi-container: 
STEP: delete the pod
Feb 12 13:50:58.094: INFO: Waiting for pod downward-api-8c858265-d791-4aea-9b93-6d059d906702 to disappear
Feb 12 13:50:58.116: INFO: Pod downward-api-8c858265-d791-4aea-9b93-6d059d906702 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:50:58.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5938" for this suite.
Feb 12 13:51:04.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:51:04.439: INFO: namespace downward-api-5938 deletion completed in 6.223950304s

• [SLOW TEST:17.056 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:51:04.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Feb 12 13:51:04.510: INFO: Waiting up to 5m0s for pod "var-expansion-1ee9314c-ea27-4e5c-83d2-09d64997df25" in namespace "var-expansion-7863" to be "success or failure"
Feb 12 13:51:04.525: INFO: Pod "var-expansion-1ee9314c-ea27-4e5c-83d2-09d64997df25": Phase="Pending", Reason="", readiness=false. Elapsed: 14.849904ms
Feb 12 13:51:06.536: INFO: Pod "var-expansion-1ee9314c-ea27-4e5c-83d2-09d64997df25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026174776s
Feb 12 13:51:08.547: INFO: Pod "var-expansion-1ee9314c-ea27-4e5c-83d2-09d64997df25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037303832s
Feb 12 13:51:10.558: INFO: Pod "var-expansion-1ee9314c-ea27-4e5c-83d2-09d64997df25": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047540888s
Feb 12 13:51:12.580: INFO: Pod "var-expansion-1ee9314c-ea27-4e5c-83d2-09d64997df25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.069875357s
STEP: Saw pod success
Feb 12 13:51:12.580: INFO: Pod "var-expansion-1ee9314c-ea27-4e5c-83d2-09d64997df25" satisfied condition "success or failure"
Feb 12 13:51:12.592: INFO: Trying to get logs from node iruya-node pod var-expansion-1ee9314c-ea27-4e5c-83d2-09d64997df25 container dapi-container: 
STEP: delete the pod
Feb 12 13:51:12.755: INFO: Waiting for pod var-expansion-1ee9314c-ea27-4e5c-83d2-09d64997df25 to disappear
Feb 12 13:51:12.760: INFO: Pod var-expansion-1ee9314c-ea27-4e5c-83d2-09d64997df25 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:51:12.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7863" for this suite.
Feb 12 13:51:18.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:51:19.009: INFO: namespace var-expansion-7863 deletion completed in 6.241668632s

• [SLOW TEST:14.569 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:51:19.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Feb 12 13:51:19.158: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:51:19.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-419" for this suite.
Feb 12 13:51:25.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:51:25.462: INFO: namespace kubectl-419 deletion completed in 6.16765218s

• [SLOW TEST:6.453 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:51:25.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-762b1a5b-5659-4bc3-a50b-0e0f24bd068d
STEP: Creating secret with name secret-projected-all-test-volume-cb3f00da-6027-44a2-b223-5ef909a6c7cf
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb 12 13:51:25.594: INFO: Waiting up to 5m0s for pod "projected-volume-55f7f09a-60cd-484a-b5bd-a2ae234d0f61" in namespace "projected-4504" to be "success or failure"
Feb 12 13:51:25.599: INFO: Pod "projected-volume-55f7f09a-60cd-484a-b5bd-a2ae234d0f61": Phase="Pending", Reason="", readiness=false. Elapsed: 5.481183ms
Feb 12 13:51:27.609: INFO: Pod "projected-volume-55f7f09a-60cd-484a-b5bd-a2ae234d0f61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015466961s
Feb 12 13:51:29.618: INFO: Pod "projected-volume-55f7f09a-60cd-484a-b5bd-a2ae234d0f61": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023754027s
Feb 12 13:51:31.625: INFO: Pod "projected-volume-55f7f09a-60cd-484a-b5bd-a2ae234d0f61": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031300275s
Feb 12 13:51:33.641: INFO: Pod "projected-volume-55f7f09a-60cd-484a-b5bd-a2ae234d0f61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047600201s
STEP: Saw pod success
Feb 12 13:51:33.642: INFO: Pod "projected-volume-55f7f09a-60cd-484a-b5bd-a2ae234d0f61" satisfied condition "success or failure"
Feb 12 13:51:33.647: INFO: Trying to get logs from node iruya-node pod projected-volume-55f7f09a-60cd-484a-b5bd-a2ae234d0f61 container projected-all-volume-test: 
STEP: delete the pod
Feb 12 13:51:33.763: INFO: Waiting for pod projected-volume-55f7f09a-60cd-484a-b5bd-a2ae234d0f61 to disappear
Feb 12 13:51:33.767: INFO: Pod projected-volume-55f7f09a-60cd-484a-b5bd-a2ae234d0f61 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:51:33.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4504" for this suite.
Feb 12 13:51:39.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:51:39.998: INFO: namespace projected-4504 deletion completed in 6.217135535s

• [SLOW TEST:14.536 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:51:39.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb 12 13:51:40.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7280'
Feb 12 13:51:42.562: INFO: stderr: ""
Feb 12 13:51:42.562: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 12 13:51:43.635: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 13:51:43.636: INFO: Found 0 / 1
Feb 12 13:51:44.579: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 13:51:44.579: INFO: Found 0 / 1
Feb 12 13:51:45.573: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 13:51:45.573: INFO: Found 0 / 1
Feb 12 13:51:46.580: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 13:51:46.580: INFO: Found 0 / 1
Feb 12 13:51:47.770: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 13:51:47.770: INFO: Found 0 / 1
Feb 12 13:51:48.673: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 13:51:48.674: INFO: Found 0 / 1
Feb 12 13:51:49.696: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 13:51:49.696: INFO: Found 0 / 1
Feb 12 13:51:50.585: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 13:51:50.585: INFO: Found 0 / 1
Feb 12 13:51:51.589: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 13:51:51.589: INFO: Found 0 / 1
Feb 12 13:51:52.582: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 13:51:52.583: INFO: Found 1 / 1
Feb 12 13:51:52.583: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb 12 13:51:52.588: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 13:51:52.588: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 12 13:51:52.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-6pztl --namespace=kubectl-7280 -p {"metadata":{"annotations":{"x":"y"}}}'
Feb 12 13:51:52.777: INFO: stderr: ""
Feb 12 13:51:52.777: INFO: stdout: "pod/redis-master-6pztl patched\n"
STEP: checking annotations
Feb 12 13:51:52.792: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 13:51:52.792: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:51:52.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7280" for this suite.
Feb 12 13:52:14.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:52:15.053: INFO: namespace kubectl-7280 deletion completed in 22.256850467s

• [SLOW TEST:35.055 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
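The patch the test issues inline with `-p` can equally be kept in a file. A strategic-merge patch body equivalent to the JSON shown in the log above:

```yaml
# strategic-merge patch body; same effect as -p '{"metadata":{"annotations":{"x":"y"}}}'
metadata:
  annotations:
    x: "y"
```

Applied with something like `kubectl patch pod <pod-name> --namespace <ns> -p "$(cat patch.yaml)"`; the test then re-lists the pods matching the `app=redis` selector and checks the annotation is present on each.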
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:52:15.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 12 13:52:15.148: INFO: Waiting up to 5m0s for pod "downwardapi-volume-111ede7b-b818-44d1-8584-7b2ea3a9aa54" in namespace "downward-api-90" to be "success or failure"
Feb 12 13:52:15.153: INFO: Pod "downwardapi-volume-111ede7b-b818-44d1-8584-7b2ea3a9aa54": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08489ms
Feb 12 13:52:17.161: INFO: Pod "downwardapi-volume-111ede7b-b818-44d1-8584-7b2ea3a9aa54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012091085s
Feb 12 13:52:19.167: INFO: Pod "downwardapi-volume-111ede7b-b818-44d1-8584-7b2ea3a9aa54": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018916837s
Feb 12 13:52:21.256: INFO: Pod "downwardapi-volume-111ede7b-b818-44d1-8584-7b2ea3a9aa54": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107654063s
Feb 12 13:52:23.266: INFO: Pod "downwardapi-volume-111ede7b-b818-44d1-8584-7b2ea3a9aa54": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117929524s
Feb 12 13:52:25.274: INFO: Pod "downwardapi-volume-111ede7b-b818-44d1-8584-7b2ea3a9aa54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.125818438s
STEP: Saw pod success
Feb 12 13:52:25.274: INFO: Pod "downwardapi-volume-111ede7b-b818-44d1-8584-7b2ea3a9aa54" satisfied condition "success or failure"
Feb 12 13:52:25.279: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-111ede7b-b818-44d1-8584-7b2ea3a9aa54 container client-container: 
STEP: delete the pod
Feb 12 13:52:25.354: INFO: Waiting for pod downwardapi-volume-111ede7b-b818-44d1-8584-7b2ea3a9aa54 to disappear
Feb 12 13:52:25.358: INFO: Pod downwardapi-volume-111ede7b-b818-44d1-8584-7b2ea3a9aa54 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:52:25.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-90" for this suite.
Feb 12 13:52:31.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:52:31.505: INFO: namespace downward-api-90 deletion completed in 6.139866799s

• [SLOW TEST:16.452 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
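`defaultMode` on a downwardAPI volume sets the permission bits of every projected file that does not specify its own `mode`, which is what this test asserts. A sketch of the kind of pod it creates (names and the mode value are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-demo     # hypothetical name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo"]   # shows the applied file mode
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400                  # files projected read-only for the owner
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
  restartPolicy: Never
```

The `[LinuxOnly]` tag applies because POSIX file modes are not meaningful on Windows nodes.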
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:52:31.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-240e239b-148f-491d-adec-7f784f344aac in namespace container-probe-901
Feb 12 13:52:41.694: INFO: Started pod test-webserver-240e239b-148f-491d-adec-7f784f344aac in namespace container-probe-901
STEP: checking the pod's current state and verifying that restartCount is present
Feb 12 13:52:41.698: INFO: Initial restart count of pod test-webserver-240e239b-148f-491d-adec-7f784f344aac is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:56:43.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-901" for this suite.
Feb 12 13:56:49.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:56:49.398: INFO: namespace container-probe-901 deletion completed in 6.186403543s

• [SLOW TEST:257.893 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
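This probe test runs a webserver pod for roughly four minutes (note the ~4-minute gap between 13:52:41 and 13:56:43 in the log) and asserts `restartCount` never moves past its initial value of 0. A pod with an equivalent `/healthz` HTTP liveness probe might be declared as follows (image, port, and timings are assumptions; the suite uses its own test-webserver image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver-demo              # hypothetical name
spec:
  containers:
  - name: test-webserver
    image: nginx                         # any server that answers /healthz with 200 works
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 10
      failureThreshold: 3                # kubelet restarts the container only after 3 consecutive failures
```

As long as the probe keeps returning success, the kubelet never restarts the container, so `restartCount` stays 0.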
SSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:56:49.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5760.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5760.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5760.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5760.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 12 13:57:01.553: INFO: File wheezy_udp@dns-test-service-3.dns-5760.svc.cluster.local from pod  dns-5760/dns-test-f2d7e5a6-6ff7-4e05-86f1-9a1091528059 contains '' instead of 'foo.example.com.'
Feb 12 13:57:01.560: INFO: File jessie_udp@dns-test-service-3.dns-5760.svc.cluster.local from pod  dns-5760/dns-test-f2d7e5a6-6ff7-4e05-86f1-9a1091528059 contains '' instead of 'foo.example.com.'
Feb 12 13:57:01.560: INFO: Lookups using dns-5760/dns-test-f2d7e5a6-6ff7-4e05-86f1-9a1091528059 failed for: [wheezy_udp@dns-test-service-3.dns-5760.svc.cluster.local jessie_udp@dns-test-service-3.dns-5760.svc.cluster.local]

Feb 12 13:57:06.591: INFO: DNS probes using dns-test-f2d7e5a6-6ff7-4e05-86f1-9a1091528059 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5760.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5760.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5760.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5760.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 12 13:57:20.753: INFO: File wheezy_udp@dns-test-service-3.dns-5760.svc.cluster.local from pod  dns-5760/dns-test-8f74d900-4247-4129-8741-9aca1e18c22c contains '' instead of 'bar.example.com.'
Feb 12 13:57:20.772: INFO: File jessie_udp@dns-test-service-3.dns-5760.svc.cluster.local from pod  dns-5760/dns-test-8f74d900-4247-4129-8741-9aca1e18c22c contains '' instead of 'bar.example.com.'
Feb 12 13:57:20.772: INFO: Lookups using dns-5760/dns-test-8f74d900-4247-4129-8741-9aca1e18c22c failed for: [wheezy_udp@dns-test-service-3.dns-5760.svc.cluster.local jessie_udp@dns-test-service-3.dns-5760.svc.cluster.local]

Feb 12 13:57:25.785: INFO: File wheezy_udp@dns-test-service-3.dns-5760.svc.cluster.local from pod  dns-5760/dns-test-8f74d900-4247-4129-8741-9aca1e18c22c contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 12 13:57:25.791: INFO: File jessie_udp@dns-test-service-3.dns-5760.svc.cluster.local from pod  dns-5760/dns-test-8f74d900-4247-4129-8741-9aca1e18c22c contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 12 13:57:25.791: INFO: Lookups using dns-5760/dns-test-8f74d900-4247-4129-8741-9aca1e18c22c failed for: [wheezy_udp@dns-test-service-3.dns-5760.svc.cluster.local jessie_udp@dns-test-service-3.dns-5760.svc.cluster.local]

Feb 12 13:57:30.785: INFO: File wheezy_udp@dns-test-service-3.dns-5760.svc.cluster.local from pod  dns-5760/dns-test-8f74d900-4247-4129-8741-9aca1e18c22c contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 12 13:57:30.791: INFO: File jessie_udp@dns-test-service-3.dns-5760.svc.cluster.local from pod  dns-5760/dns-test-8f74d900-4247-4129-8741-9aca1e18c22c contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 12 13:57:30.791: INFO: Lookups using dns-5760/dns-test-8f74d900-4247-4129-8741-9aca1e18c22c failed for: [wheezy_udp@dns-test-service-3.dns-5760.svc.cluster.local jessie_udp@dns-test-service-3.dns-5760.svc.cluster.local]

Feb 12 13:57:35.791: INFO: File wheezy_udp@dns-test-service-3.dns-5760.svc.cluster.local from pod  dns-5760/dns-test-8f74d900-4247-4129-8741-9aca1e18c22c contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 12 13:57:35.815: INFO: File jessie_udp@dns-test-service-3.dns-5760.svc.cluster.local from pod  dns-5760/dns-test-8f74d900-4247-4129-8741-9aca1e18c22c contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 12 13:57:35.815: INFO: Lookups using dns-5760/dns-test-8f74d900-4247-4129-8741-9aca1e18c22c failed for: [wheezy_udp@dns-test-service-3.dns-5760.svc.cluster.local jessie_udp@dns-test-service-3.dns-5760.svc.cluster.local]

Feb 12 13:57:40.834: INFO: DNS probes using dns-test-8f74d900-4247-4129-8741-9aca1e18c22c succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5760.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5760.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5760.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5760.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 12 13:57:57.187: INFO: File wheezy_udp@dns-test-service-3.dns-5760.svc.cluster.local from pod  dns-5760/dns-test-dea15795-2d66-4818-8ea6-62388e2b5d8f contains '' instead of '10.96.91.32'
Feb 12 13:57:57.193: INFO: File jessie_udp@dns-test-service-3.dns-5760.svc.cluster.local from pod  dns-5760/dns-test-dea15795-2d66-4818-8ea6-62388e2b5d8f contains '' instead of '10.96.91.32'
Feb 12 13:57:57.193: INFO: Lookups using dns-5760/dns-test-dea15795-2d66-4818-8ea6-62388e2b5d8f failed for: [wheezy_udp@dns-test-service-3.dns-5760.svc.cluster.local jessie_udp@dns-test-service-3.dns-5760.svc.cluster.local]

Feb 12 13:58:02.218: INFO: DNS probes using dns-test-dea15795-2d66-4818-8ea6-62388e2b5d8f succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:58:02.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5760" for this suite.
Feb 12 13:58:10.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:58:10.790: INFO: namespace dns-5760 deletion completed in 8.17390265s

• [SLOW TEST:81.391 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
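The three probe phases in the log correspond to three states of the same service: a CNAME to foo.example.com, a CNAME to bar.example.com after the externalName is changed, and finally an A record (10.96.91.32) once the type is switched to ClusterIP. The starting manifest would be roughly:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3               # the name queried in the dig loops above
  namespace: dns-5760
spec:
  type: ExternalName
  externalName: foo.example.com          # cluster DNS serves this as a CNAME
```

Updating `spec.externalName` to bar.example.com reproduces the second phase; changing `spec.type` to ClusterIP (and dropping `externalName`) reproduces the third. The intermediate failures in the log are expected: the probers poll until the DNS caches catch up with each change.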
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:58:10.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 12 13:58:10.876: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9afd7717-6cb3-4b98-85f7-fee05b1efd93" in namespace "downward-api-9591" to be "success or failure"
Feb 12 13:58:10.968: INFO: Pod "downwardapi-volume-9afd7717-6cb3-4b98-85f7-fee05b1efd93": Phase="Pending", Reason="", readiness=false. Elapsed: 91.489541ms
Feb 12 13:58:12.980: INFO: Pod "downwardapi-volume-9afd7717-6cb3-4b98-85f7-fee05b1efd93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102983527s
Feb 12 13:58:14.987: INFO: Pod "downwardapi-volume-9afd7717-6cb3-4b98-85f7-fee05b1efd93": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110533113s
Feb 12 13:58:16.997: INFO: Pod "downwardapi-volume-9afd7717-6cb3-4b98-85f7-fee05b1efd93": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119806737s
Feb 12 13:58:19.035: INFO: Pod "downwardapi-volume-9afd7717-6cb3-4b98-85f7-fee05b1efd93": Phase="Pending", Reason="", readiness=false. Elapsed: 8.158573316s
Feb 12 13:58:21.047: INFO: Pod "downwardapi-volume-9afd7717-6cb3-4b98-85f7-fee05b1efd93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.169777233s
STEP: Saw pod success
Feb 12 13:58:21.047: INFO: Pod "downwardapi-volume-9afd7717-6cb3-4b98-85f7-fee05b1efd93" satisfied condition "success or failure"
Feb 12 13:58:21.051: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-9afd7717-6cb3-4b98-85f7-fee05b1efd93 container client-container: 
STEP: delete the pod
Feb 12 13:58:21.178: INFO: Waiting for pod downwardapi-volume-9afd7717-6cb3-4b98-85f7-fee05b1efd93 to disappear
Feb 12 13:58:21.184: INFO: Pod downwardapi-volume-9afd7717-6cb3-4b98-85f7-fee05b1efd93 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:58:21.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9591" for this suite.
Feb 12 13:58:27.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:58:27.406: INFO: namespace downward-api-9591 deletion completed in 6.218019961s

• [SLOW TEST:16.615 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
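`resourceFieldRef` in a downwardAPI volume exposes a container's resource requests as files, which is the mechanism this test exercises. A sketch matching the test's shape (the request value, paths, and names are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memory-request-demo  # hypothetical name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
          divisor: 1Mi                   # scales the value; the file holds the request in MiB
  restartPolicy: Never
```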
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:58:27.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-b700bb16-733d-45da-b886-e84292ec4e35
STEP: Creating a pod to test consume configMaps
Feb 12 13:58:27.610: INFO: Waiting up to 5m0s for pod "pod-configmaps-9e8236ab-5ca1-40b7-9d28-5507e156555a" in namespace "configmap-7104" to be "success or failure"
Feb 12 13:58:27.627: INFO: Pod "pod-configmaps-9e8236ab-5ca1-40b7-9d28-5507e156555a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.198846ms
Feb 12 13:58:29.635: INFO: Pod "pod-configmaps-9e8236ab-5ca1-40b7-9d28-5507e156555a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025044326s
Feb 12 13:58:31.650: INFO: Pod "pod-configmaps-9e8236ab-5ca1-40b7-9d28-5507e156555a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039880852s
Feb 12 13:58:33.660: INFO: Pod "pod-configmaps-9e8236ab-5ca1-40b7-9d28-5507e156555a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050015817s
Feb 12 13:58:35.668: INFO: Pod "pod-configmaps-9e8236ab-5ca1-40b7-9d28-5507e156555a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05786489s
Feb 12 13:58:37.677: INFO: Pod "pod-configmaps-9e8236ab-5ca1-40b7-9d28-5507e156555a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.066893598s
STEP: Saw pod success
Feb 12 13:58:37.677: INFO: Pod "pod-configmaps-9e8236ab-5ca1-40b7-9d28-5507e156555a" satisfied condition "success or failure"
Feb 12 13:58:37.683: INFO: Trying to get logs from node iruya-node pod pod-configmaps-9e8236ab-5ca1-40b7-9d28-5507e156555a container configmap-volume-test: 
STEP: delete the pod
Feb 12 13:58:37.780: INFO: Waiting for pod pod-configmaps-9e8236ab-5ca1-40b7-9d28-5507e156555a to disappear
Feb 12 13:58:37.789: INFO: Pod pod-configmaps-9e8236ab-5ca1-40b7-9d28-5507e156555a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:58:37.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7104" for this suite.
Feb 12 13:58:43.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:58:44.027: INFO: namespace configmap-7104 deletion completed in 6.221018317s

• [SLOW TEST:16.621 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
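The configMap-volume test creates a configMap and a pod that mounts it, then reads the data back out of the filesystem. A minimal pair of manifests with the same shape (names and data are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-demo       # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo              # hypothetical name
spec:
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-demo   # each data key becomes a file in the mount
  restartPolicy: Never
```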
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:58:44.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Feb 12 13:58:44.093: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix184574266/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:58:44.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3820" for this suite.
Feb 12 13:58:50.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:58:50.306: INFO: namespace kubectl-3820 deletion completed in 6.144523474s

• [SLOW TEST:6.278 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
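`kubectl proxy --unix-socket` serves the API over a local Unix domain socket instead of a TCP port, which is what the test's `/api/` retrieval goes through. A CLI sketch of the same flow, against a live cluster (socket path is illustrative):

```
# start the proxy on a unix socket instead of a TCP port
kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &

# query the API server through the socket
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
```

The test simply asserts that the `/api/` response arrives through the socket.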
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:58:50.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 12 13:59:02.997: INFO: Successfully updated pod "annotationupdate3eaeacd5-37e5-4a75-a6f5-5b67cbc730fa"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 13:59:04.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3445" for this suite.
Feb 12 13:59:43.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:59:43.210: INFO: namespace projected-3445 deletion completed in 38.181953183s

• [SLOW TEST:52.903 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
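The annotation-update test relies on the downward API projecting `metadata.annotations` into a volume and on the kubelet refreshing that projection when the pod object changes. A sketch of the volume definition involved (names are assumptions):

```yaml
volumes:
- name: podinfo
  projected:
    sources:
    - downwardAPI:
        items:
        - path: annotations
          fieldRef:
            fieldPath: metadata.annotations   # refreshed by the kubelet after the pod is updated
```

After the test updates the pod's annotations, it polls the mounted file until the new values appear; the delay before "Successfully updated pod" reflects the kubelet's periodic sync rather than an instantaneous write.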
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 13:59:43.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb 12 13:59:44.209: INFO: Pod name wrapped-volume-race-41fca42d-ed81-4efa-8664-7d3cbc57738c: Found 0 pods out of 5
Feb 12 13:59:49.275: INFO: Pod name wrapped-volume-race-41fca42d-ed81-4efa-8664-7d3cbc57738c: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-41fca42d-ed81-4efa-8664-7d3cbc57738c in namespace emptydir-wrapper-9795, will wait for the garbage collector to delete the pods
Feb 12 14:00:17.396: INFO: Deleting ReplicationController wrapped-volume-race-41fca42d-ed81-4efa-8664-7d3cbc57738c took: 16.108205ms
Feb 12 14:00:17.797: INFO: Terminating ReplicationController wrapped-volume-race-41fca42d-ed81-4efa-8664-7d3cbc57738c pods took: 400.481672ms
STEP: Creating RC which spawns configmap-volume pods
Feb 12 14:01:07.092: INFO: Pod name wrapped-volume-race-4e1ad303-8053-40f3-a398-badeb65c3e05: Found 0 pods out of 5
Feb 12 14:01:12.104: INFO: Pod name wrapped-volume-race-4e1ad303-8053-40f3-a398-badeb65c3e05: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-4e1ad303-8053-40f3-a398-badeb65c3e05 in namespace emptydir-wrapper-9795, will wait for the garbage collector to delete the pods
Feb 12 14:01:44.218: INFO: Deleting ReplicationController wrapped-volume-race-4e1ad303-8053-40f3-a398-badeb65c3e05 took: 16.073488ms
Feb 12 14:01:44.618: INFO: Terminating ReplicationController wrapped-volume-race-4e1ad303-8053-40f3-a398-badeb65c3e05 pods took: 400.524067ms
STEP: Creating RC which spawns configmap-volume pods
Feb 12 14:02:37.061: INFO: Pod name wrapped-volume-race-f6e5884a-ec7f-4f03-b62f-e1ab2848f6a8: Found 0 pods out of 5
Feb 12 14:02:42.079: INFO: Pod name wrapped-volume-race-f6e5884a-ec7f-4f03-b62f-e1ab2848f6a8: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-f6e5884a-ec7f-4f03-b62f-e1ab2848f6a8 in namespace emptydir-wrapper-9795, will wait for the garbage collector to delete the pods
Feb 12 14:03:12.193: INFO: Deleting ReplicationController wrapped-volume-race-f6e5884a-ec7f-4f03-b62f-e1ab2848f6a8 took: 17.193738ms
Feb 12 14:03:12.594: INFO: Terminating ReplicationController wrapped-volume-race-f6e5884a-ec7f-4f03-b62f-e1ab2848f6a8 pods took: 400.984523ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:03:58.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9795" for this suite.
Feb 12 14:04:08.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:04:08.560: INFO: namespace emptydir-wrapper-9795 deletion completed in 10.159393556s

• [SLOW TEST:265.349 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
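The wrapper-volume race test repeatedly spawns a ReplicationController whose pods each mount many configMap volumes at once, then tears it down, checking that concurrent mounts do not race. An abbreviated sketch of such an RC (names are illustrative; the suite attaches one volume per configMap, 50 in total):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: wrapped-volume-race-demo         # hypothetical name
spec:
  replicas: 5
  selector:
    name: wrapped-volume-race-demo
  template:
    metadata:
      labels:
        name: wrapped-volume-race-demo
    spec:
      containers:
      - name: test-container
        image: busybox
        command: ["sleep", "10000"]
        volumeMounts:
        - name: racey-configmap-0
          mountPath: /etc/config-0
        # ...one mount per configmap, up to /etc/config-49
      volumes:
      - name: racey-configmap-0
        configMap:
          name: racey-configmap-0
      # ...one volume per configmap, 50 in total
```

The three create/delete cycles visible in the log give the race a chance to surface under repeated concurrent mount and unmount activity.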
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:04:08.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-9dd14e01-4f27-4c3e-bc01-0a152d6962a8
STEP: Creating configMap with name cm-test-opt-upd-2f4fddee-827b-41ad-8046-2e66394bfe31
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-9dd14e01-4f27-4c3e-bc01-0a152d6962a8
STEP: Updating configmap cm-test-opt-upd-2f4fddee-827b-41ad-8046-2e66394bfe31
STEP: Creating configMap with name cm-test-opt-create-58a45cff-f346-4297-9bd0-b5384f3fc436
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:05:41.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4543" for this suite.
Feb 12 14:06:03.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:06:03.150: INFO: namespace projected-4543 deletion completed in 22.117921094s

• [SLOW TEST:114.590 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:06:03.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-6326
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 12 14:06:03.194: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 12 14:06:43.963: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-6326 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 14:06:43.963: INFO: >>> kubeConfig: /root/.kube/config
I0212 14:06:44.060288       8 log.go:172] (0xc000ba2a50) (0xc00221dea0) Create stream
I0212 14:06:44.060412       8 log.go:172] (0xc000ba2a50) (0xc00221dea0) Stream added, broadcasting: 1
I0212 14:06:44.069784       8 log.go:172] (0xc000ba2a50) Reply frame received for 1
I0212 14:06:44.069841       8 log.go:172] (0xc000ba2a50) (0xc001c75ea0) Create stream
I0212 14:06:44.069853       8 log.go:172] (0xc000ba2a50) (0xc001c75ea0) Stream added, broadcasting: 3
I0212 14:06:44.071638       8 log.go:172] (0xc000ba2a50) Reply frame received for 3
I0212 14:06:44.071665       8 log.go:172] (0xc000ba2a50) (0xc00038d7c0) Create stream
I0212 14:06:44.071675       8 log.go:172] (0xc000ba2a50) (0xc00038d7c0) Stream added, broadcasting: 5
I0212 14:06:44.073949       8 log.go:172] (0xc000ba2a50) Reply frame received for 5
I0212 14:06:44.267168       8 log.go:172] (0xc000ba2a50) Data frame received for 3
I0212 14:06:44.267215       8 log.go:172] (0xc001c75ea0) (3) Data frame handling
I0212 14:06:44.267232       8 log.go:172] (0xc001c75ea0) (3) Data frame sent
I0212 14:06:44.412798       8 log.go:172] (0xc000ba2a50) (0xc001c75ea0) Stream removed, broadcasting: 3
I0212 14:06:44.412956       8 log.go:172] (0xc000ba2a50) Data frame received for 1
I0212 14:06:44.413052       8 log.go:172] (0xc00221dea0) (1) Data frame handling
I0212 14:06:44.413073       8 log.go:172] (0xc00221dea0) (1) Data frame sent
I0212 14:06:44.413332       8 log.go:172] (0xc000ba2a50) (0xc00221dea0) Stream removed, broadcasting: 1
I0212 14:06:44.413372       8 log.go:172] (0xc000ba2a50) (0xc00038d7c0) Stream removed, broadcasting: 5
I0212 14:06:44.413402       8 log.go:172] (0xc000ba2a50) Go away received
I0212 14:06:44.413521       8 log.go:172] (0xc000ba2a50) (0xc00221dea0) Stream removed, broadcasting: 1
I0212 14:06:44.413537       8 log.go:172] (0xc000ba2a50) (0xc001c75ea0) Stream removed, broadcasting: 3
I0212 14:06:44.413547       8 log.go:172] (0xc000ba2a50) (0xc00038d7c0) Stream removed, broadcasting: 5
Feb 12 14:06:44.413: INFO: Waiting for endpoints: map[]
Feb 12 14:06:44.420: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-6326 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 14:06:44.420: INFO: >>> kubeConfig: /root/.kube/config
I0212 14:06:44.501801       8 log.go:172] (0xc001b92160) (0xc00038dc20) Create stream
I0212 14:06:44.502031       8 log.go:172] (0xc001b92160) (0xc00038dc20) Stream added, broadcasting: 1
I0212 14:06:44.513956       8 log.go:172] (0xc001b92160) Reply frame received for 1
I0212 14:06:44.514019       8 log.go:172] (0xc001b92160) (0xc002922280) Create stream
I0212 14:06:44.514028       8 log.go:172] (0xc001b92160) (0xc002922280) Stream added, broadcasting: 3
I0212 14:06:44.515797       8 log.go:172] (0xc001b92160) Reply frame received for 3
I0212 14:06:44.515830       8 log.go:172] (0xc001b92160) (0xc0027580a0) Create stream
I0212 14:06:44.515837       8 log.go:172] (0xc001b92160) (0xc0027580a0) Stream added, broadcasting: 5
I0212 14:06:44.521168       8 log.go:172] (0xc001b92160) Reply frame received for 5
I0212 14:06:44.684897       8 log.go:172] (0xc001b92160) Data frame received for 3
I0212 14:06:44.684969       8 log.go:172] (0xc002922280) (3) Data frame handling
I0212 14:06:44.684989       8 log.go:172] (0xc002922280) (3) Data frame sent
I0212 14:06:44.856683       8 log.go:172] (0xc001b92160) (0xc002922280) Stream removed, broadcasting: 3
I0212 14:06:44.856899       8 log.go:172] (0xc001b92160) Data frame received for 1
I0212 14:06:44.856963       8 log.go:172] (0xc00038dc20) (1) Data frame handling
I0212 14:06:44.856980       8 log.go:172] (0xc00038dc20) (1) Data frame sent
I0212 14:06:44.856986       8 log.go:172] (0xc001b92160) (0xc00038dc20) Stream removed, broadcasting: 1
I0212 14:06:44.857787       8 log.go:172] (0xc001b92160) (0xc0027580a0) Stream removed, broadcasting: 5
I0212 14:06:44.857871       8 log.go:172] (0xc001b92160) Go away received
I0212 14:06:44.858050       8 log.go:172] (0xc001b92160) (0xc00038dc20) Stream removed, broadcasting: 1
I0212 14:06:44.858145       8 log.go:172] (0xc001b92160) (0xc002922280) Stream removed, broadcasting: 3
I0212 14:06:44.858156       8 log.go:172] (0xc001b92160) (0xc0027580a0) Stream removed, broadcasting: 5
Feb 12 14:06:44.858: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:06:44.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6326" for this suite.
Feb 12 14:07:08.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:07:09.063: INFO: namespace pod-network-test-6326 deletion completed in 24.182572057s

• [SLOW TEST:65.913 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
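The `curl` probes in the spec above hit the test container's `/dial` endpoint, which fans a request out to a target pod and reports what it received; the query parameters (`request=hostName&protocol=http&host=...&port=...&tries=1`) are taken directly from the log. A sketch of how such a probe URL is assembled; `dialURL` is a hypothetical helper for illustration, not framework code:

```go
package main

import (
	"fmt"
	"net/url"
)

// dialURL builds the intra-pod probe URL curled from the host test container:
// ask the pod at proxyIP:proxyPort to dial host:port over HTTP and echo back
// the target's hostname. Query keys match those seen in the log.
func dialURL(proxyIP string, proxyPort int, host string, port, tries int) string {
	q := url.Values{}
	q.Set("request", "hostName")
	q.Set("protocol", "http")
	q.Set("host", host)
	q.Set("port", fmt.Sprint(port))
	q.Set("tries", fmt.Sprint(tries))
	// Encode sorts keys alphabetically, so the output is deterministic.
	return fmt.Sprintf("http://%s:%d/dial?%s", proxyIP, proxyPort, q.Encode())
}

func main() {
	fmt.Println(dialURL("10.44.0.2", 8080, "10.44.0.1", 8080, 1))
}
```

The test passes once every target pod's hostname has been observed, which is why the log ends each probe with `Waiting for endpoints: map[]` (no endpoints left outstanding).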
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:07:09.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 12 14:07:09.232: INFO: Waiting up to 5m0s for pod "pod-e397c2a1-da5d-4dba-b224-f0240cb59690" in namespace "emptydir-4800" to be "success or failure"
Feb 12 14:07:09.245: INFO: Pod "pod-e397c2a1-da5d-4dba-b224-f0240cb59690": Phase="Pending", Reason="", readiness=false. Elapsed: 12.557212ms
Feb 12 14:07:11.250: INFO: Pod "pod-e397c2a1-da5d-4dba-b224-f0240cb59690": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017844703s
Feb 12 14:07:13.260: INFO: Pod "pod-e397c2a1-da5d-4dba-b224-f0240cb59690": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027823709s
Feb 12 14:07:15.268: INFO: Pod "pod-e397c2a1-da5d-4dba-b224-f0240cb59690": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035781002s
Feb 12 14:07:17.277: INFO: Pod "pod-e397c2a1-da5d-4dba-b224-f0240cb59690": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044863646s
Feb 12 14:07:19.289: INFO: Pod "pod-e397c2a1-da5d-4dba-b224-f0240cb59690": Phase="Pending", Reason="", readiness=false. Elapsed: 10.057399308s
Feb 12 14:07:21.295: INFO: Pod "pod-e397c2a1-da5d-4dba-b224-f0240cb59690": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.062556171s
STEP: Saw pod success
Feb 12 14:07:21.295: INFO: Pod "pod-e397c2a1-da5d-4dba-b224-f0240cb59690" satisfied condition "success or failure"
Feb 12 14:07:21.299: INFO: Trying to get logs from node iruya-node pod pod-e397c2a1-da5d-4dba-b224-f0240cb59690 container test-container: 
STEP: delete the pod
Feb 12 14:07:21.347: INFO: Waiting for pod pod-e397c2a1-da5d-4dba-b224-f0240cb59690 to disappear
Feb 12 14:07:21.351: INFO: Pod pod-e397c2a1-da5d-4dba-b224-f0240cb59690 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:07:21.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4800" for this suite.
Feb 12 14:07:27.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:07:27.553: INFO: namespace emptydir-4800 deletion completed in 6.167535985s

• [SLOW TEST:18.488 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:07:27.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 12 14:07:27.647: INFO: Creating ReplicaSet my-hostname-basic-ad932733-815d-43cd-8f2b-26d69809d96f
Feb 12 14:07:27.710: INFO: Pod name my-hostname-basic-ad932733-815d-43cd-8f2b-26d69809d96f: Found 0 pods out of 1
Feb 12 14:07:32.768: INFO: Pod name my-hostname-basic-ad932733-815d-43cd-8f2b-26d69809d96f: Found 1 pods out of 1
Feb 12 14:07:32.768: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-ad932733-815d-43cd-8f2b-26d69809d96f" is running
Feb 12 14:07:38.786: INFO: Pod "my-hostname-basic-ad932733-815d-43cd-8f2b-26d69809d96f-dhtd9" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-12 14:07:27 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-12 14:07:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-ad932733-815d-43cd-8f2b-26d69809d96f]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-12 14:07:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-ad932733-815d-43cd-8f2b-26d69809d96f]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-12 14:07:27 +0000 UTC Reason: Message:}])
Feb 12 14:07:38.786: INFO: Trying to dial the pod
Feb 12 14:07:43.833: INFO: Controller my-hostname-basic-ad932733-815d-43cd-8f2b-26d69809d96f: Got expected result from replica 1 [my-hostname-basic-ad932733-815d-43cd-8f2b-26d69809d96f-dhtd9]: "my-hostname-basic-ad932733-815d-43cd-8f2b-26d69809d96f-dhtd9", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:07:43.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5305" for this suite.
Feb 12 14:07:49.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:07:49.978: INFO: namespace replicaset-5305 deletion completed in 6.137226436s

• [SLOW TEST:22.425 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:07:49.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-1bd1303b-9e30-496f-b6fe-7b98df7bd9cd
STEP: Creating a pod to test consume configMaps
Feb 12 14:07:50.108: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-01147d64-63d4-44e7-bb26-73421d86c4ad" in namespace "projected-1257" to be "success or failure"
Feb 12 14:07:50.136: INFO: Pod "pod-projected-configmaps-01147d64-63d4-44e7-bb26-73421d86c4ad": Phase="Pending", Reason="", readiness=false. Elapsed: 28.13871ms
Feb 12 14:07:52.152: INFO: Pod "pod-projected-configmaps-01147d64-63d4-44e7-bb26-73421d86c4ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043640606s
Feb 12 14:07:54.166: INFO: Pod "pod-projected-configmaps-01147d64-63d4-44e7-bb26-73421d86c4ad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058185551s
Feb 12 14:07:56.179: INFO: Pod "pod-projected-configmaps-01147d64-63d4-44e7-bb26-73421d86c4ad": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070887199s
Feb 12 14:07:58.186: INFO: Pod "pod-projected-configmaps-01147d64-63d4-44e7-bb26-73421d86c4ad": Phase="Pending", Reason="", readiness=false. Elapsed: 8.07823751s
Feb 12 14:08:00.196: INFO: Pod "pod-projected-configmaps-01147d64-63d4-44e7-bb26-73421d86c4ad": Phase="Pending", Reason="", readiness=false. Elapsed: 10.087424797s
Feb 12 14:08:02.207: INFO: Pod "pod-projected-configmaps-01147d64-63d4-44e7-bb26-73421d86c4ad": Phase="Pending", Reason="", readiness=false. Elapsed: 12.098895869s
Feb 12 14:08:04.218: INFO: Pod "pod-projected-configmaps-01147d64-63d4-44e7-bb26-73421d86c4ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.109503084s
STEP: Saw pod success
Feb 12 14:08:04.218: INFO: Pod "pod-projected-configmaps-01147d64-63d4-44e7-bb26-73421d86c4ad" satisfied condition "success or failure"
Feb 12 14:08:04.222: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-01147d64-63d4-44e7-bb26-73421d86c4ad container projected-configmap-volume-test: 
STEP: delete the pod
Feb 12 14:08:04.280: INFO: Waiting for pod pod-projected-configmaps-01147d64-63d4-44e7-bb26-73421d86c4ad to disappear
Feb 12 14:08:04.388: INFO: Pod pod-projected-configmaps-01147d64-63d4-44e7-bb26-73421d86c4ad no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:08:04.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1257" for this suite.
Feb 12 14:08:10.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:08:10.576: INFO: namespace projected-1257 deletion completed in 6.181262471s

• [SLOW TEST:20.597 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:08:10.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3063.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-3063.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3063.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3063.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-3063.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3063.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 12 14:08:24.789: INFO: Unable to read wheezy_udp@PodARecord from pod dns-3063/dns-test-abfa7d64-7c86-4214-8610-98ba292092ea: the server could not find the requested resource (get pods dns-test-abfa7d64-7c86-4214-8610-98ba292092ea)
Feb 12 14:08:24.794: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3063/dns-test-abfa7d64-7c86-4214-8610-98ba292092ea: the server could not find the requested resource (get pods dns-test-abfa7d64-7c86-4214-8610-98ba292092ea)
Feb 12 14:08:24.801: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-3063.svc.cluster.local from pod dns-3063/dns-test-abfa7d64-7c86-4214-8610-98ba292092ea: the server could not find the requested resource (get pods dns-test-abfa7d64-7c86-4214-8610-98ba292092ea)
Feb 12 14:08:24.807: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-3063/dns-test-abfa7d64-7c86-4214-8610-98ba292092ea: the server could not find the requested resource (get pods dns-test-abfa7d64-7c86-4214-8610-98ba292092ea)
Feb 12 14:08:24.811: INFO: Unable to read jessie_udp@PodARecord from pod dns-3063/dns-test-abfa7d64-7c86-4214-8610-98ba292092ea: the server could not find the requested resource (get pods dns-test-abfa7d64-7c86-4214-8610-98ba292092ea)
Feb 12 14:08:24.814: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3063/dns-test-abfa7d64-7c86-4214-8610-98ba292092ea: the server could not find the requested resource (get pods dns-test-abfa7d64-7c86-4214-8610-98ba292092ea)
Feb 12 14:08:24.814: INFO: Lookups using dns-3063/dns-test-abfa7d64-7c86-4214-8610-98ba292092ea failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-3063.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 12 14:08:29.904: INFO: DNS probes using dns-3063/dns-test-abfa7d64-7c86-4214-8610-98ba292092ea succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:08:30.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3063" for this suite.
Feb 12 14:08:36.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:08:36.231: INFO: namespace dns-3063 deletion completed in 6.186708519s

• [SLOW TEST:25.653 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
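The `dig` probes above query the pod's A record, whose name the probe script derives with `awk` by rewriting the pod IP's dots to dashes and appending `<namespace>.pod.cluster.local`; the early "Unable to read ... PodARecord" lines are expected, since the probe loop retries once per second (`seq 1 600`) until the record resolves. A sketch of that name derivation; `podARecord` is a reconstruction for illustration, not code from the test suite:

```go
package main

import (
	"fmt"
	"strings"
)

// podARecord rebuilds the cluster-DNS pod A-record name that the probes
// query, e.g. pod IP 10.44.0.2 in namespace dns-3063 becomes
// 10-44-0-2.dns-3063.pod.cluster.local.
func podARecord(podIP, namespace string) string {
	return strings.ReplaceAll(podIP, ".", "-") + "." + namespace + ".pod.cluster.local"
}

func main() {
	fmt.Println(podARecord("10.44.0.2", "dns-3063"))
}
```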
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:08:36.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 12 14:08:45.076: INFO: Successfully updated pod "pod-update-236f9a43-94df-4b70-8131-cf45e73ab8c7"
STEP: verifying the updated pod is in kubernetes
Feb 12 14:08:45.089: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:08:45.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2115" for this suite.
Feb 12 14:09:07.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:09:07.224: INFO: namespace pods-2115 deletion completed in 22.128779672s

• [SLOW TEST:30.993 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:09:07.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 12 14:09:07.357: INFO: Waiting up to 5m0s for pod "pod-106587a8-9a85-485f-a7e3-3656e7bf60ba" in namespace "emptydir-1479" to be "success or failure"
Feb 12 14:09:07.419: INFO: Pod "pod-106587a8-9a85-485f-a7e3-3656e7bf60ba": Phase="Pending", Reason="", readiness=false. Elapsed: 62.117252ms
Feb 12 14:09:09.426: INFO: Pod "pod-106587a8-9a85-485f-a7e3-3656e7bf60ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068904219s
Feb 12 14:09:11.432: INFO: Pod "pod-106587a8-9a85-485f-a7e3-3656e7bf60ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074945814s
Feb 12 14:09:13.442: INFO: Pod "pod-106587a8-9a85-485f-a7e3-3656e7bf60ba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084761752s
Feb 12 14:09:15.448: INFO: Pod "pod-106587a8-9a85-485f-a7e3-3656e7bf60ba": Phase="Pending", Reason="", readiness=false. Elapsed: 8.090386294s
Feb 12 14:09:17.456: INFO: Pod "pod-106587a8-9a85-485f-a7e3-3656e7bf60ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.098305279s
STEP: Saw pod success
Feb 12 14:09:17.456: INFO: Pod "pod-106587a8-9a85-485f-a7e3-3656e7bf60ba" satisfied condition "success or failure"
Feb 12 14:09:17.459: INFO: Trying to get logs from node iruya-node pod pod-106587a8-9a85-485f-a7e3-3656e7bf60ba container test-container: 
STEP: delete the pod
Feb 12 14:09:17.512: INFO: Waiting for pod pod-106587a8-9a85-485f-a7e3-3656e7bf60ba to disappear
Feb 12 14:09:17.521: INFO: Pod pod-106587a8-9a85-485f-a7e3-3656e7bf60ba no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:09:17.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1479" for this suite.
Feb 12 14:09:23.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:09:23.684: INFO: namespace emptydir-1479 deletion completed in 6.15457404s

• [SLOW TEST:16.460 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:09:23.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 12 14:09:23.802: INFO: Waiting up to 5m0s for pod "downwardapi-volume-66a4c0d2-a770-440e-b65d-ee6b8b068cda" in namespace "downward-api-4425" to be "success or failure"
Feb 12 14:09:23.852: INFO: Pod "downwardapi-volume-66a4c0d2-a770-440e-b65d-ee6b8b068cda": Phase="Pending", Reason="", readiness=false. Elapsed: 49.78984ms
Feb 12 14:09:25.863: INFO: Pod "downwardapi-volume-66a4c0d2-a770-440e-b65d-ee6b8b068cda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061041357s
Feb 12 14:09:27.873: INFO: Pod "downwardapi-volume-66a4c0d2-a770-440e-b65d-ee6b8b068cda": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071177887s
Feb 12 14:09:29.880: INFO: Pod "downwardapi-volume-66a4c0d2-a770-440e-b65d-ee6b8b068cda": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078162959s
Feb 12 14:09:31.891: INFO: Pod "downwardapi-volume-66a4c0d2-a770-440e-b65d-ee6b8b068cda": Phase="Pending", Reason="", readiness=false. Elapsed: 8.088734732s
Feb 12 14:09:33.905: INFO: Pod "downwardapi-volume-66a4c0d2-a770-440e-b65d-ee6b8b068cda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.103294383s
STEP: Saw pod success
Feb 12 14:09:33.905: INFO: Pod "downwardapi-volume-66a4c0d2-a770-440e-b65d-ee6b8b068cda" satisfied condition "success or failure"
Feb 12 14:09:33.912: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-66a4c0d2-a770-440e-b65d-ee6b8b068cda container client-container: 
STEP: delete the pod
Feb 12 14:09:34.127: INFO: Waiting for pod downwardapi-volume-66a4c0d2-a770-440e-b65d-ee6b8b068cda to disappear
Feb 12 14:09:34.133: INFO: Pod downwardapi-volume-66a4c0d2-a770-440e-b65d-ee6b8b068cda no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:09:34.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4425" for this suite.
Feb 12 14:09:40.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:09:40.285: INFO: namespace downward-api-4425 deletion completed in 6.146289617s

• [SLOW TEST:16.599 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
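The spec above verifies that a Downward API volume can expose a container's CPU limit as a file. A minimal sketch of the kind of pod the framework builds is below; the pod name, mount path, file name, and limit value are illustrative assumptions, not values taken from the framework — only the `resourceFieldRef` shape is the documented Downward API mechanism.

```python
import json

def downward_api_cpu_limit_pod(name="downwardapi-volume-demo"):
    """Build a pod manifest whose volume file 'cpu_limit' holds the
    container's CPU limit, resolved by the kubelet at pod start."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": "client-container",
                "image": "busybox",
                # Print the projected file once, then exit: the pod ends
                # Succeeded, matching the test's "success or failure" wait.
                "command": ["sh", "-c", "cat /etc/podinfo/cpu_limit"],
                "resources": {"limits": {"cpu": "500m"}},  # assumed value
                "volumeMounts": [{"name": "podinfo",
                                  "mountPath": "/etc/podinfo"}],
            }],
            "volumes": [{
                "name": "podinfo",
                "downwardAPI": {
                    "items": [{
                        "path": "cpu_limit",
                        "resourceFieldRef": {
                            "containerName": "client-container",
                            "resource": "limits.cpu",
                        },
                    }],
                },
            }],
            "restartPolicy": "Never",
        },
    }

if __name__ == "__main__":
    print(json.dumps(downward_api_cpu_limit_pod(), indent=2))
```

With a 500m limit, the projected file would contain `1` (the value is rounded up to whole cores for CPU limits), which is what the test reads back from the container logs.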
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:09:40.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 12 14:10:04.521: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 12 14:10:04.535: INFO: Pod pod-with-poststart-http-hook still exists
Feb 12 14:10:06.536: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 12 14:10:06.550: INFO: Pod pod-with-poststart-http-hook still exists
Feb 12 14:10:08.536: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 12 14:10:08.548: INFO: Pod pod-with-poststart-http-hook still exists
Feb 12 14:10:10.536: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 12 14:10:10.565: INFO: Pod pod-with-poststart-http-hook still exists
Feb 12 14:10:12.536: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 12 14:10:12.560: INFO: Pod pod-with-poststart-http-hook still exists
Feb 12 14:10:14.536: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 12 14:10:14.551: INFO: Pod pod-with-poststart-http-hook still exists
Feb 12 14:10:16.536: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 12 14:10:16.564: INFO: Pod pod-with-poststart-http-hook still exists
Feb 12 14:10:18.536: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 12 14:10:19.030: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:10:19.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-661" for this suite.
Feb 12 14:10:41.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:10:41.223: INFO: namespace container-lifecycle-hook-661 deletion completed in 22.169929042s

• [SLOW TEST:60.937 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
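The lifecycle-hook spec above first starts a handler pod ("the container to handle the HTTPGet hook request"), then creates `pod-with-poststart-http-hook`, whose kubelet-issued postStart GET must reach that handler before the pod is considered started. A sketch of the hooked pod, assuming an nginx image and an illustrative handler address, path, and port:

```python
def poststart_http_hook_pod(handler_ip, name="pod-with-poststart-http-hook"):
    """Build a pod manifest with a postStart httpGet lifecycle hook."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": name,
                "image": "nginx",
                "lifecycle": {
                    "postStart": {
                        # The kubelet sends this GET immediately after the
                        # container starts; container state only advances
                        # once the hook handler responds successfully.
                        "httpGet": {
                            "host": handler_ip,       # handler pod IP (assumed)
                            "path": "/echo",          # illustrative path
                            "port": 8080,             # illustrative port
                        },
                    },
                },
            }],
        },
    }
```

The "check poststart hook" step then amounts to asking the handler pod whether it received the request, before the hooked pod is deleted and polled until it disappears.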
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:10:41.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-961
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-961
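"Burst scaling" here means a StatefulSet with `podManagementPolicy: Parallel`, which scales up and down without waiting on ordinal predecessors — the property the spec verifies by deliberately making pods unready mid-scale. A minimal sketch of such a StatefulSet, with the replica count, labels, and image assumed rather than read from the framework:

```python
def burst_statefulset(name="ss", service="test", replicas=3):
    """Build a StatefulSet manifest that scales in parallel (burst mode)."""
    return {
        "apiVersion": "apps/v1",
        "kind": "StatefulSet",
        "metadata": {"name": name},
        "spec": {
            "serviceName": service,
            # Parallel lets the controller create/delete all pods at once,
            # so scaling proceeds even while some pods are unready --
            # the behavior this conformance spec asserts.
            "podManagementPolicy": "Parallel",
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": "nginx",
                                         "image": "nginx"}]},
            },
        },
    }
```

The default policy, `OrderedReady`, would instead block scale-up at the first unready pod, which is exactly what the "will not halt with unhealthy stateful pod" steps below rule out.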
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-961
Feb 12 14:10:41.369: INFO: Found 0 stateful pods, waiting for 1
Feb 12 14:10:51.385: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Feb 12 14:11:03.097: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb 12 14:11:03.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 12 14:11:05.708: INFO: stderr: "I0212 14:11:05.300493    2096 log.go:172] (0xc00013a6e0) (0xc00002e6e0) Create stream\nI0212 14:11:05.300687    2096 log.go:172] (0xc00013a6e0) (0xc00002e6e0) Stream added, broadcasting: 1\nI0212 14:11:05.312261    2096 log.go:172] (0xc00013a6e0) Reply frame received for 1\nI0212 14:11:05.312468    2096 log.go:172] (0xc00013a6e0) (0xc00071c000) Create stream\nI0212 14:11:05.312519    2096 log.go:172] (0xc00013a6e0) (0xc00071c000) Stream added, broadcasting: 3\nI0212 14:11:05.315493    2096 log.go:172] (0xc00013a6e0) Reply frame received for 3\nI0212 14:11:05.315551    2096 log.go:172] (0xc00013a6e0) (0xc000612280) Create stream\nI0212 14:11:05.315571    2096 log.go:172] (0xc00013a6e0) (0xc000612280) Stream added, broadcasting: 5\nI0212 14:11:05.317707    2096 log.go:172] (0xc00013a6e0) Reply frame received for 5\nI0212 14:11:05.521360    2096 log.go:172] (0xc00013a6e0) Data frame received for 5\nI0212 14:11:05.521845    2096 log.go:172] (0xc000612280) (5) Data frame handling\nI0212 14:11:05.521910    2096 log.go:172] (0xc000612280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0212 14:11:05.568728    2096 log.go:172] (0xc00013a6e0) Data frame received for 3\nI0212 14:11:05.568889    2096 log.go:172] (0xc00071c000) (3) Data frame handling\nI0212 14:11:05.568917    2096 log.go:172] (0xc00071c000) (3) Data frame sent\nI0212 14:11:05.694138    2096 log.go:172] (0xc00013a6e0) Data frame received for 1\nI0212 14:11:05.694293    2096 log.go:172] (0xc00013a6e0) (0xc00071c000) Stream removed, broadcasting: 3\nI0212 14:11:05.694383    2096 log.go:172] (0xc00002e6e0) (1) Data frame handling\nI0212 14:11:05.694425    2096 log.go:172] (0xc00002e6e0) (1) Data frame sent\nI0212 14:11:05.694803    2096 log.go:172] (0xc00013a6e0) (0xc000612280) Stream removed, broadcasting: 5\nI0212 14:11:05.694922    2096 log.go:172] (0xc00013a6e0) (0xc00002e6e0) Stream removed, broadcasting: 1\nI0212 14:11:05.694970    2096 log.go:172] 
(0xc00013a6e0) Go away received\nI0212 14:11:05.696655    2096 log.go:172] (0xc00013a6e0) (0xc00002e6e0) Stream removed, broadcasting: 1\nI0212 14:11:05.696679    2096 log.go:172] (0xc00013a6e0) (0xc00071c000) Stream removed, broadcasting: 3\nI0212 14:11:05.696696    2096 log.go:172] (0xc00013a6e0) (0xc000612280) Stream removed, broadcasting: 5\n"
Feb 12 14:11:05.708: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 12 14:11:05.708: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 12 14:11:05.717: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 12 14:11:15.726: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 12 14:11:15.726: INFO: Waiting for statefulset status.replicas updated to 0
Feb 12 14:11:15.756: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 12 14:11:15.756: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC  }]
Feb 12 14:11:15.756: INFO: 
Feb 12 14:11:15.756: INFO: StatefulSet ss has not reached scale 3, at 1
Feb 12 14:11:17.319: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.984265376s
Feb 12 14:11:18.339: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.421111993s
Feb 12 14:11:19.448: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.401023816s
Feb 12 14:11:20.458: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.29166103s
Feb 12 14:11:23.389: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.28183694s
Feb 12 14:11:24.995: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.351032021s
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-961
Feb 12 14:11:26.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:11:26.815: INFO: stderr: "I0212 14:11:26.231632    2122 log.go:172] (0xc0007920b0) (0xc000696640) Create stream\nI0212 14:11:26.232059    2122 log.go:172] (0xc0007920b0) (0xc000696640) Stream added, broadcasting: 1\nI0212 14:11:26.242474    2122 log.go:172] (0xc0007920b0) Reply frame received for 1\nI0212 14:11:26.242607    2122 log.go:172] (0xc0007920b0) (0xc0006001e0) Create stream\nI0212 14:11:26.242636    2122 log.go:172] (0xc0007920b0) (0xc0006001e0) Stream added, broadcasting: 3\nI0212 14:11:26.244867    2122 log.go:172] (0xc0007920b0) Reply frame received for 3\nI0212 14:11:26.244905    2122 log.go:172] (0xc0007920b0) (0xc00071c000) Create stream\nI0212 14:11:26.244917    2122 log.go:172] (0xc0007920b0) (0xc00071c000) Stream added, broadcasting: 5\nI0212 14:11:26.247102    2122 log.go:172] (0xc0007920b0) Reply frame received for 5\nI0212 14:11:26.609842    2122 log.go:172] (0xc0007920b0) Data frame received for 5\nI0212 14:11:26.610217    2122 log.go:172] (0xc00071c000) (5) Data frame handling\nI0212 14:11:26.610271    2122 log.go:172] (0xc00071c000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0212 14:11:26.610322    2122 log.go:172] (0xc0007920b0) Data frame received for 3\nI0212 14:11:26.610334    2122 log.go:172] (0xc0006001e0) (3) Data frame handling\nI0212 14:11:26.610353    2122 log.go:172] (0xc0006001e0) (3) Data frame sent\nI0212 14:11:26.806058    2122 log.go:172] (0xc0007920b0) (0xc0006001e0) Stream removed, broadcasting: 3\nI0212 14:11:26.806218    2122 log.go:172] (0xc0007920b0) Data frame received for 1\nI0212 14:11:26.806233    2122 log.go:172] (0xc0007920b0) (0xc00071c000) Stream removed, broadcasting: 5\nI0212 14:11:26.806360    2122 log.go:172] (0xc000696640) (1) Data frame handling\nI0212 14:11:26.806396    2122 log.go:172] (0xc000696640) (1) Data frame sent\nI0212 14:11:26.806409    2122 log.go:172] (0xc0007920b0) (0xc000696640) Stream removed, broadcasting: 1\nI0212 14:11:26.806432    2122 log.go:172] 
(0xc0007920b0) Go away received\nI0212 14:11:26.807401    2122 log.go:172] (0xc0007920b0) (0xc000696640) Stream removed, broadcasting: 1\nI0212 14:11:26.807424    2122 log.go:172] (0xc0007920b0) (0xc0006001e0) Stream removed, broadcasting: 3\nI0212 14:11:26.807433    2122 log.go:172] (0xc0007920b0) (0xc00071c000) Stream removed, broadcasting: 5\n"
Feb 12 14:11:26.815: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 12 14:11:26.815: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 12 14:11:26.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:11:27.111: INFO: stderr: "I0212 14:11:26.959479    2137 log.go:172] (0xc0007e8420) (0xc0002f8820) Create stream\nI0212 14:11:26.959672    2137 log.go:172] (0xc0007e8420) (0xc0002f8820) Stream added, broadcasting: 1\nI0212 14:11:26.965601    2137 log.go:172] (0xc0007e8420) Reply frame received for 1\nI0212 14:11:26.965659    2137 log.go:172] (0xc0007e8420) (0xc0007e2000) Create stream\nI0212 14:11:26.965666    2137 log.go:172] (0xc0007e8420) (0xc0007e2000) Stream added, broadcasting: 3\nI0212 14:11:26.966612    2137 log.go:172] (0xc0007e8420) Reply frame received for 3\nI0212 14:11:26.966638    2137 log.go:172] (0xc0007e8420) (0xc0007be000) Create stream\nI0212 14:11:26.966653    2137 log.go:172] (0xc0007e8420) (0xc0007be000) Stream added, broadcasting: 5\nI0212 14:11:26.967503    2137 log.go:172] (0xc0007e8420) Reply frame received for 5\nI0212 14:11:27.027386    2137 log.go:172] (0xc0007e8420) Data frame received for 5\nI0212 14:11:27.027517    2137 log.go:172] (0xc0007be000) (5) Data frame handling\nI0212 14:11:27.027558    2137 log.go:172] (0xc0007be000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0212 14:11:27.027877    2137 log.go:172] (0xc0007e8420) Data frame received for 5\nI0212 14:11:27.027937    2137 log.go:172] (0xc0007be000) (5) Data frame handling\nI0212 14:11:27.027964    2137 log.go:172] (0xc0007be000) (5) Data frame sent\nI0212 14:11:27.027996    2137 log.go:172] (0xc0007e8420) Data frame received for 5\nI0212 14:11:27.028027    2137 log.go:172] (0xc0007be000) (5) Data frame handling\nI0212 14:11:27.028050    2137 log.go:172] (0xc0007e8420) Data frame received for 3\nI0212 14:11:27.028068    2137 log.go:172] (0xc0007e2000) (3) Data frame handling\nI0212 14:11:27.028086    2137 log.go:172] (0xc0007e2000) (3) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0212 14:11:27.028238    2137 log.go:172] (0xc0007be000) (5) Data frame sent\nI0212 14:11:27.102337    2137 log.go:172] 
(0xc0007e8420) Data frame received for 1\nI0212 14:11:27.102476    2137 log.go:172] (0xc0007e8420) (0xc0007e2000) Stream removed, broadcasting: 3\nI0212 14:11:27.102612    2137 log.go:172] (0xc0002f8820) (1) Data frame handling\nI0212 14:11:27.102632    2137 log.go:172] (0xc0002f8820) (1) Data frame sent\nI0212 14:11:27.102691    2137 log.go:172] (0xc0007e8420) (0xc0007be000) Stream removed, broadcasting: 5\nI0212 14:11:27.102711    2137 log.go:172] (0xc0007e8420) (0xc0002f8820) Stream removed, broadcasting: 1\nI0212 14:11:27.103098    2137 log.go:172] (0xc0007e8420) (0xc0002f8820) Stream removed, broadcasting: 1\nI0212 14:11:27.103161    2137 log.go:172] (0xc0007e8420) (0xc0007e2000) Stream removed, broadcasting: 3\nI0212 14:11:27.103174    2137 log.go:172] (0xc0007e8420) (0xc0007be000) Stream removed, broadcasting: 5\n"
Feb 12 14:11:27.112: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 12 14:11:27.112: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 12 14:11:27.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:11:27.737: INFO: stderr: "I0212 14:11:27.469816    2156 log.go:172] (0xc000a46370) (0xc000a0e640) Create stream\nI0212 14:11:27.470166    2156 log.go:172] (0xc000a46370) (0xc000a0e640) Stream added, broadcasting: 1\nI0212 14:11:27.481307    2156 log.go:172] (0xc000a46370) Reply frame received for 1\nI0212 14:11:27.481456    2156 log.go:172] (0xc000a46370) (0xc000924000) Create stream\nI0212 14:11:27.481476    2156 log.go:172] (0xc000a46370) (0xc000924000) Stream added, broadcasting: 3\nI0212 14:11:27.483802    2156 log.go:172] (0xc000a46370) Reply frame received for 3\nI0212 14:11:27.483859    2156 log.go:172] (0xc000a46370) (0xc0005c4320) Create stream\nI0212 14:11:27.483882    2156 log.go:172] (0xc000a46370) (0xc0005c4320) Stream added, broadcasting: 5\nI0212 14:11:27.485025    2156 log.go:172] (0xc000a46370) Reply frame received for 5\nI0212 14:11:27.606456    2156 log.go:172] (0xc000a46370) Data frame received for 5\nI0212 14:11:27.606657    2156 log.go:172] (0xc0005c4320) (5) Data frame handling\nI0212 14:11:27.606687    2156 log.go:172] (0xc0005c4320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0212 14:11:27.606762    2156 log.go:172] (0xc000a46370) Data frame received for 3\nI0212 14:11:27.606803    2156 log.go:172] (0xc000924000) (3) Data frame handling\nI0212 14:11:27.606821    2156 log.go:172] (0xc000924000) (3) Data frame sent\nI0212 14:11:27.724565    2156 log.go:172] (0xc000a46370) (0xc000924000) Stream removed, broadcasting: 3\nI0212 14:11:27.724959    2156 log.go:172] (0xc000a46370) Data frame received for 1\nI0212 14:11:27.725061    2156 log.go:172] (0xc000a46370) (0xc0005c4320) Stream removed, broadcasting: 5\nI0212 14:11:27.725110    2156 log.go:172] (0xc000a0e640) (1) Data frame handling\nI0212 14:11:27.725125    2156 log.go:172] (0xc000a0e640) (1) Data frame sent\nI0212 14:11:27.725133    2156 log.go:172] (0xc000a46370) (0xc000a0e640) 
Stream removed, broadcasting: 1\nI0212 14:11:27.725152    2156 log.go:172] (0xc000a46370) Go away received\nI0212 14:11:27.726438    2156 log.go:172] (0xc000a46370) (0xc000a0e640) Stream removed, broadcasting: 1\nI0212 14:11:27.726461    2156 log.go:172] (0xc000a46370) (0xc000924000) Stream removed, broadcasting: 3\nI0212 14:11:27.726469    2156 log.go:172] (0xc000a46370) (0xc0005c4320) Stream removed, broadcasting: 5\n"
Feb 12 14:11:27.737: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 12 14:11:27.737: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
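The repeated `mv` commands in this spec toggle pod readiness: the pods serve `index.html` behind an HTTP readiness probe (assumed here from the nginx setup), so moving the file to `/tmp` makes the probe fail, and moving it back restores readiness. The exec invocations logged above can be sketched as a small argv builder; the `|| true` keeps the exec's exit status zero even when the file has already been moved, as seen in the "can't rename" stderr for ss-1 and ss-2:

```python
def toggle_ready(pod, make_ready, namespace="statefulset-961"):
    """Return the kubectl argv that restores (make_ready=True) or breaks
    (make_ready=False) the pod's readiness probe, per the logged commands."""
    if make_ready:
        src, dst = "/tmp/index.html", "/usr/share/nginx/html/"
    else:
        src, dst = "/usr/share/nginx/html/index.html", "/tmp/"
    return ["kubectl", "exec", "--namespace=" + namespace, pod, "--",
            "/bin/sh", "-x", "-c", f"mv -v {src} {dst} || true"]
```

Running the `make_ready=False` variant against all three pods is what lets the spec then confirm that scale-down proceeds despite every replica being unready.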

Feb 12 14:11:27.744: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 14:11:27.744: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 14:11:27.744: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Feb 12 14:11:27.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 12 14:11:28.127: INFO: stderr: "I0212 14:11:27.879916    2178 log.go:172] (0xc0006bca50) (0xc0003ca6e0) Create stream\nI0212 14:11:27.880371    2178 log.go:172] (0xc0006bca50) (0xc0003ca6e0) Stream added, broadcasting: 1\nI0212 14:11:27.892678    2178 log.go:172] (0xc0006bca50) Reply frame received for 1\nI0212 14:11:27.892753    2178 log.go:172] (0xc0006bca50) (0xc000688500) Create stream\nI0212 14:11:27.892766    2178 log.go:172] (0xc0006bca50) (0xc000688500) Stream added, broadcasting: 3\nI0212 14:11:27.895696    2178 log.go:172] (0xc0006bca50) Reply frame received for 3\nI0212 14:11:27.895721    2178 log.go:172] (0xc0006bca50) (0xc0003ca780) Create stream\nI0212 14:11:27.895727    2178 log.go:172] (0xc0006bca50) (0xc0003ca780) Stream added, broadcasting: 5\nI0212 14:11:27.897144    2178 log.go:172] (0xc0006bca50) Reply frame received for 5\nI0212 14:11:27.998581    2178 log.go:172] (0xc0006bca50) Data frame received for 3\nI0212 14:11:27.998847    2178 log.go:172] (0xc000688500) (3) Data frame handling\nI0212 14:11:27.998871    2178 log.go:172] (0xc000688500) (3) Data frame sent\nI0212 14:11:27.998935    2178 log.go:172] (0xc0006bca50) Data frame received for 5\nI0212 14:11:27.998944    2178 log.go:172] (0xc0003ca780) (5) Data frame handling\nI0212 14:11:27.998957    2178 log.go:172] (0xc0003ca780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0212 14:11:28.115343    2178 log.go:172] (0xc0006bca50) Data frame received for 1\nI0212 14:11:28.115543    2178 log.go:172] (0xc0006bca50) (0xc0003ca780) Stream removed, broadcasting: 5\nI0212 14:11:28.115640    2178 log.go:172] (0xc0003ca6e0) (1) Data frame handling\nI0212 14:11:28.115670    2178 log.go:172] (0xc0003ca6e0) (1) Data frame sent\nI0212 14:11:28.115728    2178 log.go:172] (0xc0006bca50) (0xc000688500) Stream removed, broadcasting: 3\nI0212 14:11:28.115766    2178 log.go:172] (0xc0006bca50) (0xc0003ca6e0) Stream removed, broadcasting: 1\nI0212 14:11:28.115813    2178 log.go:172] 
(0xc0006bca50) Go away received\nI0212 14:11:28.117769    2178 log.go:172] (0xc0006bca50) (0xc0003ca6e0) Stream removed, broadcasting: 1\nI0212 14:11:28.117807    2178 log.go:172] (0xc0006bca50) (0xc000688500) Stream removed, broadcasting: 3\nI0212 14:11:28.117819    2178 log.go:172] (0xc0006bca50) (0xc0003ca780) Stream removed, broadcasting: 5\n"
Feb 12 14:11:28.128: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 12 14:11:28.128: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 12 14:11:28.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 12 14:11:28.646: INFO: stderr: "I0212 14:11:28.306286    2197 log.go:172] (0xc0005b0580) (0xc000654a00) Create stream\nI0212 14:11:28.306800    2197 log.go:172] (0xc0005b0580) (0xc000654a00) Stream added, broadcasting: 1\nI0212 14:11:28.313764    2197 log.go:172] (0xc0005b0580) Reply frame received for 1\nI0212 14:11:28.313867    2197 log.go:172] (0xc0005b0580) (0xc000654aa0) Create stream\nI0212 14:11:28.313881    2197 log.go:172] (0xc0005b0580) (0xc000654aa0) Stream added, broadcasting: 3\nI0212 14:11:28.316120    2197 log.go:172] (0xc0005b0580) Reply frame received for 3\nI0212 14:11:28.316157    2197 log.go:172] (0xc0005b0580) (0xc0006a2000) Create stream\nI0212 14:11:28.316171    2197 log.go:172] (0xc0005b0580) (0xc0006a2000) Stream added, broadcasting: 5\nI0212 14:11:28.317344    2197 log.go:172] (0xc0005b0580) Reply frame received for 5\nI0212 14:11:28.406991    2197 log.go:172] (0xc0005b0580) Data frame received for 5\nI0212 14:11:28.407130    2197 log.go:172] (0xc0006a2000) (5) Data frame handling\nI0212 14:11:28.407172    2197 log.go:172] (0xc0006a2000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0212 14:11:28.434172    2197 log.go:172] (0xc0005b0580) Data frame received for 3\nI0212 14:11:28.434286    2197 log.go:172] (0xc000654aa0) (3) Data frame handling\nI0212 14:11:28.434312    2197 log.go:172] (0xc000654aa0) (3) Data frame sent\nI0212 14:11:28.615167    2197 log.go:172] (0xc0005b0580) Data frame received for 1\nI0212 14:11:28.615518    2197 log.go:172] (0xc0005b0580) (0xc000654aa0) Stream removed, broadcasting: 3\nI0212 14:11:28.615637    2197 log.go:172] (0xc000654a00) (1) Data frame handling\nI0212 14:11:28.615721    2197 log.go:172] (0xc000654a00) (1) Data frame sent\nI0212 14:11:28.615742    2197 log.go:172] (0xc0005b0580) (0xc0006a2000) Stream removed, broadcasting: 5\nI0212 14:11:28.615802    2197 log.go:172] (0xc0005b0580) (0xc000654a00) Stream removed, broadcasting: 1\nI0212 14:11:28.615842    2197 log.go:172] 
(0xc0005b0580) Go away received\nI0212 14:11:28.617494    2197 log.go:172] (0xc0005b0580) (0xc000654a00) Stream removed, broadcasting: 1\nI0212 14:11:28.617620    2197 log.go:172] (0xc0005b0580) (0xc000654aa0) Stream removed, broadcasting: 3\nI0212 14:11:28.617637    2197 log.go:172] (0xc0005b0580) (0xc0006a2000) Stream removed, broadcasting: 5\n"
Feb 12 14:11:28.646: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 12 14:11:28.646: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 12 14:11:28.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 12 14:11:29.185: INFO: stderr: "I0212 14:11:28.848864    2217 log.go:172] (0xc0009562c0) (0xc0008206e0) Create stream\nI0212 14:11:28.849456    2217 log.go:172] (0xc0009562c0) (0xc0008206e0) Stream added, broadcasting: 1\nI0212 14:11:28.861237    2217 log.go:172] (0xc0009562c0) Reply frame received for 1\nI0212 14:11:28.861352    2217 log.go:172] (0xc0009562c0) (0xc00063c320) Create stream\nI0212 14:11:28.861366    2217 log.go:172] (0xc0009562c0) (0xc00063c320) Stream added, broadcasting: 3\nI0212 14:11:28.864062    2217 log.go:172] (0xc0009562c0) Reply frame received for 3\nI0212 14:11:28.864186    2217 log.go:172] (0xc0009562c0) (0xc000820780) Create stream\nI0212 14:11:28.864267    2217 log.go:172] (0xc0009562c0) (0xc000820780) Stream added, broadcasting: 5\nI0212 14:11:28.865823    2217 log.go:172] (0xc0009562c0) Reply frame received for 5\nI0212 14:11:28.995863    2217 log.go:172] (0xc0009562c0) Data frame received for 5\nI0212 14:11:28.995956    2217 log.go:172] (0xc000820780) (5) Data frame handling\nI0212 14:11:28.995975    2217 log.go:172] (0xc000820780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0212 14:11:29.026452    2217 log.go:172] (0xc0009562c0) Data frame received for 3\nI0212 14:11:29.026478    2217 log.go:172] (0xc00063c320) (3) Data frame handling\nI0212 14:11:29.026497    2217 log.go:172] (0xc00063c320) (3) Data frame sent\nI0212 14:11:29.169733    2217 log.go:172] (0xc0009562c0) Data frame received for 1\nI0212 14:11:29.170360    2217 log.go:172] (0xc0009562c0) (0xc00063c320) Stream removed, broadcasting: 3\nI0212 14:11:29.170455    2217 log.go:172] (0xc0008206e0) (1) Data frame handling\nI0212 14:11:29.170604    2217 log.go:172] (0xc0008206e0) (1) Data frame sent\nI0212 14:11:29.170688    2217 log.go:172] (0xc0009562c0) (0xc000820780) Stream removed, broadcasting: 5\nI0212 14:11:29.170749    2217 log.go:172] (0xc0009562c0) (0xc0008206e0) Stream removed, broadcasting: 1\nI0212 14:11:29.170798    2217 log.go:172] 
(0xc0009562c0) Go away received\nI0212 14:11:29.172456    2217 log.go:172] (0xc0009562c0) (0xc0008206e0) Stream removed, broadcasting: 1\nI0212 14:11:29.172487    2217 log.go:172] (0xc0009562c0) (0xc00063c320) Stream removed, broadcasting: 3\nI0212 14:11:29.172501    2217 log.go:172] (0xc0009562c0) (0xc000820780) Stream removed, broadcasting: 5\n"
Feb 12 14:11:29.186: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 12 14:11:29.186: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 12 14:11:29.186: INFO: Waiting for statefulset status.replicas updated to 0
Feb 12 14:11:29.245: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 12 14:11:29.245: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 12 14:11:29.245: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 12 14:11:29.261: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 12 14:11:29.261: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC  }]
Feb 12 14:11:29.261: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC  }]
Feb 12 14:11:29.261: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC  }]
Feb 12 14:11:29.261: INFO: 
Feb 12 14:11:29.261: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 12 14:11:31.481: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 12 14:11:31.481: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC  }]
Feb 12 14:11:31.481: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC  }]
Feb 12 14:11:31.481: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC  }]
Feb 12 14:11:31.481: INFO: 
Feb 12 14:11:31.481: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 12 14:11:32.494: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 12 14:11:32.494: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC  }]
Feb 12 14:11:32.494: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC  }]
Feb 12 14:11:32.494: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC  }]
Feb 12 14:11:32.494: INFO: 
Feb 12 14:11:32.494: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 12 14:11:34.693: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 12 14:11:34.693: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC  }]
Feb 12 14:11:34.693: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC  }]
Feb 12 14:11:34.693: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC  }]
Feb 12 14:11:34.693: INFO: 
Feb 12 14:11:34.693: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 12 14:11:35.702: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 12 14:11:35.702: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC  }]
Feb 12 14:11:35.702: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC  }]
Feb 12 14:11:35.702: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC  }]
Feb 12 14:11:35.702: INFO: 
Feb 12 14:11:35.702: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 12 14:11:36.716: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 12 14:11:36.716: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC  }]
Feb 12 14:11:36.716: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC  }]
Feb 12 14:11:36.716: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC  }]
Feb 12 14:11:36.716: INFO: 
Feb 12 14:11:36.716: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 12 14:11:37.725: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 12 14:11:37.725: INFO: ss-0  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC  }]
Feb 12 14:11:37.725: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC  }]
Feb 12 14:11:37.726: INFO: ss-2  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC  }]
Feb 12 14:11:37.726: INFO: 
Feb 12 14:11:37.726: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 12 14:11:38.741: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 12 14:11:38.741: INFO: ss-0  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC  }]
Feb 12 14:11:38.741: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC  }]
Feb 12 14:11:38.741: INFO: ss-2  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC  }]
Feb 12 14:11:38.741: INFO: 
Feb 12 14:11:38.741: INFO: StatefulSet ss has not reached scale 0, at 3
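The repeated "StatefulSet ss has not reached scale 0, at 3" lines above are a poll-until-scale loop: the framework re-reads the replica count on an interval until it hits the target or a timeout expires. A minimal sketch of that pattern; the helper name, signature, and the simulated `fake_replicas` check are illustrative assumptions, not the e2e framework's actual API.

```python
import time

def wait_for_scale(get_replicas, target=0, interval=1.0, timeout=600.0):
    """Poll until get_replicas() reaches target, or raise TimeoutError.

    Sketch of the wait loop behind the log lines above; names and
    signature are assumptions, not the framework's real API.
    """
    deadline = time.monotonic() + timeout
    while True:
        current = get_replicas()
        if current == target:
            return current
        if time.monotonic() >= deadline:
            raise TimeoutError(
                f"StatefulSet has not reached scale {target}, at {current}")
        # Matches the shape of the INFO lines in the log.
        print(f"StatefulSet ss has not reached scale {target}, at {current}")
        time.sleep(interval)

# Simulated controller: one pod observed gone per poll (hypothetical stand-in).
remaining = [3]
def fake_replicas():
    n = remaining[0]
    if n > 0:
        remaining[0] = n - 1
    return n

result = wait_for_scale(fake_replicas, target=0, interval=0.01)
```

In the real suite the interval is roughly the one-second cadence visible in the timestamps above, and the check reads the StatefulSet's status via the API server rather than a local counter.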
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of its pods are running in namespace statefulset-961
Feb 12 14:11:39.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:11:39.999: INFO: rc: 1
Feb 12 14:11:40.000: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc0027e3a70 exit status 1   true [0xc002b682a8 0xc002b682c0 0xc002b682d8] [0xc002b682a8 0xc002b682c0 0xc002b682d8] [0xc002b682b8 0xc002b682d0] [0xba6c50 0xba6c50] 0xc001a58420 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
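The long run of "Waiting 10s to retry failed RunHostCmd" entries that follows is a fixed-delay retry loop: the same `kubectl exec` command is re-run every 10 seconds while it keeps returning `rc: 1` (first because the `nginx` container is gone, then because the `ss-0` pod itself no longer exists). A minimal sketch of that retry shape, assuming a `run()` callable that returns `(rc, stdout, stderr)`; the function name and return shape are assumptions, not the framework's actual `RunHostCmd` signature.

```python
import time

def run_host_cmd_with_retry(run, retries=30, delay=10.0):
    """Re-run a host command with a fixed delay until rc == 0.

    Mirrors the "Waiting 10s to retry failed RunHostCmd" log entries;
    names and the (rc, stdout, stderr) shape are illustrative.
    """
    last = None
    for _ in range(retries):
        rc, out, err = run()
        last = (rc, out, err)
        if rc == 0:
            return last
        print(f"rc: {rc}; waiting {delay}s to retry failed RunHostCmd")
        time.sleep(delay)
    raise RuntimeError(f"command still failing after {retries} attempts: {last}")

# Hypothetical command that fails twice before succeeding.
attempts = {"n": 0}
def fake_run():
    attempts["n"] += 1
    if attempts["n"] < 3:
        return 1, "", 'error: unable to upgrade connection: container not found ("nginx")'
    return 0, "renamed '/tmp/index.html' -> '/usr/share/nginx/html/index.html'", ""

rc, out, err = run_host_cmd_with_retry(fake_run, delay=0.01)
```

Note that in the log below the retries never succeed: once `ss-0` is deleted by the scale-down, every attempt fails with `Error from server (NotFound)` until the retry budget is exhausted.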
Feb 12 14:11:50.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:11:50.186: INFO: rc: 1
Feb 12 14:11:50.186: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0010ba090 exit status 1   true [0xc00035cef8 0xc00035d040 0xc00035d2b0] [0xc00035cef8 0xc00035d040 0xc00035d2b0] [0xc00035cf30 0xc00035d1b0] [0xba6c50 0xba6c50] 0xc002cae8a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 12 14:12:00.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:12:00.385: INFO: rc: 1
Feb 12 14:12:00.386: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0010ba150 exit status 1   true [0xc00035d2f8 0xc00035d8d8 0xc00035d930] [0xc00035d2f8 0xc00035d8d8 0xc00035d930] [0xc00035d458 0xc00035d918] [0xba6c50 0xba6c50] 0xc002caf920 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 12 14:12:10.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:12:10.674: INFO: rc: 1
Feb 12 14:12:10.675: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0007275c0 exit status 1   true [0xc00084c210 0xc00084c490 0xc00084c738] [0xc00084c210 0xc00084c490 0xc00084c738] [0xc00084c438 0xc00084c580] [0xba6c50 0xba6c50] 0xc002b72720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 12 14:12:20.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:12:20.785: INFO: rc: 1
Feb 12 14:12:20.785: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0007276b0 exit status 1   true [0xc00084c7e0 0xc00084c9f0 0xc00084cb88] [0xc00084c7e0 0xc00084c9f0 0xc00084cb88] [0xc00084c988 0xc00084cb38] [0xba6c50 0xba6c50] 0xc002b72a20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 12 14:12:30.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:12:30.959: INFO: rc: 1
Feb 12 14:12:30.959: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000da4120 exit status 1   true [0xc000186000 0xc002744028 0xc002744058] [0xc000186000 0xc002744028 0xc002744058] [0xc002744010 0xc002744048] [0xba6c50 0xba6c50] 0xc002d0a240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 12 14:12:40.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:12:41.092: INFO: rc: 1
Feb 12 14:12:41.092: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000da4210 exit status 1   true [0xc002744068 0xc0027440a8 0xc002744108] [0xc002744068 0xc0027440a8 0xc002744108] [0xc002744090 0xc0027440e0] [0xba6c50 0xba6c50] 0xc002d0a540 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 12 14:12:51.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:12:51.246: INFO: rc: 1
Feb 12 14:12:51.246: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0010ba240 exit status 1   true [0xc00035d970 0xc00035d9f0 0xc00035da90] [0xc00035d970 0xc00035d9f0 0xc00035da90] [0xc00035d9d0 0xc00035da80] [0xba6c50 0xba6c50] 0xc002cafc20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 12 14:13:01.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:13:01.423: INFO: rc: 1
Feb 12 14:13:01.423: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0007277a0 exit status 1   true [0xc00084cbc8 0xc00084cd58 0xc00084cf58] [0xc00084cbc8 0xc00084cd58 0xc00084cf58] [0xc00084cc68 0xc00084cf30] [0xba6c50 0xba6c50] 0xc002b72d20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 12 14:13:11.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:13:11.623: INFO: rc: 1
Feb 12 14:13:11.623: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0010ba300 exit status 1   true [0xc00035daa8 0xc00035dbe8 0xc00035dcb8] [0xc00035daa8 0xc00035dbe8 0xc00035dcb8] [0xc00035db80 0xc00035dc98] [0xba6c50 0xba6c50] 0xc002caff80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 12 14:13:21.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:13:21.808: INFO: rc: 1
Feb 12 14:13:21.808: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002b160c0 exit status 1   true [0xc001cb6008 0xc001cb6068 0xc001cb60f0] [0xc001cb6008 0xc001cb6068 0xc001cb60f0] [0xc001cb6048 0xc001cb6088] [0xba6c50 0xba6c50] 0xc002c0e300 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 12 14:13:31.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:13:31.998: INFO: rc: 1
Feb 12 14:13:31.999: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002b161b0 exit status 1   true [0xc001cb6188 0xc001cb6268 0xc001cb63a0] [0xc001cb6188 0xc001cb6268 0xc001cb63a0] [0xc001cb6258 0xc001cb6330] [0xba6c50 0xba6c50] 0xc002c0e780 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 12 14:13:41.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:13:42.211: INFO: rc: 1
Feb 12 14:13:42.211: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0010ba3f0 exit status 1   true [0xc00035dcf8 0xc00035ddc0 0xc00035dec8] [0xc00035dcf8 0xc00035ddc0 0xc00035dec8] [0xc00035dd58 0xc00035de60] [0xba6c50 0xba6c50] 0xc001d56300 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 12 14:13:52.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:13:52.465: INFO: rc: 1
Feb 12 14:13:52.466: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0007275f0 exit status 1   true [0xc00084c210 0xc00084c490 0xc00084c738] [0xc00084c210 0xc00084c490 0xc00084c738] [0xc00084c438 0xc00084c580] [0xba6c50 0xba6c50] 0xc002cae8a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 12 14:14:02.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:14:02.599: INFO: rc: 1
Feb 12 14:14:02.600: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000727710 exit status 1   true [0xc00084c7e0 0xc00084c9f0 0xc00084cb88] [0xc00084c7e0 0xc00084c9f0 0xc00084cb88] [0xc00084c988 0xc00084cb38] [0xba6c50 0xba6c50] 0xc002caf920 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 12 14:14:12.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:14:12.801: INFO: rc: 1
Feb 12 14:14:12.801: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000727800 exit status 1   true [0xc00084cbc8 0xc00084cd58 0xc00084cf58] [0xc00084cbc8 0xc00084cd58 0xc00084cf58] [0xc00084cc68 0xc00084cf30] [0xba6c50 0xba6c50] 0xc002cafc20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 12 14:14:22.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:14:22.995: INFO: rc: 1
Feb 12 14:14:22.995: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000da4090 exit status 1   true [0xc001cb6008 0xc001cb6068 0xc001cb60f0] [0xc001cb6008 0xc001cb6068 0xc001cb60f0] [0xc001cb6048 0xc001cb6088] [0xba6c50 0xba6c50] 0xc002b72720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 12 14:14:32.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:14:33.131: INFO: rc: 1
Feb 12 14:14:33.131: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0007278f0 exit status 1   true [0xc00084cf68 0xc00084d160 0xc00084d3e8] [0xc00084cf68 0xc00084d160 0xc00084d3e8] [0xc00084d100 0xc00084d348] [0xba6c50 0xba6c50] 0xc002caff80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 12 14:14:43.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:14:43.351: INFO: rc: 1
Feb 12 14:14:43.352: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0010ba0c0 exit status 1   true [0xc002744000 0xc002744038 0xc002744068] [0xc002744000 0xc002744038 0xc002744068] [0xc002744028 0xc002744058] [0xba6c50 0xba6c50] 0xc002d0a240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 12 14:14:53.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:14:53.492: INFO: rc: 1
Feb 12 14:14:53.493: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0010ba1e0 exit status 1   true [0xc002744080 0xc0027440b8 0xc002744110] [0xc002744080 0xc0027440b8 0xc002744110] [0xc0027440a8 0xc002744108] [0xba6c50 0xba6c50] 0xc002d0a540 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 12 14:15:03.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:15:03.702: INFO: rc: 1
Feb 12 14:15:03.703: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000727a40 exit status 1   true [0xc00084d440 0xc00084d620 0xc00084d680] [0xc00084d440 0xc00084d620 0xc00084d680] [0xc00084d5e0 0xc00084d670] [0xba6c50 0xba6c50] 0xc002c0e360 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 12 14:15:13.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:15:13.871: INFO: rc: 1
Feb 12 14:15:13.871: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000727b30 exit status 1   true [0xc00084d6e0 0xc00084d808 0xc00084da38] [0xc00084d6e0 0xc00084d808 0xc00084da38] [0xc00084d7b0 0xc00084d9c8] [0xba6c50 0xba6c50] 0xc002c0e7e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 12 14:15:23.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:15:24.003: INFO: rc: 1
Feb 12 14:15:24.003: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0010ba2d0 exit status 1   true [0xc002744118 0xc002744130 0xc002744148] [0xc002744118 0xc002744130 0xc002744148] [0xc002744128 0xc002744140] [0xba6c50 0xba6c50] 0xc002d0aae0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 12 14:15:34.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:15:34.154: INFO: rc: 1
Feb 12 14:15:34.155: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0010ba3c0 exit status 1   true [0xc002744150 0xc002744168 0xc002744180] [0xc002744150 0xc002744168 0xc002744180] [0xc002744160 0xc002744178] [0xba6c50 0xba6c50] 0xc002d0afc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 12 14:15:44.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:15:44.272: INFO: rc: 1
Feb 12 14:15:44.272: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000727c50 exit status 1   true [0xc00084dac8 0xc00084db90 0xc00084dc88] [0xc00084dac8 0xc00084db90 0xc00084dc88] [0xc00084db78 0xc00084dc08] [0xba6c50 0xba6c50] 0xc002c0ecc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 12 14:15:54.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:15:54.435: INFO: rc: 1
Feb 12 14:15:54.436: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0007275c0 exit status 1   true [0xc00084c210 0xc00084c490 0xc00084c738] [0xc00084c210 0xc00084c490 0xc00084c738] [0xc00084c438 0xc00084c580] [0xba6c50 0xba6c50] 0xc002c0e300 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 12 14:16:04.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:16:04.600: INFO: rc: 1
Feb 12 14:16:04.600: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0007276e0 exit status 1   true [0xc00084c7e0 0xc00084c9f0 0xc00084cb88] [0xc00084c7e0 0xc00084c9f0 0xc00084cb88] [0xc00084c988 0xc00084cb38] [0xba6c50 0xba6c50] 0xc002c0e780 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 12 14:16:14.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:16:14.783: INFO: rc: 1
Feb 12 14:16:14.784: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000da40f0 exit status 1   true [0xc002744000 0xc002744038 0xc002744068] [0xc002744000 0xc002744038 0xc002744068] [0xc002744028 0xc002744058] [0xba6c50 0xba6c50] 0xc002d0a240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 12 14:16:24.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:16:24.933: INFO: rc: 1
Feb 12 14:16:24.934: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0007277d0 exit status 1   true [0xc00084cbc8 0xc00084cd58 0xc00084cf58] [0xc00084cbc8 0xc00084cd58 0xc00084cf58] [0xc00084cc68 0xc00084cf30] [0xba6c50 0xba6c50] 0xc002c0ec60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 12 14:16:34.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:16:35.762: INFO: rc: 1
Feb 12 14:16:35.763: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002b16090 exit status 1   true [0xc001cb6008 0xc001cb6068 0xc001cb60f0] [0xc001cb6008 0xc001cb6068 0xc001cb60f0] [0xc001cb6048 0xc001cb6088] [0xba6c50 0xba6c50] 0xc002cae8a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 12 14:16:45.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:16:45.952: INFO: rc: 1
Feb 12 14:16:45.952: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
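The block above is the framework's RunHostCmd retry loop: the `kubectl exec` keeps failing with rc 1 because pod "ss-0" no longer exists, and the framework waits 10s between attempts until its deadline. A minimal sketch of that retry shape (names and the injectable `sleep` are my own, not the e2e framework's API):

```python
import time

def run_host_cmd_with_retry(run_cmd, interval=10, timeout=300, sleep=time.sleep):
    """Retry a command until it succeeds (rc == 0) or the deadline passes.

    run_cmd returns (rc, stdout, stderr). On persistent failure the last
    result is returned, mirroring how the log above keeps re-running
    'mv -v /tmp/index.html ...' every 10s while ss-0 is gone.
    """
    deadline = time.monotonic() + timeout
    while True:
        rc, out, err = run_cmd()
        if rc == 0 or time.monotonic() >= deadline:
            return rc, out, err
        sleep(interval)
```

Passing a no-op `sleep` makes the loop easy to exercise without real delays.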
Feb 12 14:16:45.952: INFO: Scaling statefulset ss to 0
Feb 12 14:16:45.966: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 12 14:16:45.969: INFO: Deleting all statefulset in ns statefulset-961
Feb 12 14:16:45.971: INFO: Scaling statefulset ss to 0
Feb 12 14:16:45.980: INFO: Waiting for statefulset status.replicas updated to 0
Feb 12 14:16:45.982: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:16:45.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-961" for this suite.
Feb 12 14:16:54.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:16:54.126: INFO: namespace statefulset-961 deletion completed in 8.121140445s

• [SLOW TEST:372.903 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
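The test above probes "ss-0" by name because StatefulSet pods get stable ordinal identities rather than random suffixes. A small sketch of that naming rule (the helper name is mine):

```python
def statefulset_pod_names(name, replicas):
    """StatefulSet pods are named <name>-0 .. <name>-(replicas-1),
    which is why the burst-scaling test targets 'ss-0' specifically."""
    return [f"{name}-{i}" for i in range(replicas)]
```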
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:16:54.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 12 14:17:04.348: INFO: Waiting up to 5m0s for pod "client-envvars-3c33c93a-675e-48ab-8fd5-0913b1b45cbc" in namespace "pods-5717" to be "success or failure"
Feb 12 14:17:04.357: INFO: Pod "client-envvars-3c33c93a-675e-48ab-8fd5-0913b1b45cbc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.289749ms
Feb 12 14:17:06.371: INFO: Pod "client-envvars-3c33c93a-675e-48ab-8fd5-0913b1b45cbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022996868s
Feb 12 14:17:08.380: INFO: Pod "client-envvars-3c33c93a-675e-48ab-8fd5-0913b1b45cbc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031776028s
Feb 12 14:17:10.388: INFO: Pod "client-envvars-3c33c93a-675e-48ab-8fd5-0913b1b45cbc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039508543s
Feb 12 14:17:12.407: INFO: Pod "client-envvars-3c33c93a-675e-48ab-8fd5-0913b1b45cbc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058292921s
Feb 12 14:17:14.413: INFO: Pod "client-envvars-3c33c93a-675e-48ab-8fd5-0913b1b45cbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.06521396s
STEP: Saw pod success
Feb 12 14:17:14.414: INFO: Pod "client-envvars-3c33c93a-675e-48ab-8fd5-0913b1b45cbc" satisfied condition "success or failure"
Feb 12 14:17:14.418: INFO: Trying to get logs from node iruya-node pod client-envvars-3c33c93a-675e-48ab-8fd5-0913b1b45cbc container env3cont: 
STEP: delete the pod
Feb 12 14:17:14.686: INFO: Waiting for pod client-envvars-3c33c93a-675e-48ab-8fd5-0913b1b45cbc to disappear
Feb 12 14:17:14.693: INFO: Pod client-envvars-3c33c93a-675e-48ab-8fd5-0913b1b45cbc no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:17:14.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5717" for this suite.
Feb 12 14:17:56.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:17:56.918: INFO: namespace pods-5717 deletion completed in 42.219079453s

• [SLOW TEST:62.790 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
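The env-vars test above checks that a pod started after a service sees that service's documented environment variables: `{NAME}_SERVICE_HOST`, `{NAME}_SERVICE_PORT`, plus the Docker-link-style `{NAME}_PORT_*` family. A sketch of what gets injected (the helper is mine; the variable-name shapes follow the Kubernetes service docs, with `-` mapped to `_` and the name upper-cased):

```python
def service_env_vars(name, cluster_ip, port, protocol="TCP"):
    """Build the env vars kubelet injects for an existing service."""
    n = name.upper().replace("-", "_")
    link = f"{protocol.lower()}://{cluster_ip}:{port}"
    return {
        f"{n}_SERVICE_HOST": cluster_ip,
        f"{n}_SERVICE_PORT": str(port),
        f"{n}_PORT": link,
        f"{n}_PORT_{port}_{protocol}": link,
        f"{n}_PORT_{port}_{protocol}_PROTO": protocol.lower(),
        f"{n}_PORT_{port}_{protocol}_ADDR": cluster_ip,
        f"{n}_PORT_{port}_{protocol}_PORT": str(port),
    }
```

Note these are only set for services that already exist when the pod starts, which is why the e2e test creates the service first.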
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:17:56.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 12 14:18:07.568: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:18:07.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8988" for this suite.
Feb 12 14:18:13.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:18:13.798: INFO: namespace container-runtime-8988 deletion completed in 6.191562157s

• [SLOW TEST:16.880 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
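The termination-message test above ends with `Expected: &{DONE} to match Container's Termination Message: DONE`: with `TerminationMessagePolicy: FallbackToLogsOnError`, a failed container whose termination-message file is empty gets the tail of its log as the message instead. A simplified sketch of that decision (the function and the exact byte cap are assumptions, not kubelet's real code; kubelet enforces its own size limit):

```python
MAX_MSG_BYTES = 4096  # placeholder cap; kubelet's actual limit differs

def termination_message(msg_file_contents, logs, policy, exit_code):
    """Prefer the terminationMessagePath file; if it is empty, the
    container failed, and the policy allows it, fall back to log tail."""
    if msg_file_contents:
        return msg_file_contents[-MAX_MSG_BYTES:]
    if policy == "FallbackToLogsOnError" and exit_code != 0:
        return logs[-MAX_MSG_BYTES:]
    return ""
```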
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:18:13.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 12 14:18:13.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-5779'
Feb 12 14:18:14.028: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 12 14:18:14.028: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Feb 12 14:18:18.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-5779'
Feb 12 14:18:18.233: INFO: stderr: ""
Feb 12 14:18:18.233: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:18:18.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5779" for this suite.
Feb 12 14:18:42.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:18:42.419: INFO: namespace kubectl-5779 deletion completed in 24.176612494s

• [SLOW TEST:28.620 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
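Throughout the log the framework prints lines like `Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run ...'` before shelling out. A sketch of how such an argv and its logged form can be assembled (helper name and flag ordering mirror the log, not a guaranteed framework API):

```python
import shlex

def kubectl_cmd(kubeconfig, *args):
    """Build the kubectl argv plus the single-quoted-safe string the
    e2e framework logs before executing it."""
    argv = ["/usr/local/bin/kubectl", f"--kubeconfig={kubeconfig}", *args]
    return argv, " ".join(shlex.quote(a) for a in argv)
```

Note the deprecation warning in the log itself: `--generator=deployment/apps.v1` was already slated for removal in favor of `kubectl create deployment` at this Kubernetes version (v1.15).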
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:18:42.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 12 14:18:42.533: INFO: Waiting up to 5m0s for pod "downwardapi-volume-18057436-5a34-44d7-bbca-0e43a8aca02c" in namespace "projected-2298" to be "success or failure"
Feb 12 14:18:42.559: INFO: Pod "downwardapi-volume-18057436-5a34-44d7-bbca-0e43a8aca02c": Phase="Pending", Reason="", readiness=false. Elapsed: 26.04414ms
Feb 12 14:18:44.566: INFO: Pod "downwardapi-volume-18057436-5a34-44d7-bbca-0e43a8aca02c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032232875s
Feb 12 14:18:46.587: INFO: Pod "downwardapi-volume-18057436-5a34-44d7-bbca-0e43a8aca02c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053239267s
Feb 12 14:18:48.600: INFO: Pod "downwardapi-volume-18057436-5a34-44d7-bbca-0e43a8aca02c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066938149s
Feb 12 14:18:50.618: INFO: Pod "downwardapi-volume-18057436-5a34-44d7-bbca-0e43a8aca02c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084882992s
Feb 12 14:18:52.628: INFO: Pod "downwardapi-volume-18057436-5a34-44d7-bbca-0e43a8aca02c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.09439362s
STEP: Saw pod success
Feb 12 14:18:52.628: INFO: Pod "downwardapi-volume-18057436-5a34-44d7-bbca-0e43a8aca02c" satisfied condition "success or failure"
Feb 12 14:18:52.631: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-18057436-5a34-44d7-bbca-0e43a8aca02c container client-container: 
STEP: delete the pod
Feb 12 14:18:52.857: INFO: Waiting for pod downwardapi-volume-18057436-5a34-44d7-bbca-0e43a8aca02c to disappear
Feb 12 14:18:52.925: INFO: Pod downwardapi-volume-18057436-5a34-44d7-bbca-0e43a8aca02c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:18:52.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2298" for this suite.
Feb 12 14:18:58.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:18:59.162: INFO: namespace projected-2298 deletion completed in 6.225286952s

• [SLOW TEST:16.742 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
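The projected downward-API test above verifies that a per-item `mode` is honored on the volume file. The rule it exercises: an item's `mode` overrides the volume's `defaultMode` (0644 if unset). A sketch of that resolution and of rendering the resulting permission string (helper names are mine):

```python
import stat

def effective_mode(item_mode=None, default_mode=0o644):
    """Per-item mode wins over the volume defaultMode when set."""
    return item_mode if item_mode is not None else default_mode

def mode_string(mode):
    """Render the mode as ls-style permissions for a regular file."""
    return stat.filemode(stat.S_IFREG | mode)
```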
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:18:59.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 12 14:18:59.305: INFO: Creating deployment "nginx-deployment"
Feb 12 14:18:59.313: INFO: Waiting for observed generation 1
Feb 12 14:19:02.265: INFO: Waiting for all required pods to come up
Feb 12 14:19:03.392: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb 12 14:19:30.601: INFO: Waiting for deployment "nginx-deployment" to complete
Feb 12 14:19:30.610: INFO: Updating deployment "nginx-deployment" with a non-existent image
Feb 12 14:19:30.625: INFO: Updating deployment nginx-deployment
Feb 12 14:19:30.625: INFO: Waiting for observed generation 2
Feb 12 14:19:33.390: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb 12 14:19:34.222: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb 12 14:19:34.641: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 12 14:19:34.691: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb 12 14:19:34.692: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb 12 14:19:34.694: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 12 14:19:34.698: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Feb 12 14:19:34.698: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Feb 12 14:19:34.705: INFO: Updating deployment nginx-deployment
Feb 12 14:19:34.705: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Feb 12 14:19:36.179: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb 12 14:19:39.079: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
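The two verifications above (first rollout's ReplicaSet scaled to 20, second to 13) are the observable result of proportional scaling: scaling the Deployment from 10 to 30 mid-rollout distributes the surge-allowed total (30 + maxSurge 3 = 33) across both ReplicaSets in proportion to their current sizes (8 and 5). A deliberately simplified sketch of that arithmetic, not the deployment controller's actual code, assuming ReplicaSets are passed largest-first:

```python
def proportional_new_sizes(replicasets, desired, max_surge):
    """Distribute desired + max_surge across ReplicaSets in proportion
    to their current replica counts, rounding each share to the nearest
    integer and pushing any remainder onto the first (largest) RS."""
    allowed = desired + max_surge
    total = sum(replicasets)
    sizes = [round(r * allowed / total) for r in replicasets]
    sizes[0] += allowed - sum(sizes)  # remainder correction
    return sizes
```

With this run's numbers, `proportional_new_sizes([8, 5], 30, 3)` reproduces the 20/13 split logged above.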
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 12 14:19:41.780: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-9200,SelfLink:/apis/apps/v1/namespaces/deployment-9200/deployments/nginx-deployment,UID:0db395ae-90e7-490f-87ac-8e9e2d07018b,ResourceVersion:24081557,Generation:3,CreationTimestamp:2020-02-12 14:18:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-02-12 14:19:36 +0000 UTC 2020-02-12 14:19:36 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-12 14:19:39 +0000 UTC 2020-02-12 14:18:59 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Feb 12 14:19:42.849: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-9200,SelfLink:/apis/apps/v1/namespaces/deployment-9200/replicasets/nginx-deployment-55fb7cb77f,UID:41be85e0-e21f-4ef6-b7b8-4d37d28363f8,ResourceVersion:24081551,Generation:3,CreationTimestamp:2020-02-12 14:19:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 0db395ae-90e7-490f-87ac-8e9e2d07018b 0xc002f82937 0xc002f82938}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 12 14:19:42.849: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Feb 12 14:19:42.849: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-9200,SelfLink:/apis/apps/v1/namespaces/deployment-9200/replicasets/nginx-deployment-7b8c6f4498,UID:8cef5aa9-87e9-457b-a90a-fec74cc48b68,ResourceVersion:24081549,Generation:3,CreationTimestamp:2020-02-12 14:18:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 0db395ae-90e7-490f-87ac-8e9e2d07018b 0xc002f82a07 0xc002f82a08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Feb 12 14:19:43.662: INFO: Pod "nginx-deployment-55fb7cb77f-4wv6c" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4wv6c,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9200,SelfLink:/api/v1/namespaces/deployment-9200/pods/nginx-deployment-55fb7cb77f-4wv6c,UID:1dd5dee5-79b0-4e6e-ba12-41e7c9229bc4,ResourceVersion:24081461,Generation:0,CreationTimestamp:2020-02-12 14:19:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 41be85e0-e21f-4ef6-b7b8-4d37d28363f8 0xc002c72e77 0xc002c72e78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ww8x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ww8x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ww8x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002c72ee0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c72f00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-12 14:19:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 14:19:43.662: INFO: Pod "nginx-deployment-55fb7cb77f-775wx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-775wx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9200,SelfLink:/api/v1/namespaces/deployment-9200/pods/nginx-deployment-55fb7cb77f-775wx,UID:e0127259-df03-4300-ae5c-57d5c9f225f3,ResourceVersion:24081539,Generation:0,CreationTimestamp:2020-02-12 14:19:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 41be85e0-e21f-4ef6-b7b8-4d37d28363f8 0xc002c72fd7 0xc002c72fd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ww8x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ww8x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ww8x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002c73040} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c73060}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 14:19:43.663: INFO: Pod "nginx-deployment-55fb7cb77f-bfbzt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bfbzt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9200,SelfLink:/api/v1/namespaces/deployment-9200/pods/nginx-deployment-55fb7cb77f-bfbzt,UID:2364c180-c668-4f9d-a444-81ac153a7e4b,ResourceVersion:24081484,Generation:0,CreationTimestamp:2020-02-12 14:19:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 41be85e0-e21f-4ef6-b7b8-4d37d28363f8 0xc002c730e7 0xc002c730e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ww8x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ww8x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ww8x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002c73150} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c73170}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:31 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-12 14:19:33 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 14:19:43.663: INFO: Pod "nginx-deployment-55fb7cb77f-c6v4c" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-c6v4c,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9200,SelfLink:/api/v1/namespaces/deployment-9200/pods/nginx-deployment-55fb7cb77f-c6v4c,UID:48f966c6-b9e5-4d2c-8e4c-581a500005c2,ResourceVersion:24081540,Generation:0,CreationTimestamp:2020-02-12 14:19:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 41be85e0-e21f-4ef6-b7b8-4d37d28363f8 0xc002c73247 0xc002c73248}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ww8x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ww8x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ww8x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002c732c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c732e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 14:19:43.663: INFO: Pod "nginx-deployment-55fb7cb77f-cbpvg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cbpvg,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9200,SelfLink:/api/v1/namespaces/deployment-9200/pods/nginx-deployment-55fb7cb77f-cbpvg,UID:af782bfd-8f28-4b53-8864-871e4a2d728d,ResourceVersion:24081471,Generation:0,CreationTimestamp:2020-02-12 14:19:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 41be85e0-e21f-4ef6-b7b8-4d37d28363f8 0xc002c73367 0xc002c73368}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ww8x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ww8x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ww8x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002c733e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c73400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-12 14:19:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
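Every pod dumped above reports `QOSClass:BestEffort`, which follows from its spec: no container sets any CPU or memory requests or limits. A minimal sketch of how Kubernetes derives the QoS class from container resources (simplified local dicts, not the real `k8s.io/api` types; only cpu/memory are considered, as in the kubelet):

```python
def qos_class(containers):
    """Classify a pod's QoS the way Kubernetes does, from container resources.

    Each container is a dict like {"requests": {"cpu": "100m"}, "limits": {...}}.
    Rules: BestEffort if no container sets any cpu/memory request or limit;
    Guaranteed if every container has cpu and memory limits, with requests
    either unset (defaulted to limits) or equal to limits; else Burstable.
    """
    resources = ("cpu", "memory")
    all_guaranteed = True
    any_set = False
    for c in containers:
        requests = c.get("requests", {})
        limits = c.get("limits", {})
        if requests or limits:
            any_set = True
        for r in resources:
            if r not in limits:
                # Guaranteed needs a limit for every resource on every container.
                all_guaranteed = False
            elif r in requests and requests[r] != limits[r]:
                # An explicit request that differs from the limit breaks Guaranteed.
                all_guaranteed = False
    if not any_set:
        return "BestEffort"
    return "Guaranteed" if all_guaranteed else "Burstable"
```

The nginx containers in this run set neither requests nor limits, so the sketch would return `"BestEffort"` for them.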
Feb 12 14:19:43.663: INFO: Pod "nginx-deployment-55fb7cb77f-cmf7t" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cmf7t,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9200,SelfLink:/api/v1/namespaces/deployment-9200/pods/nginx-deployment-55fb7cb77f-cmf7t,UID:4a1ebbc4-2082-440c-bb6f-fd5961b07ff8,ResourceVersion:24081559,Generation:0,CreationTimestamp:2020-02-12 14:19:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 41be85e0-e21f-4ef6-b7b8-4d37d28363f8 0xc002c734d7 0xc002c734d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ww8x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ww8x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ww8x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002c73540} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c73560}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:36 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-12 14:19:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 14:19:43.664: INFO: Pod "nginx-deployment-55fb7cb77f-jwc6b" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jwc6b,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9200,SelfLink:/api/v1/namespaces/deployment-9200/pods/nginx-deployment-55fb7cb77f-jwc6b,UID:0bf61bef-692b-49d6-90e1-13dfe216312f,ResourceVersion:24081547,Generation:0,CreationTimestamp:2020-02-12 14:19:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 41be85e0-e21f-4ef6-b7b8-4d37d28363f8 0xc002c73637 0xc002c73638}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ww8x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ww8x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ww8x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002c736a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c736c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:36 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-12 14:19:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 14:19:43.664: INFO: Pod "nginx-deployment-55fb7cb77f-lq2jj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lq2jj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9200,SelfLink:/api/v1/namespaces/deployment-9200/pods/nginx-deployment-55fb7cb77f-lq2jj,UID:fe8eae91-c9f3-4bde-aa71-3c8877e5a4eb,ResourceVersion:24081541,Generation:0,CreationTimestamp:2020-02-12 14:19:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 41be85e0-e21f-4ef6-b7b8-4d37d28363f8 0xc002c73797 0xc002c73798}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ww8x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ww8x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ww8x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002c73810} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c73830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 14:19:43.664: INFO: Pod "nginx-deployment-55fb7cb77f-mh6kq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mh6kq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9200,SelfLink:/api/v1/namespaces/deployment-9200/pods/nginx-deployment-55fb7cb77f-mh6kq,UID:96ca597b-22da-49e5-be2e-b2733c75ca48,ResourceVersion:24081535,Generation:0,CreationTimestamp:2020-02-12 14:19:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 41be85e0-e21f-4ef6-b7b8-4d37d28363f8 0xc002c738b7 0xc002c738b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ww8x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ww8x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ww8x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002c73920} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c73940}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 14:19:43.664: INFO: Pod "nginx-deployment-55fb7cb77f-qh2t9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-qh2t9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9200,SelfLink:/api/v1/namespaces/deployment-9200/pods/nginx-deployment-55fb7cb77f-qh2t9,UID:15fce433-0244-4477-9c88-a514673cdfe8,ResourceVersion:24081510,Generation:0,CreationTimestamp:2020-02-12 14:19:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 41be85e0-e21f-4ef6-b7b8-4d37d28363f8 0xc002c739c7 0xc002c739c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ww8x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ww8x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ww8x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002c73a40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c73a60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 14:19:43.664: INFO: Pod "nginx-deployment-55fb7cb77f-xj92s" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xj92s,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9200,SelfLink:/api/v1/namespaces/deployment-9200/pods/nginx-deployment-55fb7cb77f-xj92s,UID:4ab5bc4e-dd27-4ec7-9aed-443e23b8c313,ResourceVersion:24081464,Generation:0,CreationTimestamp:2020-02-12 14:19:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 41be85e0-e21f-4ef6-b7b8-4d37d28363f8 0xc002c73ae7 0xc002c73ae8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ww8x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ww8x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ww8x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002c73b60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c73b80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-12 14:19:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 14:19:43.664: INFO: Pod "nginx-deployment-55fb7cb77f-zhzlg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zhzlg,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9200,SelfLink:/api/v1/namespaces/deployment-9200/pods/nginx-deployment-55fb7cb77f-zhzlg,UID:9926d21b-5213-409b-9e40-555d6809e21a,ResourceVersion:24081487,Generation:0,CreationTimestamp:2020-02-12 14:19:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 41be85e0-e21f-4ef6-b7b8-4d37d28363f8 0xc002c73c57 0xc002c73c58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ww8x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ww8x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ww8x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002c73cd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c73cf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:31 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-12 14:19:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 14:19:43.664: INFO: Pod "nginx-deployment-55fb7cb77f-zpll8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zpll8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9200,SelfLink:/api/v1/namespaces/deployment-9200/pods/nginx-deployment-55fb7cb77f-zpll8,UID:8ced5ccc-974c-478c-8484-cf586857f161,ResourceVersion:24081533,Generation:0,CreationTimestamp:2020-02-12 14:19:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 41be85e0-e21f-4ef6-b7b8-4d37d28363f8 0xc002c73dc7 0xc002c73dc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ww8x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ww8x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ww8x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002c73e40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c73e60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 14:19:43.665: INFO: Pod "nginx-deployment-7b8c6f4498-4t5jt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4t5jt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9200,SelfLink:/api/v1/namespaces/deployment-9200/pods/nginx-deployment-7b8c6f4498-4t5jt,UID:42fd1c67-dac5-4917-a097-15715fb748d4,ResourceVersion:24081538,Generation:0,CreationTimestamp:2020-02-12 14:19:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8cef5aa9-87e9-457b-a90a-fec74cc48b68 0xc002c73ee7 0xc002c73ee8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ww8x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ww8x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ww8x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002c73f50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c73f70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 14:19:43.665: INFO: Pod "nginx-deployment-7b8c6f4498-5jhrr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5jhrr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9200,SelfLink:/api/v1/namespaces/deployment-9200/pods/nginx-deployment-7b8c6f4498-5jhrr,UID:84961fd9-027a-44e1-9ce7-0fd89728575e,ResourceVersion:24081563,Generation:0,CreationTimestamp:2020-02-12 14:19:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8cef5aa9-87e9-457b-a90a-fec74cc48b68 0xc002c73ff7 0xc002c73ff8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ww8x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ww8x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ww8x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d8e070} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d8e090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:36 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-12 14:19:39 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 14:19:43.665: INFO: Pod "nginx-deployment-7b8c6f4498-6v8qm" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6v8qm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9200,SelfLink:/api/v1/namespaces/deployment-9200/pods/nginx-deployment-7b8c6f4498-6v8qm,UID:9650e6c5-ff88-4826-a0fd-2c2f94f560d4,ResourceVersion:24081425,Generation:0,CreationTimestamp:2020-02-12 14:18:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8cef5aa9-87e9-457b-a90a-fec74cc48b68 0xc002d8e157 0xc002d8e158}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ww8x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ww8x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ww8x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d8e1d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d8e1f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:18:59 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:18:59 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-12 14:18:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-12 14:19:28 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://77f80670fff4d7cc4db7725e6039855178dc54eec520d2e5023adb7e55e0d1d1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 14:19:43.665: INFO: Pod "nginx-deployment-7b8c6f4498-6wqr6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6wqr6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9200,SelfLink:/api/v1/namespaces/deployment-9200/pods/nginx-deployment-7b8c6f4498-6wqr6,UID:5c115aae-eb6f-4716-92a7-349becbce0da,ResourceVersion:24081398,Generation:0,CreationTimestamp:2020-02-12 14:18:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8cef5aa9-87e9-457b-a90a-fec74cc48b68 0xc002d8e2c7 0xc002d8e2c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ww8x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ww8x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ww8x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d8e330} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d8e350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:00 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:27 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:18:59 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-02-12 14:19:00 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-12 14:19:27 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://3d160afb2aeaa95044459fea1edb59f0c3045ac1d73c04d5279f0b4c437efe11}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 14:19:43.665: INFO: Pod "nginx-deployment-7b8c6f4498-c6tb4" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-c6tb4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9200,SelfLink:/api/v1/namespaces/deployment-9200/pods/nginx-deployment-7b8c6f4498-c6tb4,UID:9dc2eec1-2736-4c53-b2a4-675a71297ae8,ResourceVersion:24081394,Generation:0,CreationTimestamp:2020-02-12 14:18:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8cef5aa9-87e9-457b-a90a-fec74cc48b68 0xc002d8e427 0xc002d8e428}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ww8x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ww8x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ww8x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d8e490} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d8e4b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:18:59 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:27 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:18:59 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-02-12 14:18:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-12 14:19:24 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://3634623941b20b24a30681d6edfc409416e6982937d5a32e9e9829c3c7a459fd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 14:19:43.665: INFO: Pod "nginx-deployment-7b8c6f4498-cgmnk" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cgmnk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9200,SelfLink:/api/v1/namespaces/deployment-9200/pods/nginx-deployment-7b8c6f4498-cgmnk,UID:a09d99bc-3387-4273-b96b-36e9beede38e,ResourceVersion:24081428,Generation:0,CreationTimestamp:2020-02-12 14:18:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8cef5aa9-87e9-457b-a90a-fec74cc48b68 0xc002d8e587 0xc002d8e588}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ww8x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ww8x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ww8x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d8e600} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d8e620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:18:59 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:18:59 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-12 14:18:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-12 14:19:25 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://91904a5320bf062317b5e34b6f3c4e95138c4e523a54687367da8b5f08a4e098}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 14:19:43.665: INFO: Pod "nginx-deployment-7b8c6f4498-dsnjc" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dsnjc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9200,SelfLink:/api/v1/namespaces/deployment-9200/pods/nginx-deployment-7b8c6f4498-dsnjc,UID:8918ea3e-b0dc-4496-8058-2993b60c3829,ResourceVersion:24081407,Generation:0,CreationTimestamp:2020-02-12 14:18:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8cef5aa9-87e9-457b-a90a-fec74cc48b68 0xc002d8e6f7 0xc002d8e6f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ww8x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ww8x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ww8x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d8e760} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d8e780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:18:59 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:28 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:28 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:18:59 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-02-12 14:18:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-12 14:19:27 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0df61dbdf12cc2a62742efdaf36d358f51b51b8c0116111aa7406a3690e07bbd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 14:19:43.666: INFO: Pod "nginx-deployment-7b8c6f4498-gf7js" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gf7js,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9200,SelfLink:/api/v1/namespaces/deployment-9200/pods/nginx-deployment-7b8c6f4498-gf7js,UID:344a7834-81ef-43c3-a43a-2c1bef1a231f,ResourceVersion:24081518,Generation:0,CreationTimestamp:2020-02-12 14:19:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8cef5aa9-87e9-457b-a90a-fec74cc48b68 0xc002d8e857 0xc002d8e858}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ww8x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ww8x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ww8x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d8e8c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d8e8e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 14:19:43.666: INFO: Pod "nginx-deployment-7b8c6f4498-gkbtb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gkbtb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9200,SelfLink:/api/v1/namespaces/deployment-9200/pods/nginx-deployment-7b8c6f4498-gkbtb,UID:602e9e6a-e022-413b-97d1-7d682c93d99a,ResourceVersion:24081515,Generation:0,CreationTimestamp:2020-02-12 14:19:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8cef5aa9-87e9-457b-a90a-fec74cc48b68 0xc002d8e967 0xc002d8e968}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ww8x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ww8x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ww8x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d8e9e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d8ea00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 14:19:43.666: INFO: Pod "nginx-deployment-7b8c6f4498-jw2zv" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jw2zv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9200,SelfLink:/api/v1/namespaces/deployment-9200/pods/nginx-deployment-7b8c6f4498-jw2zv,UID:5075b382-b9b3-4025-9ae3-6fb3305e6cc8,ResourceVersion:24081410,Generation:0,CreationTimestamp:2020-02-12 14:18:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8cef5aa9-87e9-457b-a90a-fec74cc48b68 0xc002d8ea87 0xc002d8ea88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ww8x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ww8x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ww8x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d8eb00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d8eb20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:18:59 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:28 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:28 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:18:59 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-02-12 14:18:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-12 14:19:28 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://916ddc198971f43862c0df2b995e2043efaadee4456a60283ac966d4f933833b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 14:19:43.666: INFO: Pod "nginx-deployment-7b8c6f4498-k6f95" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-k6f95,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9200,SelfLink:/api/v1/namespaces/deployment-9200/pods/nginx-deployment-7b8c6f4498-k6f95,UID:26591a46-43e7-4d1e-b3eb-e35c58b9f2f0,ResourceVersion:24081516,Generation:0,CreationTimestamp:2020-02-12 14:19:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8cef5aa9-87e9-457b-a90a-fec74cc48b68 0xc002d8ebf7 0xc002d8ebf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ww8x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ww8x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ww8x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d8ec70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d8ec90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 14:19:43.667: INFO: Pod "nginx-deployment-7b8c6f4498-kzp79" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kzp79,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9200,SelfLink:/api/v1/namespaces/deployment-9200/pods/nginx-deployment-7b8c6f4498-kzp79,UID:4fa8aab5-4501-44d4-adc3-a04d4cd288ba,ResourceVersion:24081531,Generation:0,CreationTimestamp:2020-02-12 14:19:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8cef5aa9-87e9-457b-a90a-fec74cc48b68 0xc002d8ed17 0xc002d8ed18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ww8x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ww8x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ww8x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d8ed90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d8edb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 14:19:43.667: INFO: Pod "nginx-deployment-7b8c6f4498-pd99b" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pd99b,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9200,SelfLink:/api/v1/namespaces/deployment-9200/pods/nginx-deployment-7b8c6f4498-pd99b,UID:1efd4c38-2b49-479c-adc8-1faa3c4ea2e2,ResourceVersion:24081537,Generation:0,CreationTimestamp:2020-02-12 14:19:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8cef5aa9-87e9-457b-a90a-fec74cc48b68 0xc002d8ee37 0xc002d8ee38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ww8x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ww8x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ww8x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d8eeb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d8eed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 14:19:43.667: INFO: Pod "nginx-deployment-7b8c6f4498-qsqgm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qsqgm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9200,SelfLink:/api/v1/namespaces/deployment-9200/pods/nginx-deployment-7b8c6f4498-qsqgm,UID:b5c2ff1e-a130-4ef6-b1a3-43b827d2cbf1,ResourceVersion:24081517,Generation:0,CreationTimestamp:2020-02-12 14:19:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8cef5aa9-87e9-457b-a90a-fec74cc48b68 0xc002d8ef57 0xc002d8ef58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ww8x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ww8x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ww8x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d8efc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d8efe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 14:19:43.667: INFO: Pod "nginx-deployment-7b8c6f4498-r9m77" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-r9m77,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9200,SelfLink:/api/v1/namespaces/deployment-9200/pods/nginx-deployment-7b8c6f4498-r9m77,UID:1e827845-4b47-42b5-96ba-d1776f44485b,ResourceVersion:24081509,Generation:0,CreationTimestamp:2020-02-12 14:19:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8cef5aa9-87e9-457b-a90a-fec74cc48b68 0xc002d8f067 0xc002d8f068}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ww8x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ww8x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ww8x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d8f0e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d8f100}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 14:19:43.667: INFO: Pod "nginx-deployment-7b8c6f4498-rfwph" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rfwph,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9200,SelfLink:/api/v1/namespaces/deployment-9200/pods/nginx-deployment-7b8c6f4498-rfwph,UID:0f4c32be-29d4-48a4-8ba6-2f0ffa6e81e2,ResourceVersion:24081536,Generation:0,CreationTimestamp:2020-02-12 14:19:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8cef5aa9-87e9-457b-a90a-fec74cc48b68 0xc002d8f187 0xc002d8f188}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ww8x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ww8x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ww8x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d8f200} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d8f220}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 14:19:43.667: INFO: Pod "nginx-deployment-7b8c6f4498-sstfv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sstfv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9200,SelfLink:/api/v1/namespaces/deployment-9200/pods/nginx-deployment-7b8c6f4498-sstfv,UID:e2097b21-5ad8-4e82-bc7e-809ffce96b8b,ResourceVersion:24081571,Generation:0,CreationTimestamp:2020-02-12 14:19:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8cef5aa9-87e9-457b-a90a-fec74cc48b68 0xc002d8f2a7 0xc002d8f2a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ww8x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ww8x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ww8x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d8f320} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d8f340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:36 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-12 14:19:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 14:19:43.668: INFO: Pod "nginx-deployment-7b8c6f4498-tnmxp" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tnmxp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9200,SelfLink:/api/v1/namespaces/deployment-9200/pods/nginx-deployment-7b8c6f4498-tnmxp,UID:bfe0d8f9-b2e2-4386-a2bb-1ccebda7c23a,ResourceVersion:24081412,Generation:0,CreationTimestamp:2020-02-12 14:18:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8cef5aa9-87e9-457b-a90a-fec74cc48b68 0xc002d8f407 0xc002d8f408}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ww8x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ww8x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ww8x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d8f470} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d8f490}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:18:59 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:28 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:28 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:18:59 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2020-02-12 14:18:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-12 14:19:26 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://bbefb69a7bfd23d0b63b897f5515bb6d528adf1adff154481ef2520259ac039b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 14:19:43.668: INFO: Pod "nginx-deployment-7b8c6f4498-x6z8c" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-x6z8c,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9200,SelfLink:/api/v1/namespaces/deployment-9200/pods/nginx-deployment-7b8c6f4498-x6z8c,UID:74c0b153-d322-4f79-a843-0cda347c74d0,ResourceVersion:24081532,Generation:0,CreationTimestamp:2020-02-12 14:19:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8cef5aa9-87e9-457b-a90a-fec74cc48b68 0xc002d8f567 0xc002d8f568}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ww8x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ww8x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ww8x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d8f5d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d8f5f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 14:19:43.668: INFO: Pod "nginx-deployment-7b8c6f4498-zt67t" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zt67t,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9200,SelfLink:/api/v1/namespaces/deployment-9200/pods/nginx-deployment-7b8c6f4498-zt67t,UID:a4ee4e4e-b3d5-498d-a327-4c93e7ebc4ca,ResourceVersion:24081415,Generation:0,CreationTimestamp:2020-02-12 14:18:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8cef5aa9-87e9-457b-a90a-fec74cc48b68 0xc002d8f677 0xc002d8f678}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ww8x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ww8x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ww8x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d8f6e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d8f700}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:18:59 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:28 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:19:28 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:18:59 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.8,StartTime:2020-02-12 14:18:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-12 14:19:27 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2f1ea71fb4fee7b7c0bd56d9179e6af9a2e89e0dfc01f578eaec9a4483092796}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:19:43.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9200" for this suite.
Feb 12 14:20:39.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:20:40.080: INFO: namespace deployment-9200 deletion completed in 56.040974591s

• [SLOW TEST:100.919 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:20:40.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 12 14:20:40.231: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e9f18165-e09e-41ce-941e-b5af99482e6d" in namespace "projected-3080" to be "success or failure"
Feb 12 14:20:40.244: INFO: Pod "downwardapi-volume-e9f18165-e09e-41ce-941e-b5af99482e6d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.986581ms
Feb 12 14:20:42.250: INFO: Pod "downwardapi-volume-e9f18165-e09e-41ce-941e-b5af99482e6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019305774s
Feb 12 14:20:44.266: INFO: Pod "downwardapi-volume-e9f18165-e09e-41ce-941e-b5af99482e6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034739608s
Feb 12 14:20:46.278: INFO: Pod "downwardapi-volume-e9f18165-e09e-41ce-941e-b5af99482e6d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046815563s
Feb 12 14:20:48.286: INFO: Pod "downwardapi-volume-e9f18165-e09e-41ce-941e-b5af99482e6d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055537628s
Feb 12 14:20:50.314: INFO: Pod "downwardapi-volume-e9f18165-e09e-41ce-941e-b5af99482e6d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.083390282s
Feb 12 14:20:52.382: INFO: Pod "downwardapi-volume-e9f18165-e09e-41ce-941e-b5af99482e6d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.151091624s
Feb 12 14:20:54.397: INFO: Pod "downwardapi-volume-e9f18165-e09e-41ce-941e-b5af99482e6d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.165820338s
Feb 12 14:20:56.414: INFO: Pod "downwardapi-volume-e9f18165-e09e-41ce-941e-b5af99482e6d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.183054813s
Feb 12 14:20:58.561: INFO: Pod "downwardapi-volume-e9f18165-e09e-41ce-941e-b5af99482e6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.330149449s
STEP: Saw pod success
Feb 12 14:20:58.561: INFO: Pod "downwardapi-volume-e9f18165-e09e-41ce-941e-b5af99482e6d" satisfied condition "success or failure"
Feb 12 14:20:58.568: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e9f18165-e09e-41ce-941e-b5af99482e6d container client-container: 
STEP: delete the pod
Feb 12 14:20:58.725: INFO: Waiting for pod downwardapi-volume-e9f18165-e09e-41ce-941e-b5af99482e6d to disappear
Feb 12 14:20:58.732: INFO: Pod downwardapi-volume-e9f18165-e09e-41ce-941e-b5af99482e6d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:20:58.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3080" for this suite.
Feb 12 14:21:04.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:21:04.862: INFO: namespace projected-3080 deletion completed in 6.124054645s

• [SLOW TEST:24.781 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:21:04.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb 12 14:21:05.004: INFO: Pod name pod-release: Found 0 pods out of 1
Feb 12 14:21:10.018: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:21:11.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2552" for this suite.
Feb 12 14:21:17.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:21:17.335: INFO: namespace replication-controller-2552 deletion completed in 6.249098366s

• [SLOW TEST:12.473 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:21:17.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Feb 12 14:21:17.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2673'
Feb 12 14:21:20.734: INFO: stderr: ""
Feb 12 14:21:20.734: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 12 14:21:20.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2673'
Feb 12 14:21:20.876: INFO: stderr: ""
Feb 12 14:21:20.876: INFO: stdout: "update-demo-nautilus-jw97k update-demo-nautilus-xflll "
Feb 12 14:21:20.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jw97k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2673'
Feb 12 14:21:21.122: INFO: stderr: ""
Feb 12 14:21:21.122: INFO: stdout: ""
Feb 12 14:21:21.122: INFO: update-demo-nautilus-jw97k is created but not running
Feb 12 14:21:26.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2673'
Feb 12 14:21:26.302: INFO: stderr: ""
Feb 12 14:21:26.302: INFO: stdout: "update-demo-nautilus-jw97k update-demo-nautilus-xflll "
Feb 12 14:21:26.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jw97k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2673'
Feb 12 14:21:26.538: INFO: stderr: ""
Feb 12 14:21:26.538: INFO: stdout: ""
Feb 12 14:21:26.538: INFO: update-demo-nautilus-jw97k is created but not running
Feb 12 14:21:31.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2673'
Feb 12 14:21:31.671: INFO: stderr: ""
Feb 12 14:21:31.671: INFO: stdout: "update-demo-nautilus-jw97k update-demo-nautilus-xflll "
Feb 12 14:21:31.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jw97k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2673'
Feb 12 14:21:31.761: INFO: stderr: ""
Feb 12 14:21:31.761: INFO: stdout: "true"
Feb 12 14:21:31.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jw97k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2673'
Feb 12 14:21:31.919: INFO: stderr: ""
Feb 12 14:21:31.919: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 12 14:21:31.919: INFO: validating pod update-demo-nautilus-jw97k
Feb 12 14:21:31.926: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 12 14:21:31.926: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 12 14:21:31.926: INFO: update-demo-nautilus-jw97k is verified up and running
Feb 12 14:21:31.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xflll -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2673'
Feb 12 14:21:32.012: INFO: stderr: ""
Feb 12 14:21:32.012: INFO: stdout: ""
Feb 12 14:21:32.012: INFO: update-demo-nautilus-xflll is created but not running
Feb 12 14:21:37.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2673'
Feb 12 14:21:37.126: INFO: stderr: ""
Feb 12 14:21:37.126: INFO: stdout: "update-demo-nautilus-jw97k update-demo-nautilus-xflll "
Feb 12 14:21:37.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jw97k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2673'
Feb 12 14:21:37.265: INFO: stderr: ""
Feb 12 14:21:37.265: INFO: stdout: "true"
Feb 12 14:21:37.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jw97k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2673'
Feb 12 14:21:37.377: INFO: stderr: ""
Feb 12 14:21:37.377: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 12 14:21:37.377: INFO: validating pod update-demo-nautilus-jw97k
Feb 12 14:21:37.386: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 12 14:21:37.386: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 12 14:21:37.386: INFO: update-demo-nautilus-jw97k is verified up and running
Feb 12 14:21:37.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xflll -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2673'
Feb 12 14:21:37.534: INFO: stderr: ""
Feb 12 14:21:37.534: INFO: stdout: "true"
Feb 12 14:21:37.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xflll -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2673'
Feb 12 14:21:37.677: INFO: stderr: ""
Feb 12 14:21:37.677: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 12 14:21:37.677: INFO: validating pod update-demo-nautilus-xflll
Feb 12 14:21:37.703: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 12 14:21:37.703: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 12 14:21:37.703: INFO: update-demo-nautilus-xflll is verified up and running
STEP: rolling-update to new replication controller
Feb 12 14:21:37.705: INFO: scanned /root for discovery docs: 
Feb 12 14:21:37.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-2673'
Feb 12 14:22:13.189: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 12 14:22:13.190: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 12 14:22:13.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2673'
Feb 12 14:22:13.465: INFO: stderr: ""
Feb 12 14:22:13.465: INFO: stdout: "update-demo-kitten-d5nlj update-demo-kitten-rgqtg "
Feb 12 14:22:13.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-d5nlj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2673'
Feb 12 14:22:13.698: INFO: stderr: ""
Feb 12 14:22:13.699: INFO: stdout: "true"
Feb 12 14:22:13.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-d5nlj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2673'
Feb 12 14:22:13.929: INFO: stderr: ""
Feb 12 14:22:13.930: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 12 14:22:13.930: INFO: validating pod update-demo-kitten-d5nlj
Feb 12 14:22:14.083: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 12 14:22:14.084: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 12 14:22:14.084: INFO: update-demo-kitten-d5nlj is verified up and running
Feb 12 14:22:14.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rgqtg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2673'
Feb 12 14:22:14.206: INFO: stderr: ""
Feb 12 14:22:14.206: INFO: stdout: "true"
Feb 12 14:22:14.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rgqtg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2673'
Feb 12 14:22:14.322: INFO: stderr: ""
Feb 12 14:22:14.322: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 12 14:22:14.322: INFO: validating pod update-demo-kitten-rgqtg
Feb 12 14:22:14.346: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 12 14:22:14.346: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 12 14:22:14.347: INFO: update-demo-kitten-rgqtg is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:22:14.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2673" for this suite.
Feb 12 14:22:40.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:22:40.533: INFO: namespace kubectl-2673 deletion completed in 26.182917219s

• [SLOW TEST:83.196 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:22:40.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-g4nz
STEP: Creating a pod to test atomic-volume-subpath
Feb 12 14:22:40.917: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-g4nz" in namespace "subpath-7172" to be "success or failure"
Feb 12 14:22:41.010: INFO: Pod "pod-subpath-test-projected-g4nz": Phase="Pending", Reason="", readiness=false. Elapsed: 93.113647ms
Feb 12 14:22:43.019: INFO: Pod "pod-subpath-test-projected-g4nz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10229567s
Feb 12 14:22:45.034: INFO: Pod "pod-subpath-test-projected-g4nz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116805773s
Feb 12 14:22:47.047: INFO: Pod "pod-subpath-test-projected-g4nz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129696396s
Feb 12 14:22:49.104: INFO: Pod "pod-subpath-test-projected-g4nz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.187296251s
Feb 12 14:22:51.110: INFO: Pod "pod-subpath-test-projected-g4nz": Phase="Running", Reason="", readiness=true. Elapsed: 10.192837414s
Feb 12 14:22:53.120: INFO: Pod "pod-subpath-test-projected-g4nz": Phase="Running", Reason="", readiness=true. Elapsed: 12.202525713s
Feb 12 14:22:55.129: INFO: Pod "pod-subpath-test-projected-g4nz": Phase="Running", Reason="", readiness=true. Elapsed: 14.211670414s
Feb 12 14:22:57.147: INFO: Pod "pod-subpath-test-projected-g4nz": Phase="Running", Reason="", readiness=true. Elapsed: 16.229473082s
Feb 12 14:22:59.157: INFO: Pod "pod-subpath-test-projected-g4nz": Phase="Running", Reason="", readiness=true. Elapsed: 18.239757165s
Feb 12 14:23:01.172: INFO: Pod "pod-subpath-test-projected-g4nz": Phase="Running", Reason="", readiness=true. Elapsed: 20.255273787s
Feb 12 14:23:03.177: INFO: Pod "pod-subpath-test-projected-g4nz": Phase="Running", Reason="", readiness=true. Elapsed: 22.260208088s
Feb 12 14:23:05.182: INFO: Pod "pod-subpath-test-projected-g4nz": Phase="Running", Reason="", readiness=true. Elapsed: 24.26526506s
Feb 12 14:23:07.193: INFO: Pod "pod-subpath-test-projected-g4nz": Phase="Running", Reason="", readiness=true. Elapsed: 26.27624781s
Feb 12 14:23:09.201: INFO: Pod "pod-subpath-test-projected-g4nz": Phase="Running", Reason="", readiness=true. Elapsed: 28.284240102s
Feb 12 14:23:11.210: INFO: Pod "pod-subpath-test-projected-g4nz": Phase="Running", Reason="", readiness=true. Elapsed: 30.293221883s
Feb 12 14:23:13.219: INFO: Pod "pod-subpath-test-projected-g4nz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.301471987s
STEP: Saw pod success
Feb 12 14:23:13.219: INFO: Pod "pod-subpath-test-projected-g4nz" satisfied condition "success or failure"
Feb 12 14:23:13.226: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-g4nz container test-container-subpath-projected-g4nz: 
STEP: delete the pod
Feb 12 14:23:13.693: INFO: Waiting for pod pod-subpath-test-projected-g4nz to disappear
Feb 12 14:23:13.707: INFO: Pod pod-subpath-test-projected-g4nz no longer exists
STEP: Deleting pod pod-subpath-test-projected-g4nz
Feb 12 14:23:13.707: INFO: Deleting pod "pod-subpath-test-projected-g4nz" in namespace "subpath-7172"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:23:13.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7172" for this suite.
Feb 12 14:23:19.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:23:19.985: INFO: namespace subpath-7172 deletion completed in 6.267067528s

• [SLOW TEST:39.451 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:23:19.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb 12 14:23:44.176: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-946 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 14:23:44.176: INFO: >>> kubeConfig: /root/.kube/config
I0212 14:23:44.232128       8 log.go:172] (0xc0009da8f0) (0xc00221de00) Create stream
I0212 14:23:44.232196       8 log.go:172] (0xc0009da8f0) (0xc00221de00) Stream added, broadcasting: 1
I0212 14:23:44.243317       8 log.go:172] (0xc0009da8f0) Reply frame received for 1
I0212 14:23:44.243343       8 log.go:172] (0xc0009da8f0) (0xc0011d4fa0) Create stream
I0212 14:23:44.243349       8 log.go:172] (0xc0009da8f0) (0xc0011d4fa0) Stream added, broadcasting: 3
I0212 14:23:44.244570       8 log.go:172] (0xc0009da8f0) Reply frame received for 3
I0212 14:23:44.244587       8 log.go:172] (0xc0009da8f0) (0xc00240d540) Create stream
I0212 14:23:44.244592       8 log.go:172] (0xc0009da8f0) (0xc00240d540) Stream added, broadcasting: 5
I0212 14:23:44.245996       8 log.go:172] (0xc0009da8f0) Reply frame received for 5
I0212 14:23:44.375383       8 log.go:172] (0xc0009da8f0) Data frame received for 3
I0212 14:23:44.375463       8 log.go:172] (0xc0011d4fa0) (3) Data frame handling
I0212 14:23:44.375512       8 log.go:172] (0xc0011d4fa0) (3) Data frame sent
I0212 14:23:44.616155       8 log.go:172] (0xc0009da8f0) (0xc0011d4fa0) Stream removed, broadcasting: 3
I0212 14:23:44.616308       8 log.go:172] (0xc0009da8f0) Data frame received for 1
I0212 14:23:44.616402       8 log.go:172] (0xc0009da8f0) (0xc00240d540) Stream removed, broadcasting: 5
I0212 14:23:44.616430       8 log.go:172] (0xc00221de00) (1) Data frame handling
I0212 14:23:44.616448       8 log.go:172] (0xc00221de00) (1) Data frame sent
I0212 14:23:44.616457       8 log.go:172] (0xc0009da8f0) (0xc00221de00) Stream removed, broadcasting: 1
I0212 14:23:44.616469       8 log.go:172] (0xc0009da8f0) Go away received
I0212 14:23:44.617190       8 log.go:172] (0xc0009da8f0) (0xc00221de00) Stream removed, broadcasting: 1
I0212 14:23:44.617234       8 log.go:172] (0xc0009da8f0) (0xc0011d4fa0) Stream removed, broadcasting: 3
I0212 14:23:44.617260       8 log.go:172] (0xc0009da8f0) (0xc00240d540) Stream removed, broadcasting: 5
Feb 12 14:23:44.617: INFO: Exec stderr: ""
Feb 12 14:23:44.617: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-946 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 14:23:44.617: INFO: >>> kubeConfig: /root/.kube/config
I0212 14:23:44.732280       8 log.go:172] (0xc001b46b00) (0xc0033f3720) Create stream
I0212 14:23:44.732541       8 log.go:172] (0xc001b46b00) (0xc0033f3720) Stream added, broadcasting: 1
I0212 14:23:44.759741       8 log.go:172] (0xc001b46b00) Reply frame received for 1
I0212 14:23:44.759914       8 log.go:172] (0xc001b46b00) (0xc0011d5180) Create stream
I0212 14:23:44.759947       8 log.go:172] (0xc001b46b00) (0xc0011d5180) Stream added, broadcasting: 3
I0212 14:23:44.762465       8 log.go:172] (0xc001b46b00) Reply frame received for 3
I0212 14:23:44.762488       8 log.go:172] (0xc001b46b00) (0xc00240d5e0) Create stream
I0212 14:23:44.762496       8 log.go:172] (0xc001b46b00) (0xc00240d5e0) Stream added, broadcasting: 5
I0212 14:23:44.764537       8 log.go:172] (0xc001b46b00) Reply frame received for 5
I0212 14:23:44.872917       8 log.go:172] (0xc001b46b00) Data frame received for 3
I0212 14:23:44.873292       8 log.go:172] (0xc0011d5180) (3) Data frame handling
I0212 14:23:44.873314       8 log.go:172] (0xc0011d5180) (3) Data frame sent
I0212 14:23:44.990014       8 log.go:172] (0xc001b46b00) (0xc0011d5180) Stream removed, broadcasting: 3
I0212 14:23:44.990129       8 log.go:172] (0xc001b46b00) Data frame received for 1
I0212 14:23:44.990162       8 log.go:172] (0xc0033f3720) (1) Data frame handling
I0212 14:23:44.990174       8 log.go:172] (0xc0033f3720) (1) Data frame sent
I0212 14:23:44.990179       8 log.go:172] (0xc001b46b00) (0xc0033f3720) Stream removed, broadcasting: 1
I0212 14:23:44.990217       8 log.go:172] (0xc001b46b00) (0xc00240d5e0) Stream removed, broadcasting: 5
I0212 14:23:44.990240       8 log.go:172] (0xc001b46b00) Go away received
I0212 14:23:44.990397       8 log.go:172] (0xc001b46b00) (0xc0033f3720) Stream removed, broadcasting: 1
I0212 14:23:44.990414       8 log.go:172] (0xc001b46b00) (0xc0011d5180) Stream removed, broadcasting: 3
I0212 14:23:44.990423       8 log.go:172] (0xc001b46b00) (0xc00240d5e0) Stream removed, broadcasting: 5
Feb 12 14:23:44.990: INFO: Exec stderr: ""
Feb 12 14:23:44.990: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-946 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 14:23:44.990: INFO: >>> kubeConfig: /root/.kube/config
I0212 14:23:45.037665       8 log.go:172] (0xc001dcfce0) (0xc0011d57c0) Create stream
I0212 14:23:45.037746       8 log.go:172] (0xc001dcfce0) (0xc0011d57c0) Stream added, broadcasting: 1
I0212 14:23:45.049726       8 log.go:172] (0xc001dcfce0) Reply frame received for 1
I0212 14:23:45.049904       8 log.go:172] (0xc001dcfce0) (0xc00221dea0) Create stream
I0212 14:23:45.049935       8 log.go:172] (0xc001dcfce0) (0xc00221dea0) Stream added, broadcasting: 3
I0212 14:23:45.056462       8 log.go:172] (0xc001dcfce0) Reply frame received for 3
I0212 14:23:45.056506       8 log.go:172] (0xc001dcfce0) (0xc0033f37c0) Create stream
I0212 14:23:45.056525       8 log.go:172] (0xc001dcfce0) (0xc0033f37c0) Stream added, broadcasting: 5
I0212 14:23:45.066731       8 log.go:172] (0xc001dcfce0) Reply frame received for 5
I0212 14:23:45.155989       8 log.go:172] (0xc001dcfce0) Data frame received for 3
I0212 14:23:45.156049       8 log.go:172] (0xc00221dea0) (3) Data frame handling
I0212 14:23:45.156068       8 log.go:172] (0xc00221dea0) (3) Data frame sent
I0212 14:23:45.261654       8 log.go:172] (0xc001dcfce0) (0xc00221dea0) Stream removed, broadcasting: 3
I0212 14:23:45.261727       8 log.go:172] (0xc001dcfce0) Data frame received for 1
I0212 14:23:45.261744       8 log.go:172] (0xc0011d57c0) (1) Data frame handling
I0212 14:23:45.261756       8 log.go:172] (0xc0011d57c0) (1) Data frame sent
I0212 14:23:45.261769       8 log.go:172] (0xc001dcfce0) (0xc0011d57c0) Stream removed, broadcasting: 1
I0212 14:23:45.261785       8 log.go:172] (0xc001dcfce0) (0xc0033f37c0) Stream removed, broadcasting: 5
I0212 14:23:45.261815       8 log.go:172] (0xc001dcfce0) Go away received
I0212 14:23:45.261898       8 log.go:172] (0xc001dcfce0) (0xc0011d57c0) Stream removed, broadcasting: 1
I0212 14:23:45.261908       8 log.go:172] (0xc001dcfce0) (0xc00221dea0) Stream removed, broadcasting: 3
I0212 14:23:45.261911       8 log.go:172] (0xc001dcfce0) (0xc0033f37c0) Stream removed, broadcasting: 5
Feb 12 14:23:45.261: INFO: Exec stderr: ""
Feb 12 14:23:45.261: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-946 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 14:23:45.262: INFO: >>> kubeConfig: /root/.kube/config
I0212 14:23:45.314731       8 log.go:172] (0xc002eeab00) (0xc0025f4000) Create stream
I0212 14:23:45.314799       8 log.go:172] (0xc002eeab00) (0xc0025f4000) Stream added, broadcasting: 1
I0212 14:23:45.321643       8 log.go:172] (0xc002eeab00) Reply frame received for 1
I0212 14:23:45.321668       8 log.go:172] (0xc002eeab00) (0xc0033f3860) Create stream
I0212 14:23:45.321674       8 log.go:172] (0xc002eeab00) (0xc0033f3860) Stream added, broadcasting: 3
I0212 14:23:45.322910       8 log.go:172] (0xc002eeab00) Reply frame received for 3
I0212 14:23:45.322927       8 log.go:172] (0xc002eeab00) (0xc00240d720) Create stream
I0212 14:23:45.322933       8 log.go:172] (0xc002eeab00) (0xc00240d720) Stream added, broadcasting: 5
I0212 14:23:45.324045       8 log.go:172] (0xc002eeab00) Reply frame received for 5
I0212 14:23:45.419673       8 log.go:172] (0xc002eeab00) Data frame received for 3
I0212 14:23:45.419698       8 log.go:172] (0xc0033f3860) (3) Data frame handling
I0212 14:23:45.419709       8 log.go:172] (0xc0033f3860) (3) Data frame sent
I0212 14:23:45.511104       8 log.go:172] (0xc002eeab00) (0xc0033f3860) Stream removed, broadcasting: 3
I0212 14:23:45.511200       8 log.go:172] (0xc002eeab00) Data frame received for 1
I0212 14:23:45.511225       8 log.go:172] (0xc0025f4000) (1) Data frame handling
I0212 14:23:45.511247       8 log.go:172] (0xc0025f4000) (1) Data frame sent
I0212 14:23:45.511259       8 log.go:172] (0xc002eeab00) (0xc00240d720) Stream removed, broadcasting: 5
I0212 14:23:45.511296       8 log.go:172] (0xc002eeab00) (0xc0025f4000) Stream removed, broadcasting: 1
I0212 14:23:45.511328       8 log.go:172] (0xc002eeab00) Go away received
I0212 14:23:45.511451       8 log.go:172] (0xc002eeab00) (0xc0025f4000) Stream removed, broadcasting: 1
I0212 14:23:45.511465       8 log.go:172] (0xc002eeab00) (0xc0033f3860) Stream removed, broadcasting: 3
I0212 14:23:45.511473       8 log.go:172] (0xc002eeab00) (0xc00240d720) Stream removed, broadcasting: 5
Feb 12 14:23:45.511: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb 12 14:23:45.511: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-946 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 14:23:45.511: INFO: >>> kubeConfig: /root/.kube/config
I0212 14:23:45.580519       8 log.go:172] (0xc002eeb4a0) (0xc0025f4500) Create stream
I0212 14:23:45.580671       8 log.go:172] (0xc002eeb4a0) (0xc0025f4500) Stream added, broadcasting: 1
I0212 14:23:45.586715       8 log.go:172] (0xc002eeb4a0) Reply frame received for 1
I0212 14:23:45.586775       8 log.go:172] (0xc002eeb4a0) (0xc0025f45a0) Create stream
I0212 14:23:45.586790       8 log.go:172] (0xc002eeb4a0) (0xc0025f45a0) Stream added, broadcasting: 3
I0212 14:23:45.588654       8 log.go:172] (0xc002eeb4a0) Reply frame received for 3
I0212 14:23:45.588700       8 log.go:172] (0xc002eeb4a0) (0xc0025f46e0) Create stream
I0212 14:23:45.588718       8 log.go:172] (0xc002eeb4a0) (0xc0025f46e0) Stream added, broadcasting: 5
I0212 14:23:45.591032       8 log.go:172] (0xc002eeb4a0) Reply frame received for 5
I0212 14:23:45.684804       8 log.go:172] (0xc002eeb4a0) Data frame received for 3
I0212 14:23:45.684869       8 log.go:172] (0xc0025f45a0) (3) Data frame handling
I0212 14:23:45.684886       8 log.go:172] (0xc0025f45a0) (3) Data frame sent
I0212 14:23:45.803547       8 log.go:172] (0xc002eeb4a0) (0xc0025f45a0) Stream removed, broadcasting: 3
I0212 14:23:45.803679       8 log.go:172] (0xc002eeb4a0) Data frame received for 1
I0212 14:23:45.803692       8 log.go:172] (0xc0025f4500) (1) Data frame handling
I0212 14:23:45.803706       8 log.go:172] (0xc0025f4500) (1) Data frame sent
I0212 14:23:45.803716       8 log.go:172] (0xc002eeb4a0) (0xc0025f4500) Stream removed, broadcasting: 1
I0212 14:23:45.803819       8 log.go:172] (0xc002eeb4a0) (0xc0025f46e0) Stream removed, broadcasting: 5
I0212 14:23:45.803843       8 log.go:172] (0xc002eeb4a0) (0xc0025f4500) Stream removed, broadcasting: 1
I0212 14:23:45.803853       8 log.go:172] (0xc002eeb4a0) (0xc0025f45a0) Stream removed, broadcasting: 3
I0212 14:23:45.803861       8 log.go:172] (0xc002eeb4a0) (0xc0025f46e0) Stream removed, broadcasting: 5
I0212 14:23:45.804051       8 log.go:172] (0xc002eeb4a0) Go away received
Feb 12 14:23:45.804: INFO: Exec stderr: ""
Feb 12 14:23:45.804: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-946 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 14:23:45.804: INFO: >>> kubeConfig: /root/.kube/config
I0212 14:23:45.875551       8 log.go:172] (0xc0002738c0) (0xc00240d7c0) Create stream
I0212 14:23:45.875735       8 log.go:172] (0xc0002738c0) (0xc00240d7c0) Stream added, broadcasting: 1
I0212 14:23:45.883431       8 log.go:172] (0xc0002738c0) Reply frame received for 1
I0212 14:23:45.883525       8 log.go:172] (0xc0002738c0) (0xc0033f3a40) Create stream
I0212 14:23:45.883534       8 log.go:172] (0xc0002738c0) (0xc0033f3a40) Stream added, broadcasting: 3
I0212 14:23:45.885662       8 log.go:172] (0xc0002738c0) Reply frame received for 3
I0212 14:23:45.885704       8 log.go:172] (0xc0002738c0) (0xc0025f48c0) Create stream
I0212 14:23:45.885714       8 log.go:172] (0xc0002738c0) (0xc0025f48c0) Stream added, broadcasting: 5
I0212 14:23:45.886945       8 log.go:172] (0xc0002738c0) Reply frame received for 5
I0212 14:23:45.971496       8 log.go:172] (0xc0002738c0) Data frame received for 3
I0212 14:23:45.971617       8 log.go:172] (0xc0033f3a40) (3) Data frame handling
I0212 14:23:45.971638       8 log.go:172] (0xc0033f3a40) (3) Data frame sent
I0212 14:23:46.075756       8 log.go:172] (0xc0002738c0) (0xc0033f3a40) Stream removed, broadcasting: 3
I0212 14:23:46.076317       8 log.go:172] (0xc0002738c0) (0xc0025f48c0) Stream removed, broadcasting: 5
I0212 14:23:46.076569       8 log.go:172] (0xc0002738c0) Data frame received for 1
I0212 14:23:46.076912       8 log.go:172] (0xc00240d7c0) (1) Data frame handling
I0212 14:23:46.076961       8 log.go:172] (0xc00240d7c0) (1) Data frame sent
I0212 14:23:46.076985       8 log.go:172] (0xc0002738c0) (0xc00240d7c0) Stream removed, broadcasting: 1
I0212 14:23:46.077030       8 log.go:172] (0xc0002738c0) Go away received
I0212 14:23:46.077227       8 log.go:172] (0xc0002738c0) (0xc00240d7c0) Stream removed, broadcasting: 1
I0212 14:23:46.077267       8 log.go:172] (0xc0002738c0) (0xc0033f3a40) Stream removed, broadcasting: 3
I0212 14:23:46.077304       8 log.go:172] (0xc0002738c0) (0xc0025f48c0) Stream removed, broadcasting: 5
Feb 12 14:23:46.077: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb 12 14:23:46.077: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-946 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 14:23:46.077: INFO: >>> kubeConfig: /root/.kube/config
I0212 14:23:46.137684       8 log.go:172] (0xc0009db810) (0xc0016181e0) Create stream
I0212 14:23:46.137852       8 log.go:172] (0xc0009db810) (0xc0016181e0) Stream added, broadcasting: 1
I0212 14:23:46.143671       8 log.go:172] (0xc0009db810) Reply frame received for 1
I0212 14:23:46.143704       8 log.go:172] (0xc0009db810) (0xc001e2ef00) Create stream
I0212 14:23:46.143713       8 log.go:172] (0xc0009db810) (0xc001e2ef00) Stream added, broadcasting: 3
I0212 14:23:46.146796       8 log.go:172] (0xc0009db810) Reply frame received for 3
I0212 14:23:46.146838       8 log.go:172] (0xc0009db810) (0xc00240d900) Create stream
I0212 14:23:46.146848       8 log.go:172] (0xc0009db810) (0xc00240d900) Stream added, broadcasting: 5
I0212 14:23:46.149177       8 log.go:172] (0xc0009db810) Reply frame received for 5
I0212 14:23:46.234853       8 log.go:172] (0xc0009db810) Data frame received for 3
I0212 14:23:46.234942       8 log.go:172] (0xc001e2ef00) (3) Data frame handling
I0212 14:23:46.234951       8 log.go:172] (0xc001e2ef00) (3) Data frame sent
I0212 14:23:46.356511       8 log.go:172] (0xc0009db810) (0xc001e2ef00) Stream removed, broadcasting: 3
I0212 14:23:46.356727       8 log.go:172] (0xc0009db810) Data frame received for 1
I0212 14:23:46.356762       8 log.go:172] (0xc0016181e0) (1) Data frame handling
I0212 14:23:46.356988       8 log.go:172] (0xc0016181e0) (1) Data frame sent
I0212 14:23:46.357188       8 log.go:172] (0xc0009db810) (0xc00240d900) Stream removed, broadcasting: 5
I0212 14:23:46.357263       8 log.go:172] (0xc0009db810) (0xc0016181e0) Stream removed, broadcasting: 1
I0212 14:23:46.357293       8 log.go:172] (0xc0009db810) Go away received
I0212 14:23:46.357397       8 log.go:172] (0xc0009db810) (0xc0016181e0) Stream removed, broadcasting: 1
I0212 14:23:46.357430       8 log.go:172] (0xc0009db810) (0xc001e2ef00) Stream removed, broadcasting: 3
I0212 14:23:46.357442       8 log.go:172] (0xc0009db810) (0xc00240d900) Stream removed, broadcasting: 5
Feb 12 14:23:46.357: INFO: Exec stderr: ""
Feb 12 14:23:46.357: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-946 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 14:23:46.357: INFO: >>> kubeConfig: /root/.kube/config
I0212 14:23:46.410453       8 log.go:172] (0xc000d5ab00) (0xc001e2f2c0) Create stream
I0212 14:23:46.410492       8 log.go:172] (0xc000d5ab00) (0xc001e2f2c0) Stream added, broadcasting: 1
I0212 14:23:46.415862       8 log.go:172] (0xc000d5ab00) Reply frame received for 1
I0212 14:23:46.415886       8 log.go:172] (0xc000d5ab00) (0xc0025f4960) Create stream
I0212 14:23:46.415897       8 log.go:172] (0xc000d5ab00) (0xc0025f4960) Stream added, broadcasting: 3
I0212 14:23:46.417203       8 log.go:172] (0xc000d5ab00) Reply frame received for 3
I0212 14:23:46.417245       8 log.go:172] (0xc000d5ab00) (0xc0025f4a00) Create stream
I0212 14:23:46.417257       8 log.go:172] (0xc000d5ab00) (0xc0025f4a00) Stream added, broadcasting: 5
I0212 14:23:46.418773       8 log.go:172] (0xc000d5ab00) Reply frame received for 5
I0212 14:23:46.642498       8 log.go:172] (0xc000d5ab00) Data frame received for 3
I0212 14:23:46.642751       8 log.go:172] (0xc0025f4960) (3) Data frame handling
I0212 14:23:46.642801       8 log.go:172] (0xc0025f4960) (3) Data frame sent
I0212 14:23:46.780398       8 log.go:172] (0xc000d5ab00) (0xc0025f4960) Stream removed, broadcasting: 3
I0212 14:23:46.780533       8 log.go:172] (0xc000d5ab00) Data frame received for 1
I0212 14:23:46.780560       8 log.go:172] (0xc001e2f2c0) (1) Data frame handling
I0212 14:23:46.780578       8 log.go:172] (0xc001e2f2c0) (1) Data frame sent
I0212 14:23:46.780622       8 log.go:172] (0xc000d5ab00) (0xc001e2f2c0) Stream removed, broadcasting: 1
I0212 14:23:46.780784       8 log.go:172] (0xc000d5ab00) (0xc0025f4a00) Stream removed, broadcasting: 5
I0212 14:23:46.780825       8 log.go:172] (0xc000d5ab00) (0xc001e2f2c0) Stream removed, broadcasting: 1
I0212 14:23:46.780835       8 log.go:172] (0xc000d5ab00) (0xc0025f4960) Stream removed, broadcasting: 3
I0212 14:23:46.780845       8 log.go:172] (0xc000d5ab00) (0xc0025f4a00) Stream removed, broadcasting: 5
I0212 14:23:46.781124       8 log.go:172] (0xc000d5ab00) Go away received
Feb 12 14:23:46.781: INFO: Exec stderr: ""
Feb 12 14:23:46.781: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-946 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 14:23:46.781: INFO: >>> kubeConfig: /root/.kube/config
I0212 14:23:46.849226       8 log.go:172] (0xc001f6f550) (0xc0025f4f00) Create stream
I0212 14:23:46.849311       8 log.go:172] (0xc001f6f550) (0xc0025f4f00) Stream added, broadcasting: 1
I0212 14:23:46.861831       8 log.go:172] (0xc001f6f550) Reply frame received for 1
I0212 14:23:46.861941       8 log.go:172] (0xc001f6f550) (0xc00240d9a0) Create stream
I0212 14:23:46.861951       8 log.go:172] (0xc001f6f550) (0xc00240d9a0) Stream added, broadcasting: 3
I0212 14:23:46.865585       8 log.go:172] (0xc001f6f550) Reply frame received for 3
I0212 14:23:46.865607       8 log.go:172] (0xc001f6f550) (0xc00240da40) Create stream
I0212 14:23:46.865617       8 log.go:172] (0xc001f6f550) (0xc00240da40) Stream added, broadcasting: 5
I0212 14:23:46.867501       8 log.go:172] (0xc001f6f550) Reply frame received for 5
I0212 14:23:46.960553       8 log.go:172] (0xc001f6f550) Data frame received for 3
I0212 14:23:46.960623       8 log.go:172] (0xc00240d9a0) (3) Data frame handling
I0212 14:23:46.960657       8 log.go:172] (0xc00240d9a0) (3) Data frame sent
I0212 14:23:47.109919       8 log.go:172] (0xc001f6f550) (0xc00240d9a0) Stream removed, broadcasting: 3
I0212 14:23:47.110405       8 log.go:172] (0xc001f6f550) Data frame received for 1
I0212 14:23:47.110518       8 log.go:172] (0xc001f6f550) (0xc00240da40) Stream removed, broadcasting: 5
I0212 14:23:47.110616       8 log.go:172] (0xc0025f4f00) (1) Data frame handling
I0212 14:23:47.110630       8 log.go:172] (0xc0025f4f00) (1) Data frame sent
I0212 14:23:47.110641       8 log.go:172] (0xc001f6f550) (0xc0025f4f00) Stream removed, broadcasting: 1
I0212 14:23:47.110652       8 log.go:172] (0xc001f6f550) Go away received
I0212 14:23:47.110876       8 log.go:172] (0xc001f6f550) (0xc0025f4f00) Stream removed, broadcasting: 1
I0212 14:23:47.110893       8 log.go:172] (0xc001f6f550) (0xc00240d9a0) Stream removed, broadcasting: 3
I0212 14:23:47.110904       8 log.go:172] (0xc001f6f550) (0xc00240da40) Stream removed, broadcasting: 5
Feb 12 14:23:47.110: INFO: Exec stderr: ""
Feb 12 14:23:47.110: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-946 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 14:23:47.111: INFO: >>> kubeConfig: /root/.kube/config
I0212 14:23:47.193688       8 log.go:172] (0xc002eee0b0) (0xc0025f5540) Create stream
I0212 14:23:47.193827       8 log.go:172] (0xc002eee0b0) (0xc0025f5540) Stream added, broadcasting: 1
I0212 14:23:47.203667       8 log.go:172] (0xc002eee0b0) Reply frame received for 1
I0212 14:23:47.203710       8 log.go:172] (0xc002eee0b0) (0xc0033f3ae0) Create stream
I0212 14:23:47.203735       8 log.go:172] (0xc002eee0b0) (0xc0033f3ae0) Stream added, broadcasting: 3
I0212 14:23:47.204968       8 log.go:172] (0xc002eee0b0) Reply frame received for 3
I0212 14:23:47.204993       8 log.go:172] (0xc002eee0b0) (0xc001e2f360) Create stream
I0212 14:23:47.205000       8 log.go:172] (0xc002eee0b0) (0xc001e2f360) Stream added, broadcasting: 5
I0212 14:23:47.206121       8 log.go:172] (0xc002eee0b0) Reply frame received for 5
I0212 14:23:47.300208       8 log.go:172] (0xc002eee0b0) Data frame received for 3
I0212 14:23:47.300304       8 log.go:172] (0xc0033f3ae0) (3) Data frame handling
I0212 14:23:47.300326       8 log.go:172] (0xc0033f3ae0) (3) Data frame sent
I0212 14:23:47.433122       8 log.go:172] (0xc002eee0b0) Data frame received for 1
I0212 14:23:47.433211       8 log.go:172] (0xc0025f5540) (1) Data frame handling
I0212 14:23:47.433242       8 log.go:172] (0xc0025f5540) (1) Data frame sent
I0212 14:23:47.433476       8 log.go:172] (0xc002eee0b0) (0xc0025f5540) Stream removed, broadcasting: 1
I0212 14:23:47.433534       8 log.go:172] (0xc002eee0b0) (0xc0033f3ae0) Stream removed, broadcasting: 3
I0212 14:23:47.434083       8 log.go:172] (0xc002eee0b0) (0xc001e2f360) Stream removed, broadcasting: 5
I0212 14:23:47.434214       8 log.go:172] (0xc002eee0b0) (0xc0025f5540) Stream removed, broadcasting: 1
I0212 14:23:47.434278       8 log.go:172] (0xc002eee0b0) (0xc0033f3ae0) Stream removed, broadcasting: 3
I0212 14:23:47.434321       8 log.go:172] (0xc002eee0b0) (0xc001e2f360) Stream removed, broadcasting: 5
Feb 12 14:23:47.434: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:23:47.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0212 14:23:47.435043       8 log.go:172] (0xc002eee0b0) Go away received
STEP: Destroying namespace "e2e-kubelet-etc-hosts-946" for this suite.
Feb 12 14:24:31.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:24:31.607: INFO: namespace e2e-kubelet-etc-hosts-946 deletion completed in 44.163813985s

• [SLOW TEST:71.621 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:24:31.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 12 14:24:31.730: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb 12 14:24:31.746: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb 12 14:24:36.765: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 12 14:24:40.781: INFO: Creating deployment "test-rolling-update-deployment"
Feb 12 14:24:40.788: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb 12 14:24:40.801: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb 12 14:24:42.812: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Feb 12 14:24:42.816: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717114280, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717114280, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717114280, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717114280, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 14:24:44.828: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717114280, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717114280, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717114280, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717114280, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 14:24:46.833: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717114280, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717114280, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717114280, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717114280, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 14:24:48.823: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 12 14:24:48.833: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-2395,SelfLink:/apis/apps/v1/namespaces/deployment-2395/deployments/test-rolling-update-deployment,UID:cb1992ce-72f7-447e-8d50-61110a2b7dc6,ResourceVersion:24082474,Generation:1,CreationTimestamp:2020-02-12 14:24:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-12 14:24:40 +0000 UTC 2020-02-12 14:24:40 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-12 14:24:48 +0000 UTC 2020-02-12 14:24:40 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb 12 14:24:48.836: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-2395,SelfLink:/apis/apps/v1/namespaces/deployment-2395/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:4600ac5e-c16a-44b5-b8b8-a4724c653e9b,ResourceVersion:24082463,Generation:1,CreationTimestamp:2020-02-12 14:24:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment cb1992ce-72f7-447e-8d50-61110a2b7dc6 0xc001a30107 0xc001a30108}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 12 14:24:48.836: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb 12 14:24:48.836: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-2395,SelfLink:/apis/apps/v1/namespaces/deployment-2395/replicasets/test-rolling-update-controller,UID:d170a3e1-80f7-467b-aecd-23886eb37043,ResourceVersion:24082472,Generation:2,CreationTimestamp:2020-02-12 14:24:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment cb1992ce-72f7-447e-8d50-61110a2b7dc6 0xc002179fc7 0xc002179fc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 12 14:24:48.840: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-285v7" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-285v7,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-2395,SelfLink:/api/v1/namespaces/deployment-2395/pods/test-rolling-update-deployment-79f6b9d75c-285v7,UID:f192c5de-0d9f-4edf-b776-b4ccd2cf39f7,ResourceVersion:24082462,Generation:0,CreationTimestamp:2020-02-12 14:24:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 4600ac5e-c16a-44b5-b8b8-a4724c653e9b 0xc001a30f27 0xc001a30f28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jr85z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jr85z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-jr85z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001a30fa0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001a30fc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:24:40 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:24:48 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:24:48 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:24:40 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-12 14:24:40 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-12 14:24:48 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://f362de4bb312573681781be64c886b4a098ee83be7252922fba39cc492b1ee2f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:24:48.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2395" for this suite.
Feb 12 14:24:54.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:24:55.028: INFO: namespace deployment-2395 deletion completed in 6.18415589s

• [SLOW TEST:23.421 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:24:55.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-9791
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb 12 14:24:55.310: INFO: Found 0 stateful pods, waiting for 3
Feb 12 14:25:05.329: INFO: Found 1 stateful pods, waiting for 3
Feb 12 14:25:15.322: INFO: Found 2 stateful pods, waiting for 3
Feb 12 14:25:25.316: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 14:25:25.316: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 14:25:25.316: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 12 14:25:35.320: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 14:25:35.320: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 14:25:35.320: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 14:25:35.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9791 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 12 14:25:35.720: INFO: stderr: "I0212 14:25:35.514539    3244 log.go:172] (0xc0007f0420) (0xc0007006e0) Create stream\nI0212 14:25:35.514802    3244 log.go:172] (0xc0007f0420) (0xc0007006e0) Stream added, broadcasting: 1\nI0212 14:25:35.517572    3244 log.go:172] (0xc0007f0420) Reply frame received for 1\nI0212 14:25:35.517605    3244 log.go:172] (0xc0007f0420) (0xc0005d6140) Create stream\nI0212 14:25:35.517613    3244 log.go:172] (0xc0007f0420) (0xc0005d6140) Stream added, broadcasting: 3\nI0212 14:25:35.518746    3244 log.go:172] (0xc0007f0420) Reply frame received for 3\nI0212 14:25:35.518764    3244 log.go:172] (0xc0007f0420) (0xc000700780) Create stream\nI0212 14:25:35.518769    3244 log.go:172] (0xc0007f0420) (0xc000700780) Stream added, broadcasting: 5\nI0212 14:25:35.519474    3244 log.go:172] (0xc0007f0420) Reply frame received for 5\nI0212 14:25:35.600830    3244 log.go:172] (0xc0007f0420) Data frame received for 5\nI0212 14:25:35.600964    3244 log.go:172] (0xc000700780) (5) Data frame handling\nI0212 14:25:35.600992    3244 log.go:172] (0xc000700780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0212 14:25:35.628527    3244 log.go:172] (0xc0007f0420) Data frame received for 3\nI0212 14:25:35.628687    3244 log.go:172] (0xc0005d6140) (3) Data frame handling\nI0212 14:25:35.628716    3244 log.go:172] (0xc0005d6140) (3) Data frame sent\nI0212 14:25:35.711773    3244 log.go:172] (0xc0007f0420) Data frame received for 1\nI0212 14:25:35.711841    3244 log.go:172] (0xc0007006e0) (1) Data frame handling\nI0212 14:25:35.711858    3244 log.go:172] (0xc0007006e0) (1) Data frame sent\nI0212 14:25:35.712196    3244 log.go:172] (0xc0007f0420) (0xc0007006e0) Stream removed, broadcasting: 1\nI0212 14:25:35.712902    3244 log.go:172] (0xc0007f0420) (0xc0005d6140) Stream removed, broadcasting: 3\nI0212 14:25:35.713021    3244 log.go:172] (0xc0007f0420) (0xc000700780) Stream removed, broadcasting: 5\nI0212 14:25:35.713071    3244 log.go:172] 
(0xc0007f0420) Go away received\nI0212 14:25:35.713409    3244 log.go:172] (0xc0007f0420) (0xc0007006e0) Stream removed, broadcasting: 1\nI0212 14:25:35.713535    3244 log.go:172] (0xc0007f0420) (0xc0005d6140) Stream removed, broadcasting: 3\nI0212 14:25:35.713543    3244 log.go:172] (0xc0007f0420) (0xc000700780) Stream removed, broadcasting: 5\n"
Feb 12 14:25:35.720: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 12 14:25:35.720: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 12 14:25:45.771: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb 12 14:25:55.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9791 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:25:56.341: INFO: stderr: "I0212 14:25:56.154536    3260 log.go:172] (0xc000b56420) (0xc0004206e0) Create stream\nI0212 14:25:56.154964    3260 log.go:172] (0xc000b56420) (0xc0004206e0) Stream added, broadcasting: 1\nI0212 14:25:56.171779    3260 log.go:172] (0xc000b56420) Reply frame received for 1\nI0212 14:25:56.171837    3260 log.go:172] (0xc000b56420) (0xc000420000) Create stream\nI0212 14:25:56.171855    3260 log.go:172] (0xc000b56420) (0xc000420000) Stream added, broadcasting: 3\nI0212 14:25:56.172849    3260 log.go:172] (0xc000b56420) Reply frame received for 3\nI0212 14:25:56.172964    3260 log.go:172] (0xc000b56420) (0xc0005f4280) Create stream\nI0212 14:25:56.172985    3260 log.go:172] (0xc000b56420) (0xc0005f4280) Stream added, broadcasting: 5\nI0212 14:25:56.174468    3260 log.go:172] (0xc000b56420) Reply frame received for 5\nI0212 14:25:56.254234    3260 log.go:172] (0xc000b56420) Data frame received for 3\nI0212 14:25:56.254500    3260 log.go:172] (0xc000420000) (3) Data frame handling\nI0212 14:25:56.254591    3260 log.go:172] (0xc000420000) (3) Data frame sent\nI0212 14:25:56.254899    3260 log.go:172] (0xc000b56420) Data frame received for 5\nI0212 14:25:56.254937    3260 log.go:172] (0xc0005f4280) (5) Data frame handling\nI0212 14:25:56.254988    3260 log.go:172] (0xc0005f4280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0212 14:25:56.327703    3260 log.go:172] (0xc000b56420) (0xc000420000) Stream removed, broadcasting: 3\nI0212 14:25:56.328009    3260 log.go:172] (0xc000b56420) Data frame received for 1\nI0212 14:25:56.328309    3260 log.go:172] (0xc000b56420) (0xc0005f4280) Stream removed, broadcasting: 5\nI0212 14:25:56.328352    3260 log.go:172] (0xc0004206e0) (1) Data frame handling\nI0212 14:25:56.328371    3260 log.go:172] (0xc0004206e0) (1) Data frame sent\nI0212 14:25:56.328381    3260 log.go:172] (0xc000b56420) (0xc0004206e0) Stream removed, broadcasting: 1\nI0212 14:25:56.328400    3260 log.go:172] 
(0xc000b56420) Go away received\nI0212 14:25:56.330317    3260 log.go:172] (0xc000b56420) (0xc0004206e0) Stream removed, broadcasting: 1\nI0212 14:25:56.330336    3260 log.go:172] (0xc000b56420) (0xc000420000) Stream removed, broadcasting: 3\nI0212 14:25:56.330347    3260 log.go:172] (0xc000b56420) (0xc0005f4280) Stream removed, broadcasting: 5\n"
Feb 12 14:25:56.342: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 12 14:25:56.342: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 12 14:26:06.405: INFO: Waiting for StatefulSet statefulset-9791/ss2 to complete update
Feb 12 14:26:06.405: INFO: Waiting for Pod statefulset-9791/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 12 14:26:06.405: INFO: Waiting for Pod statefulset-9791/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 12 14:26:16.422: INFO: Waiting for StatefulSet statefulset-9791/ss2 to complete update
Feb 12 14:26:16.422: INFO: Waiting for Pod statefulset-9791/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 12 14:26:16.422: INFO: Waiting for Pod statefulset-9791/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 12 14:26:26.423: INFO: Waiting for StatefulSet statefulset-9791/ss2 to complete update
Feb 12 14:26:26.423: INFO: Waiting for Pod statefulset-9791/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 12 14:26:36.426: INFO: Waiting for StatefulSet statefulset-9791/ss2 to complete update
STEP: Rolling back to a previous revision
Feb 12 14:26:46.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9791 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 12 14:26:47.102: INFO: stderr: "I0212 14:26:46.794213    3282 log.go:172] (0xc00013ef20) (0xc0005b6960) Create stream\nI0212 14:26:46.794571    3282 log.go:172] (0xc00013ef20) (0xc0005b6960) Stream added, broadcasting: 1\nI0212 14:26:46.799848    3282 log.go:172] (0xc00013ef20) Reply frame received for 1\nI0212 14:26:46.799932    3282 log.go:172] (0xc00013ef20) (0xc000396140) Create stream\nI0212 14:26:46.799960    3282 log.go:172] (0xc00013ef20) (0xc000396140) Stream added, broadcasting: 3\nI0212 14:26:46.801335    3282 log.go:172] (0xc00013ef20) Reply frame received for 3\nI0212 14:26:46.801391    3282 log.go:172] (0xc00013ef20) (0xc000710000) Create stream\nI0212 14:26:46.801431    3282 log.go:172] (0xc00013ef20) (0xc000710000) Stream added, broadcasting: 5\nI0212 14:26:46.802673    3282 log.go:172] (0xc00013ef20) Reply frame received for 5\nI0212 14:26:46.944727    3282 log.go:172] (0xc00013ef20) Data frame received for 5\nI0212 14:26:46.944829    3282 log.go:172] (0xc000710000) (5) Data frame handling\nI0212 14:26:46.944846    3282 log.go:172] (0xc000710000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0212 14:26:46.976124    3282 log.go:172] (0xc00013ef20) Data frame received for 3\nI0212 14:26:46.976286    3282 log.go:172] (0xc000396140) (3) Data frame handling\nI0212 14:26:46.976320    3282 log.go:172] (0xc000396140) (3) Data frame sent\nI0212 14:26:47.092562    3282 log.go:172] (0xc00013ef20) Data frame received for 1\nI0212 14:26:47.092690    3282 log.go:172] (0xc00013ef20) (0xc000710000) Stream removed, broadcasting: 5\nI0212 14:26:47.092801    3282 log.go:172] (0xc00013ef20) (0xc000396140) Stream removed, broadcasting: 3\nI0212 14:26:47.092838    3282 log.go:172] (0xc0005b6960) (1) Data frame handling\nI0212 14:26:47.092854    3282 log.go:172] (0xc0005b6960) (1) Data frame sent\nI0212 14:26:47.092860    3282 log.go:172] (0xc00013ef20) (0xc0005b6960) Stream removed, broadcasting: 1\nI0212 14:26:47.092869    3282 log.go:172] 
(0xc00013ef20) Go away received\nI0212 14:26:47.093521    3282 log.go:172] (0xc00013ef20) (0xc0005b6960) Stream removed, broadcasting: 1\nI0212 14:26:47.093532    3282 log.go:172] (0xc00013ef20) (0xc000396140) Stream removed, broadcasting: 3\nI0212 14:26:47.093539    3282 log.go:172] (0xc00013ef20) (0xc000710000) Stream removed, broadcasting: 5\n"
Feb 12 14:26:47.102: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 12 14:26:47.102: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 12 14:26:57.138: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb 12 14:27:07.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9791 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 14:27:07.650: INFO: stderr: "I0212 14:27:07.407230    3302 log.go:172] (0xc000a34370) (0xc0003ea6e0) Create stream\nI0212 14:27:07.407476    3302 log.go:172] (0xc000a34370) (0xc0003ea6e0) Stream added, broadcasting: 1\nI0212 14:27:07.430710    3302 log.go:172] (0xc000a34370) Reply frame received for 1\nI0212 14:27:07.430817    3302 log.go:172] (0xc000a34370) (0xc00063c280) Create stream\nI0212 14:27:07.430836    3302 log.go:172] (0xc000a34370) (0xc00063c280) Stream added, broadcasting: 3\nI0212 14:27:07.432313    3302 log.go:172] (0xc000a34370) Reply frame received for 3\nI0212 14:27:07.432345    3302 log.go:172] (0xc000a34370) (0xc0003ea000) Create stream\nI0212 14:27:07.432360    3302 log.go:172] (0xc000a34370) (0xc0003ea000) Stream added, broadcasting: 5\nI0212 14:27:07.439482    3302 log.go:172] (0xc000a34370) Reply frame received for 5\nI0212 14:27:07.508865    3302 log.go:172] (0xc000a34370) Data frame received for 3\nI0212 14:27:07.508970    3302 log.go:172] (0xc00063c280) (3) Data frame handling\nI0212 14:27:07.509016    3302 log.go:172] (0xc00063c280) (3) Data frame sent\nI0212 14:27:07.509233    3302 log.go:172] (0xc000a34370) Data frame received for 5\nI0212 14:27:07.509250    3302 log.go:172] (0xc0003ea000) (5) Data frame handling\nI0212 14:27:07.509263    3302 log.go:172] (0xc0003ea000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0212 14:27:07.625542    3302 log.go:172] (0xc000a34370) (0xc00063c280) Stream removed, broadcasting: 3\nI0212 14:27:07.626144    3302 log.go:172] (0xc000a34370) Data frame received for 1\nI0212 14:27:07.626180    3302 log.go:172] (0xc0003ea6e0) (1) Data frame handling\nI0212 14:27:07.626218    3302 log.go:172] (0xc0003ea6e0) (1) Data frame sent\nI0212 14:27:07.626241    3302 log.go:172] (0xc000a34370) (0xc0003ea6e0) Stream removed, broadcasting: 1\nI0212 14:27:07.626657    3302 log.go:172] (0xc000a34370) (0xc0003ea000) Stream removed, broadcasting: 5\nI0212 14:27:07.627149    3302 log.go:172] 
(0xc000a34370) Go away received\nI0212 14:27:07.628298    3302 log.go:172] (0xc000a34370) (0xc0003ea6e0) Stream removed, broadcasting: 1\nI0212 14:27:07.628323    3302 log.go:172] (0xc000a34370) (0xc00063c280) Stream removed, broadcasting: 3\nI0212 14:27:07.628337    3302 log.go:172] (0xc000a34370) (0xc0003ea000) Stream removed, broadcasting: 5\n"
Feb 12 14:27:07.650: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 12 14:27:07.651: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 12 14:27:07.758: INFO: Waiting for StatefulSet statefulset-9791/ss2 to complete update
Feb 12 14:27:07.758: INFO: Waiting for Pod statefulset-9791/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 12 14:27:07.758: INFO: Waiting for Pod statefulset-9791/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 12 14:27:07.758: INFO: Waiting for Pod statefulset-9791/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 12 14:27:17.846: INFO: Waiting for StatefulSet statefulset-9791/ss2 to complete update
Feb 12 14:27:17.846: INFO: Waiting for Pod statefulset-9791/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 12 14:27:17.846: INFO: Waiting for Pod statefulset-9791/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 12 14:27:27.773: INFO: Waiting for StatefulSet statefulset-9791/ss2 to complete update
Feb 12 14:27:27.773: INFO: Waiting for Pod statefulset-9791/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 12 14:27:27.773: INFO: Waiting for Pod statefulset-9791/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 12 14:27:37.788: INFO: Waiting for StatefulSet statefulset-9791/ss2 to complete update
Feb 12 14:27:37.788: INFO: Waiting for Pod statefulset-9791/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 12 14:27:47.800: INFO: Waiting for StatefulSet statefulset-9791/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 12 14:27:57.792: INFO: Deleting all statefulset in ns statefulset-9791
Feb 12 14:27:57.798: INFO: Scaling statefulset ss2 to 0
Feb 12 14:28:27.900: INFO: Waiting for statefulset status.replicas updated to 0
Feb 12 14:28:27.905: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:28:27.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9791" for this suite.
Feb 12 14:28:36.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:28:36.120: INFO: namespace statefulset-9791 deletion completed in 8.155433946s

• [SLOW TEST:221.092 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:28:36.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:28:46.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6166" for this suite.
Feb 12 14:29:28.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:29:28.549: INFO: namespace kubelet-test-6166 deletion completed in 42.178185766s

• [SLOW TEST:52.428 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:29:28.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:29:28.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1059" for this suite.
Feb 12 14:29:50.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:29:50.894: INFO: namespace pods-1059 deletion completed in 22.192787687s

• [SLOW TEST:22.345 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:29:50.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Feb 12 14:29:51.068: INFO: Waiting up to 5m0s for pod "var-expansion-44392dbb-7640-41ab-a5e3-7c4661bd650b" in namespace "var-expansion-7424" to be "success or failure"
Feb 12 14:29:51.081: INFO: Pod "var-expansion-44392dbb-7640-41ab-a5e3-7c4661bd650b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.395348ms
Feb 12 14:29:53.089: INFO: Pod "var-expansion-44392dbb-7640-41ab-a5e3-7c4661bd650b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020664911s
Feb 12 14:29:55.097: INFO: Pod "var-expansion-44392dbb-7640-41ab-a5e3-7c4661bd650b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028716462s
Feb 12 14:29:57.106: INFO: Pod "var-expansion-44392dbb-7640-41ab-a5e3-7c4661bd650b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037697521s
Feb 12 14:29:59.117: INFO: Pod "var-expansion-44392dbb-7640-41ab-a5e3-7c4661bd650b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048074044s
Feb 12 14:30:01.123: INFO: Pod "var-expansion-44392dbb-7640-41ab-a5e3-7c4661bd650b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.054132247s
STEP: Saw pod success
Feb 12 14:30:01.123: INFO: Pod "var-expansion-44392dbb-7640-41ab-a5e3-7c4661bd650b" satisfied condition "success or failure"
Feb 12 14:30:01.130: INFO: Trying to get logs from node iruya-node pod var-expansion-44392dbb-7640-41ab-a5e3-7c4661bd650b container dapi-container: 
STEP: delete the pod
Feb 12 14:30:01.276: INFO: Waiting for pod var-expansion-44392dbb-7640-41ab-a5e3-7c4661bd650b to disappear
Feb 12 14:30:01.298: INFO: Pod var-expansion-44392dbb-7640-41ab-a5e3-7c4661bd650b no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:30:01.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7424" for this suite.
Feb 12 14:30:07.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:30:07.529: INFO: namespace var-expansion-7424 deletion completed in 6.221570347s

• [SLOW TEST:16.634 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:30:07.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:30:07.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2343" for this suite.
Feb 12 14:30:13.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:30:13.856: INFO: namespace services-2343 deletion completed in 6.244167226s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.326 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:30:13.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Feb 12 14:30:14.029: INFO: Waiting up to 5m0s for pod "var-expansion-330e1c46-7e9e-40f1-9a0e-e76ebc06006f" in namespace "var-expansion-8929" to be "success or failure"
Feb 12 14:30:14.065: INFO: Pod "var-expansion-330e1c46-7e9e-40f1-9a0e-e76ebc06006f": Phase="Pending", Reason="", readiness=false. Elapsed: 35.55631ms
Feb 12 14:30:16.082: INFO: Pod "var-expansion-330e1c46-7e9e-40f1-9a0e-e76ebc06006f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052316182s
Feb 12 14:30:18.092: INFO: Pod "var-expansion-330e1c46-7e9e-40f1-9a0e-e76ebc06006f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062319284s
Feb 12 14:30:20.099: INFO: Pod "var-expansion-330e1c46-7e9e-40f1-9a0e-e76ebc06006f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06999801s
Feb 12 14:30:22.109: INFO: Pod "var-expansion-330e1c46-7e9e-40f1-9a0e-e76ebc06006f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.079878126s
Feb 12 14:30:24.118: INFO: Pod "var-expansion-330e1c46-7e9e-40f1-9a0e-e76ebc06006f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.088651804s
STEP: Saw pod success
Feb 12 14:30:24.118: INFO: Pod "var-expansion-330e1c46-7e9e-40f1-9a0e-e76ebc06006f" satisfied condition "success or failure"
Feb 12 14:30:24.123: INFO: Trying to get logs from node iruya-node pod var-expansion-330e1c46-7e9e-40f1-9a0e-e76ebc06006f container dapi-container: 
STEP: delete the pod
Feb 12 14:30:24.176: INFO: Waiting for pod var-expansion-330e1c46-7e9e-40f1-9a0e-e76ebc06006f to disappear
Feb 12 14:30:24.181: INFO: Pod var-expansion-330e1c46-7e9e-40f1-9a0e-e76ebc06006f no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:30:24.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8929" for this suite.
Feb 12 14:30:30.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:30:30.408: INFO: namespace var-expansion-8929 deletion completed in 6.107020937s

• [SLOW TEST:16.551 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:30:30.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 12 14:30:30.553: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 22.557328ms)
Feb 12 14:30:30.591: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 38.181455ms)
Feb 12 14:30:30.598: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.867229ms)
Feb 12 14:30:30.605: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.859845ms)
Feb 12 14:30:30.620: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.099372ms)
Feb 12 14:30:30.628: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.442798ms)
Feb 12 14:30:30.634: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.613388ms)
Feb 12 14:30:30.638: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.071222ms)
Feb 12 14:30:30.643: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.958102ms)
Feb 12 14:30:30.649: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.640135ms)
Feb 12 14:30:30.655: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.968754ms)
Feb 12 14:30:30.660: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.958911ms)
Feb 12 14:30:30.665: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.727869ms)
Feb 12 14:30:30.669: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.385385ms)
Feb 12 14:30:30.674: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.238392ms)
Feb 12 14:30:30.678: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.101974ms)
Feb 12 14:30:30.683: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.71502ms)
Feb 12 14:30:30.689: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.763273ms)
Feb 12 14:30:30.695: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.031312ms)
Feb 12 14:30:30.703: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.007659ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:30:30.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-3656" for this suite.
Feb 12 14:30:36.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:30:36.881: INFO: namespace proxy-3656 deletion completed in 6.172608156s

• [SLOW TEST:6.473 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:30:36.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0212 14:31:22.038636       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 12 14:31:22.038: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:31:22.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9739" for this suite.
Feb 12 14:31:32.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:31:33.075: INFO: namespace gc-9739 deletion completed in 11.023618937s

• [SLOW TEST:56.194 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:31:33.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-1fecf1e5-41dc-4dec-aacd-b05438785cc3
Feb 12 14:31:33.542: INFO: Pod name my-hostname-basic-1fecf1e5-41dc-4dec-aacd-b05438785cc3: Found 0 pods out of 1
Feb 12 14:31:38.561: INFO: Pod name my-hostname-basic-1fecf1e5-41dc-4dec-aacd-b05438785cc3: Found 1 pods out of 1
Feb 12 14:31:38.561: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-1fecf1e5-41dc-4dec-aacd-b05438785cc3" are running
Feb 12 14:31:49.407: INFO: Pod "my-hostname-basic-1fecf1e5-41dc-4dec-aacd-b05438785cc3-vgxkd" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-12 14:31:34 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-12 14:31:34 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-1fecf1e5-41dc-4dec-aacd-b05438785cc3]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-12 14:31:34 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-1fecf1e5-41dc-4dec-aacd-b05438785cc3]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-12 14:31:33 +0000 UTC Reason: Message:}])
Feb 12 14:31:49.407: INFO: Trying to dial the pod
Feb 12 14:31:54.449: INFO: Controller my-hostname-basic-1fecf1e5-41dc-4dec-aacd-b05438785cc3: Got expected result from replica 1 [my-hostname-basic-1fecf1e5-41dc-4dec-aacd-b05438785cc3-vgxkd]: "my-hostname-basic-1fecf1e5-41dc-4dec-aacd-b05438785cc3-vgxkd", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:31:54.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6016" for this suite.
Feb 12 14:32:00.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:32:00.688: INFO: namespace replication-controller-6016 deletion completed in 6.229489051s

• [SLOW TEST:27.609 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:32:00.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-67f5ebf3-22fd-41fb-a1e9-3b67e7e057e0
STEP: Creating configMap with name cm-test-opt-upd-57842b1d-25e9-4e08-8b65-8b406a1f6274
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-67f5ebf3-22fd-41fb-a1e9-3b67e7e057e0
STEP: Updating configmap cm-test-opt-upd-57842b1d-25e9-4e08-8b65-8b406a1f6274
STEP: Creating configMap with name cm-test-opt-create-f71df7d2-1203-465a-8d72-48c902dd7653
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:33:45.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-54" for this suite.
Feb 12 14:34:08.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:34:08.418: INFO: namespace configmap-54 deletion completed in 22.924870593s

• [SLOW TEST:127.729 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:34:08.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-1990, will wait for the garbage collector to delete the pods
Feb 12 14:34:18.628: INFO: Deleting Job.batch foo took: 17.1096ms
Feb 12 14:34:18.929: INFO: Terminating Job.batch foo pods took: 300.701961ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:35:06.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1990" for this suite.
Feb 12 14:35:12.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:35:12.803: INFO: namespace job-1990 deletion completed in 6.158237289s

• [SLOW TEST:64.384 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:35:12.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 12 14:35:12.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:35:23.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8382" for this suite.
Feb 12 14:36:07.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:36:07.528: INFO: namespace pods-8382 deletion completed in 44.147468084s

• [SLOW TEST:54.725 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:36:07.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 12 14:36:07.684: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Feb 12 14:36:10.032: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:36:11.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7485" for this suite.
Feb 12 14:36:23.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:36:23.284: INFO: namespace replication-controller-7485 deletion completed in 12.171731039s

• [SLOW TEST:15.755 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:36:23.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4538
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-4538
STEP: Creating statefulset with conflicting port in namespace statefulset-4538
STEP: Waiting until pod test-pod starts running in namespace statefulset-4538
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-4538
Feb 12 14:36:37.440: INFO: Observed stateful pod in namespace: statefulset-4538, name: ss-0, uid: d380a29d-861a-407c-9acd-852ffb383675, status phase: Pending. Waiting for statefulset controller to delete.
Feb 12 14:36:37.879: INFO: Observed stateful pod in namespace: statefulset-4538, name: ss-0, uid: d380a29d-861a-407c-9acd-852ffb383675, status phase: Failed. Waiting for statefulset controller to delete.
Feb 12 14:36:37.937: INFO: Observed stateful pod in namespace: statefulset-4538, name: ss-0, uid: d380a29d-861a-407c-9acd-852ffb383675, status phase: Failed. Waiting for statefulset controller to delete.
Feb 12 14:36:37.962: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4538
STEP: Removing pod with conflicting port in namespace statefulset-4538
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-4538 and enters the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 12 14:36:48.068: INFO: Deleting all statefulset in ns statefulset-4538
Feb 12 14:36:48.073: INFO: Scaling statefulset ss to 0
Feb 12 14:36:58.591: INFO: Waiting for statefulset status.replicas updated to 0
Feb 12 14:36:58.599: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:36:58.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4538" for this suite.
Feb 12 14:37:04.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:37:04.926: INFO: namespace statefulset-4538 deletion completed in 6.295129655s

• [SLOW TEST:41.641 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:37:04.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-fc7370b7-fbe1-46c5-8826-0f0fba69b46d
STEP: Creating a pod to test consume secrets
Feb 12 14:37:05.051: INFO: Waiting up to 5m0s for pod "pod-secrets-d85b2f44-777e-49df-80e4-2b2416e1a6d7" in namespace "secrets-267" to be "success or failure"
Feb 12 14:37:05.077: INFO: Pod "pod-secrets-d85b2f44-777e-49df-80e4-2b2416e1a6d7": Phase="Pending", Reason="", readiness=false. Elapsed: 26.638303ms
Feb 12 14:37:07.132: INFO: Pod "pod-secrets-d85b2f44-777e-49df-80e4-2b2416e1a6d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08134316s
Feb 12 14:37:09.147: INFO: Pod "pod-secrets-d85b2f44-777e-49df-80e4-2b2416e1a6d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09680741s
Feb 12 14:37:11.197: INFO: Pod "pod-secrets-d85b2f44-777e-49df-80e4-2b2416e1a6d7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.146224062s
Feb 12 14:37:13.251: INFO: Pod "pod-secrets-d85b2f44-777e-49df-80e4-2b2416e1a6d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.200678773s
STEP: Saw pod success
Feb 12 14:37:13.252: INFO: Pod "pod-secrets-d85b2f44-777e-49df-80e4-2b2416e1a6d7" satisfied condition "success or failure"
Feb 12 14:37:13.256: INFO: Trying to get logs from node iruya-node pod pod-secrets-d85b2f44-777e-49df-80e4-2b2416e1a6d7 container secret-volume-test: 
STEP: delete the pod
Feb 12 14:37:13.409: INFO: Waiting for pod pod-secrets-d85b2f44-777e-49df-80e4-2b2416e1a6d7 to disappear
Feb 12 14:37:13.414: INFO: Pod pod-secrets-d85b2f44-777e-49df-80e4-2b2416e1a6d7 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:37:13.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-267" for this suite.
Feb 12 14:37:19.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:37:19.598: INFO: namespace secrets-267 deletion completed in 6.170886496s

• [SLOW TEST:14.672 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:37:19.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 12 14:37:19.816: INFO: Create a RollingUpdate DaemonSet
Feb 12 14:37:19.840: INFO: Check that daemon pods launch on every node of the cluster
Feb 12 14:37:19.909: INFO: Number of nodes with available pods: 0
Feb 12 14:37:19.909: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:37:20.927: INFO: Number of nodes with available pods: 0
Feb 12 14:37:20.927: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:37:22.520: INFO: Number of nodes with available pods: 0
Feb 12 14:37:22.521: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:37:22.921: INFO: Number of nodes with available pods: 0
Feb 12 14:37:22.921: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:37:24.359: INFO: Number of nodes with available pods: 0
Feb 12 14:37:24.359: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:37:24.926: INFO: Number of nodes with available pods: 0
Feb 12 14:37:24.926: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:37:25.936: INFO: Number of nodes with available pods: 0
Feb 12 14:37:25.936: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:37:28.784: INFO: Number of nodes with available pods: 0
Feb 12 14:37:28.784: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:37:29.620: INFO: Number of nodes with available pods: 0
Feb 12 14:37:29.620: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:37:29.927: INFO: Number of nodes with available pods: 0
Feb 12 14:37:29.927: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:37:30.938: INFO: Number of nodes with available pods: 2
Feb 12 14:37:30.938: INFO: Number of running nodes: 2, number of available pods: 2
Feb 12 14:37:30.938: INFO: Update the DaemonSet to trigger a rollout
Feb 12 14:37:30.953: INFO: Updating DaemonSet daemon-set
Feb 12 14:37:48.105: INFO: Roll back the DaemonSet before rollout is complete
Feb 12 14:37:48.117: INFO: Updating DaemonSet daemon-set
Feb 12 14:37:48.117: INFO: Make sure DaemonSet rollback is complete
Feb 12 14:37:48.127: INFO: Wrong image for pod: daemon-set-ggs2d. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 12 14:37:48.127: INFO: Pod daemon-set-ggs2d is not available
Feb 12 14:37:49.149: INFO: Wrong image for pod: daemon-set-ggs2d. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 12 14:37:49.149: INFO: Pod daemon-set-ggs2d is not available
Feb 12 14:37:50.147: INFO: Wrong image for pod: daemon-set-ggs2d. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 12 14:37:50.147: INFO: Pod daemon-set-ggs2d is not available
Feb 12 14:37:51.149: INFO: Wrong image for pod: daemon-set-ggs2d. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 12 14:37:51.149: INFO: Pod daemon-set-ggs2d is not available
Feb 12 14:37:52.143: INFO: Wrong image for pod: daemon-set-ggs2d. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 12 14:37:52.143: INFO: Pod daemon-set-ggs2d is not available
Feb 12 14:37:53.145: INFO: Wrong image for pod: daemon-set-ggs2d. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 12 14:37:53.145: INFO: Pod daemon-set-ggs2d is not available
Feb 12 14:37:54.142: INFO: Wrong image for pod: daemon-set-ggs2d. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 12 14:37:54.142: INFO: Pod daemon-set-ggs2d is not available
Feb 12 14:37:55.143: INFO: Wrong image for pod: daemon-set-ggs2d. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 12 14:37:55.143: INFO: Pod daemon-set-ggs2d is not available
Feb 12 14:37:56.147: INFO: Wrong image for pod: daemon-set-ggs2d. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 12 14:37:56.147: INFO: Pod daemon-set-ggs2d is not available
Feb 12 14:37:57.143: INFO: Wrong image for pod: daemon-set-ggs2d. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 12 14:37:57.143: INFO: Pod daemon-set-ggs2d is not available
Feb 12 14:37:58.162: INFO: Pod daemon-set-gd75s is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-947, will wait for the garbage collector to delete the pods
Feb 12 14:37:58.283: INFO: Deleting DaemonSet.extensions daemon-set took: 14.739601ms
Feb 12 14:38:01.283: INFO: Terminating DaemonSet.extensions daemon-set pods took: 3.000459853s
Feb 12 14:38:07.192: INFO: Number of nodes with available pods: 0
Feb 12 14:38:07.192: INFO: Number of running nodes: 0, number of available pods: 0
Feb 12 14:38:07.197: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-947/daemonsets","resourceVersion":"24084719"},"items":null}

Feb 12 14:38:07.200: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-947/pods","resourceVersion":"24084719"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:38:07.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-947" for this suite.
Feb 12 14:38:13.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:38:13.406: INFO: namespace daemonsets-947 deletion completed in 6.176075762s

• [SLOW TEST:53.807 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:38:13.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 12 14:38:13.500: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 12 14:38:13.587: INFO: Waiting for terminating namespaces to be deleted...
Feb 12 14:38:13.599: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Feb 12 14:38:13.628: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Feb 12 14:38:13.629: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 12 14:38:13.629: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb 12 14:38:13.629: INFO: 	Container weave ready: true, restart count 0
Feb 12 14:38:13.629: INFO: 	Container weave-npc ready: true, restart count 0
Feb 12 14:38:13.629: INFO: kube-bench-j7kcs from default started at 2020-02-11 06:42:30 +0000 UTC (1 container status recorded)
Feb 12 14:38:13.629: INFO: 	Container kube-bench ready: false, restart count 0
Feb 12 14:38:13.629: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Feb 12 14:38:13.646: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Feb 12 14:38:13.646: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb 12 14:38:13.646: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Feb 12 14:38:13.646: INFO: 	Container coredns ready: true, restart count 0
Feb 12 14:38:13.646: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Feb 12 14:38:13.646: INFO: 	Container etcd ready: true, restart count 0
Feb 12 14:38:13.646: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb 12 14:38:13.646: INFO: 	Container weave ready: true, restart count 0
Feb 12 14:38:13.646: INFO: 	Container weave-npc ready: true, restart count 0
Feb 12 14:38:13.646: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Feb 12 14:38:13.646: INFO: 	Container coredns ready: true, restart count 0
Feb 12 14:38:13.646: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Feb 12 14:38:13.646: INFO: 	Container kube-controller-manager ready: true, restart count 21
Feb 12 14:38:13.646: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb 12 14:38:13.646: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 12 14:38:13.646: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb 12 14:38:13.646: INFO: 	Container kube-apiserver ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-705fb7d5-4f84-41b6-811f-bfef6c313487 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-705fb7d5-4f84-41b6-811f-bfef6c313487 off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-705fb7d5-4f84-41b6-811f-bfef6c313487
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:38:36.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2379" for this suite.
Feb 12 14:39:06.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:39:06.800: INFO: namespace sched-pred-2379 deletion completed in 30.164261575s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:53.393 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
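The NodeSelector test above applies a random label to a node and relaunches a pod whose `nodeSelector` requires it. A minimal sketch of that relaunched pod, reconstructed from the label key and value ("42") visible in the log (the pod and container names here are illustrative, not taken from the run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-labels            # illustrative name
spec:
  containers:
  - name: with-labels
    image: k8s.gcr.io/pause:3.1
  # must match the random label the test applied to iruya-node,
  # otherwise the scheduler leaves the pod Pending
  nodeSelector:
    kubernetes.io/e2e-705fb7d5-4f84-41b6-811f-bfef6c313487: "42"
```

Once the label is removed from the node in the cleanup step, a pod with this selector would no longer schedule.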
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:39:06.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-5415
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5415 to expose endpoints map[]
Feb 12 14:39:06.947: INFO: Get endpoints failed (4.989654ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb 12 14:39:07.957: INFO: successfully validated that service multi-endpoint-test in namespace services-5415 exposes endpoints map[] (1.014691024s elapsed)
STEP: Creating pod pod1 in namespace services-5415
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5415 to expose endpoints map[pod1:[100]]
Feb 12 14:39:12.215: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.240306721s elapsed, will retry)
Feb 12 14:39:16.356: INFO: successfully validated that service multi-endpoint-test in namespace services-5415 exposes endpoints map[pod1:[100]] (8.381116974s elapsed)
STEP: Creating pod pod2 in namespace services-5415
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5415 to expose endpoints map[pod1:[100] pod2:[101]]
Feb 12 14:39:21.792: INFO: Unexpected endpoints: found map[ffb4ec1f-4c80-493f-9543-c790b3b34abe:[100]], expected map[pod1:[100] pod2:[101]] (5.429653436s elapsed, will retry)
Feb 12 14:39:23.884: INFO: successfully validated that service multi-endpoint-test in namespace services-5415 exposes endpoints map[pod1:[100] pod2:[101]] (7.521524899s elapsed)
STEP: Deleting pod pod1 in namespace services-5415
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5415 to expose endpoints map[pod2:[101]]
Feb 12 14:39:24.933: INFO: successfully validated that service multi-endpoint-test in namespace services-5415 exposes endpoints map[pod2:[101]] (1.03586418s elapsed)
STEP: Deleting pod pod2 in namespace services-5415
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5415 to expose endpoints map[]
Feb 12 14:39:27.734: INFO: successfully validated that service multi-endpoint-test in namespace services-5415 exposes endpoints map[] (2.794467578s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:39:28.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5415" for this suite.
Feb 12 14:39:50.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:39:51.402: INFO: namespace services-5415 deletion completed in 22.758707233s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:44.602 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
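The multiport Services test waits for the endpoints map to show target ports 100 (pod1) and 101 (pod2). A sketch of a Service shaped like `multi-endpoint-test`, assuming a selector label and service-port numbers not shown in the log; only the target ports 100 and 101 are taken from the endpoint maps above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
  namespace: services-5415
spec:
  selector:
    app: multi-endpoint-test   # assumed selector; the log does not show the pod labels
  ports:
  - name: portname1
    port: 80                   # assumed service port
    targetPort: 100            # matches endpoints map[pod1:[100]]
  - name: portname2
    port: 81                   # assumed service port
    targetPort: 101            # matches endpoints map[pod2:[101]]
```

Each backing pod then only needs to expose one of the two target ports for its entry to appear in the endpoints map.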
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:39:51.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Feb 12 14:39:51.471: INFO: Waiting up to 5m0s for pod "client-containers-b6661046-b069-4933-b397-b3af9c1f0805" in namespace "containers-2149" to be "success or failure"
Feb 12 14:39:51.489: INFO: Pod "client-containers-b6661046-b069-4933-b397-b3af9c1f0805": Phase="Pending", Reason="", readiness=false. Elapsed: 18.589512ms
Feb 12 14:39:53.497: INFO: Pod "client-containers-b6661046-b069-4933-b397-b3af9c1f0805": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026357236s
Feb 12 14:39:55.506: INFO: Pod "client-containers-b6661046-b069-4933-b397-b3af9c1f0805": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034960574s
Feb 12 14:39:57.512: INFO: Pod "client-containers-b6661046-b069-4933-b397-b3af9c1f0805": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041648646s
Feb 12 14:39:59.520: INFO: Pod "client-containers-b6661046-b069-4933-b397-b3af9c1f0805": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049653337s
Feb 12 14:40:01.528: INFO: Pod "client-containers-b6661046-b069-4933-b397-b3af9c1f0805": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.057224193s
STEP: Saw pod success
Feb 12 14:40:01.528: INFO: Pod "client-containers-b6661046-b069-4933-b397-b3af9c1f0805" satisfied condition "success or failure"
Feb 12 14:40:01.532: INFO: Trying to get logs from node iruya-node pod client-containers-b6661046-b069-4933-b397-b3af9c1f0805 container test-container: 
STEP: delete the pod
Feb 12 14:40:01.717: INFO: Waiting for pod client-containers-b6661046-b069-4933-b397-b3af9c1f0805 to disappear
Feb 12 14:40:01.728: INFO: Pod client-containers-b6661046-b069-4933-b397-b3af9c1f0805 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:40:01.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2149" for this suite.
Feb 12 14:40:07.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:40:07.928: INFO: namespace containers-2149 deletion completed in 6.189904285s

• [SLOW TEST:16.525 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
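The Docker Containers test verifies that `args` in the pod spec overrides the image's default CMD without replacing its ENTRYPOINT. A minimal sketch of such a pod, with an illustrative name and argument list (the actual test pod's args are not shown in the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # illustrative; the run used a generated UID name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    # args replaces the image's CMD; command (ENTRYPOINT) is left untouched
    args: ["echo", "override", "arguments"]
```

The pod runs to `Succeeded` once the overridden command exits 0, matching the "success or failure" condition polled in the log.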
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:40:07.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 12 14:40:08.052: INFO: Waiting up to 5m0s for pod "pod-39388ecd-3ef2-4e9e-a666-3b583a3abe80" in namespace "emptydir-7051" to be "success or failure"
Feb 12 14:40:08.067: INFO: Pod "pod-39388ecd-3ef2-4e9e-a666-3b583a3abe80": Phase="Pending", Reason="", readiness=false. Elapsed: 14.829357ms
Feb 12 14:40:10.077: INFO: Pod "pod-39388ecd-3ef2-4e9e-a666-3b583a3abe80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024501585s
Feb 12 14:40:12.086: INFO: Pod "pod-39388ecd-3ef2-4e9e-a666-3b583a3abe80": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03357118s
Feb 12 14:40:14.097: INFO: Pod "pod-39388ecd-3ef2-4e9e-a666-3b583a3abe80": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045103869s
Feb 12 14:40:16.111: INFO: Pod "pod-39388ecd-3ef2-4e9e-a666-3b583a3abe80": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058295345s
Feb 12 14:40:18.120: INFO: Pod "pod-39388ecd-3ef2-4e9e-a666-3b583a3abe80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.067842432s
STEP: Saw pod success
Feb 12 14:40:18.120: INFO: Pod "pod-39388ecd-3ef2-4e9e-a666-3b583a3abe80" satisfied condition "success or failure"
Feb 12 14:40:18.128: INFO: Trying to get logs from node iruya-node pod pod-39388ecd-3ef2-4e9e-a666-3b583a3abe80 container test-container: 
STEP: delete the pod
Feb 12 14:40:18.214: INFO: Waiting for pod pod-39388ecd-3ef2-4e9e-a666-3b583a3abe80 to disappear
Feb 12 14:40:18.220: INFO: Pod pod-39388ecd-3ef2-4e9e-a666-3b583a3abe80 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:40:18.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7051" for this suite.
Feb 12 14:40:24.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:40:24.417: INFO: namespace emptydir-7051 deletion completed in 6.18983786s

• [SLOW TEST:16.489 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
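The emptyDir test checks that a file created with mode 0644 on a default-medium (node disk) volume reports that mode back. A hedged sketch of the idea using plain busybox rather than the suite's own test image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    # create a file, force mode 0644, then print the mode for verification
    command: ["sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}               # empty spec selects the default medium (node storage)
```

The container's logs would then contain `644`, which is the property the conformance test asserts.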
SSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:40:24.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3675.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3675.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 12 14:40:36.595: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-3675/dns-test-a55444cd-e675-4047-9045-28e047ef6e6c: the server could not find the requested resource (get pods dns-test-a55444cd-e675-4047-9045-28e047ef6e6c)
Feb 12 14:40:36.616: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-3675/dns-test-a55444cd-e675-4047-9045-28e047ef6e6c: the server could not find the requested resource (get pods dns-test-a55444cd-e675-4047-9045-28e047ef6e6c)
Feb 12 14:40:36.628: INFO: Unable to read wheezy_udp@PodARecord from pod dns-3675/dns-test-a55444cd-e675-4047-9045-28e047ef6e6c: the server could not find the requested resource (get pods dns-test-a55444cd-e675-4047-9045-28e047ef6e6c)
Feb 12 14:40:36.636: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3675/dns-test-a55444cd-e675-4047-9045-28e047ef6e6c: the server could not find the requested resource (get pods dns-test-a55444cd-e675-4047-9045-28e047ef6e6c)
Feb 12 14:40:36.647: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-3675/dns-test-a55444cd-e675-4047-9045-28e047ef6e6c: the server could not find the requested resource (get pods dns-test-a55444cd-e675-4047-9045-28e047ef6e6c)
Feb 12 14:40:36.655: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-3675/dns-test-a55444cd-e675-4047-9045-28e047ef6e6c: the server could not find the requested resource (get pods dns-test-a55444cd-e675-4047-9045-28e047ef6e6c)
Feb 12 14:40:36.660: INFO: Unable to read jessie_udp@PodARecord from pod dns-3675/dns-test-a55444cd-e675-4047-9045-28e047ef6e6c: the server could not find the requested resource (get pods dns-test-a55444cd-e675-4047-9045-28e047ef6e6c)
Feb 12 14:40:36.666: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3675/dns-test-a55444cd-e675-4047-9045-28e047ef6e6c: the server could not find the requested resource (get pods dns-test-a55444cd-e675-4047-9045-28e047ef6e6c)
Feb 12 14:40:36.666: INFO: Lookups using dns-3675/dns-test-a55444cd-e675-4047-9045-28e047ef6e6c failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 12 14:40:41.761: INFO: DNS probes using dns-3675/dns-test-a55444cd-e675-4047-9045-28e047ef6e6c succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:40:41.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3675" for this suite.
Feb 12 14:40:48.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:40:48.098: INFO: namespace dns-3675 deletion completed in 6.13420552s

• [SLOW TEST:23.680 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
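The DNS test above runs `dig` probe loops (shown verbatim in the STEP lines) inside wheezy and jessie containers. A simpler sketch of the same idea, resolving the cluster's built-in `kubernetes.default` service from a throwaway pod; busybox's `nslookup` stands in for the suite's `dig`-based probes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-test-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: querier
    image: docker.io/library/busybox:1.29
    # a successful lookup proves cluster DNS (CoreDNS here) answers for service A records
    command: ["sh", "-c", "nslookup kubernetes.default.svc.cluster.local"]
```

The initial "Unable to read" lines in the log are expected while the probe pod warms up; the test only fails if the lookups never succeed within the timeout.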
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:40:48.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:40:56.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1049" for this suite.
Feb 12 14:41:52.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:41:52.581: INFO: namespace kubelet-test-1049 deletion completed in 56.228717365s

• [SLOW TEST:64.483 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:41:52.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Feb 12 14:41:52.704: INFO: Waiting up to 5m0s for pod "pod-7c7c4bbd-45f1-42c8-8436-ad1e354a5abf" in namespace "emptydir-1972" to be "success or failure"
Feb 12 14:41:52.708: INFO: Pod "pod-7c7c4bbd-45f1-42c8-8436-ad1e354a5abf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.583938ms
Feb 12 14:41:54.715: INFO: Pod "pod-7c7c4bbd-45f1-42c8-8436-ad1e354a5abf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011422176s
Feb 12 14:41:56.751: INFO: Pod "pod-7c7c4bbd-45f1-42c8-8436-ad1e354a5abf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047190749s
Feb 12 14:41:58.763: INFO: Pod "pod-7c7c4bbd-45f1-42c8-8436-ad1e354a5abf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058999563s
Feb 12 14:42:00.780: INFO: Pod "pod-7c7c4bbd-45f1-42c8-8436-ad1e354a5abf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.075829103s
STEP: Saw pod success
Feb 12 14:42:00.780: INFO: Pod "pod-7c7c4bbd-45f1-42c8-8436-ad1e354a5abf" satisfied condition "success or failure"
Feb 12 14:42:00.784: INFO: Trying to get logs from node iruya-node pod pod-7c7c4bbd-45f1-42c8-8436-ad1e354a5abf container test-container: 
STEP: delete the pod
Feb 12 14:42:00.841: INFO: Waiting for pod pod-7c7c4bbd-45f1-42c8-8436-ad1e354a5abf to disappear
Feb 12 14:42:00.846: INFO: Pod pod-7c7c4bbd-45f1-42c8-8436-ad1e354a5abf no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:42:00.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1972" for this suite.
Feb 12 14:42:07.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:42:07.117: INFO: namespace emptydir-1972 deletion completed in 6.263761721s

• [SLOW TEST:14.536 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:42:07.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Feb 12 14:42:07.248: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5288" to be "success or failure"
Feb 12 14:42:07.254: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 5.48198ms
Feb 12 14:42:09.262: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01376267s
Feb 12 14:42:11.271: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022412081s
Feb 12 14:42:13.285: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036170714s
Feb 12 14:42:15.295: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046861456s
Feb 12 14:42:17.303: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.054246698s
Feb 12 14:42:19.313: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.064552118s
STEP: Saw pod success
Feb 12 14:42:19.313: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb 12 14:42:19.318: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb 12 14:42:19.482: INFO: Waiting for pod pod-host-path-test to disappear
Feb 12 14:42:19.495: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:42:19.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-5288" for this suite.
Feb 12 14:42:25.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:42:25.798: INFO: namespace hostpath-5288 deletion completed in 6.295750164s

• [SLOW TEST:18.681 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
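The HostPath test mounts a directory from the node's filesystem and checks its mode. A sketch of a pod like `pod-host-path-test`, reusing the pod and container names from the log but with an assumed host path and a busybox command standing in for the suite's test image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-test     # matches the pod name in the log
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1     # matches the container name in the log
    image: docker.io/library/busybox:1.29
    # print the mode of the mounted host directory
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/hostpath-test # assumed path; the log does not show the host directory
```

Because hostPath exposes node storage directly, such pods are node-specific and are generally restricted outside test suites.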
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:42:25.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 12 14:42:25.884: INFO: PodSpec: initContainers in spec.initContainers
Feb 12 14:43:31.902: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-ad81a988-6c44-466e-86ff-8b20f510f2ad", GenerateName:"", Namespace:"init-container-7421", SelfLink:"/api/v1/namespaces/init-container-7421/pods/pod-init-ad81a988-6c44-466e-86ff-8b20f510f2ad", UID:"eb631af7-1922-4338-a0a1-0eb86ebf4a9d", ResourceVersion:"24085459", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717115345, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"884170641"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-s8q8q", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001a1e600), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-s8q8q", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-s8q8q", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-s8q8q", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002f5d648), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc0032ed500), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002f5d6d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002f5d6f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002f5d6f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002f5d6fc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717115346, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717115346, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717115346, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717115345, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc0031e7d80), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001f82f50)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001f82fc0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://b75b0feb368789ecc1b6c0195964d694bd08a1a4d3d689391be937752f4e1451"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0031e7dc0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0031e7da0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:43:31.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7421" for this suite.
Feb 12 14:43:53.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:43:54.062: INFO: namespace init-container-7421 deletion completed in 22.149410125s

• [SLOW TEST:88.263 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
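The pod dump in the failed-init-container test above prints CPU and memory limits as `resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, ..., s:"100m"}`. As a minimal illustrative sketch (not the real `k8s.io/apimachinery` quantity parser, which also handles binary suffixes like `Ki`/`Mi` and exponent forms), the decimal-suffix part of that decomposition looks like this:

```python
# Sketch of how the "100m" CPU and "52428800" memory strings in the pod dump
# decompose into a (value, scale) pair, mirroring int64Amount{value:100, scale:-3}.
# SUFFIX_SCALE covers only decimal SI suffixes; the real parser handles more.
SUFFIX_SCALE = {"m": -3, "k": 3, "M": 6, "G": 9}

def parse_quantity(s: str) -> tuple[int, int]:
    """Return (value, scale) such that quantity = value * 10**scale."""
    for suffix, scale in SUFFIX_SCALE.items():
        if s.endswith(suffix):
            return int(s[: -len(suffix)]), scale
    return int(s), 0  # bare integers carry scale 0, as in the memory limit

print(parse_quantity("100m"))      # (100, -3)  -> 0.1 CPU cores
print(parse_quantity("52428800"))  # (52428800, 0) -> 50 MiB of memory
```

Note that identical `Limits` and `Requests` in the dump are what give the pod its `QOSClass:"Guaranteed"` status shown at the end of the struct.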
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:43:54.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-70356283-9753-47ba-b182-73b96ccca7b3
STEP: Creating a pod to test consume secrets
Feb 12 14:43:54.154: INFO: Waiting up to 5m0s for pod "pod-secrets-a6c1a76a-35fc-4cc8-b559-f46650e7f6ee" in namespace "secrets-7790" to be "success or failure"
Feb 12 14:43:54.160: INFO: Pod "pod-secrets-a6c1a76a-35fc-4cc8-b559-f46650e7f6ee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127666ms
Feb 12 14:43:56.170: INFO: Pod "pod-secrets-a6c1a76a-35fc-4cc8-b559-f46650e7f6ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016304302s
Feb 12 14:43:58.175: INFO: Pod "pod-secrets-a6c1a76a-35fc-4cc8-b559-f46650e7f6ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02121597s
Feb 12 14:44:00.184: INFO: Pod "pod-secrets-a6c1a76a-35fc-4cc8-b559-f46650e7f6ee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030203804s
Feb 12 14:44:02.194: INFO: Pod "pod-secrets-a6c1a76a-35fc-4cc8-b559-f46650e7f6ee": Phase="Pending", Reason="", readiness=false. Elapsed: 8.040220874s
Feb 12 14:44:04.200: INFO: Pod "pod-secrets-a6c1a76a-35fc-4cc8-b559-f46650e7f6ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.045485592s
STEP: Saw pod success
Feb 12 14:44:04.200: INFO: Pod "pod-secrets-a6c1a76a-35fc-4cc8-b559-f46650e7f6ee" satisfied condition "success or failure"
Feb 12 14:44:04.205: INFO: Trying to get logs from node iruya-node pod pod-secrets-a6c1a76a-35fc-4cc8-b559-f46650e7f6ee container secret-volume-test: 
STEP: delete the pod
Feb 12 14:44:04.321: INFO: Waiting for pod pod-secrets-a6c1a76a-35fc-4cc8-b559-f46650e7f6ee to disappear
Feb 12 14:44:04.327: INFO: Pod pod-secrets-a6c1a76a-35fc-4cc8-b559-f46650e7f6ee no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:44:04.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7790" for this suite.
Feb 12 14:44:10.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:44:10.544: INFO: namespace secrets-7790 deletion completed in 6.211058418s

• [SLOW TEST:16.482 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
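The secret-volume test above exercises a `defaultMode` on the volume. One detail worth noting when reading such specs: the Kubernetes API serializes `defaultMode` as a plain decimal integer in JSON, so familiar octal file modes must be converted. A small sketch (the modes below are ordinary POSIX examples, not values taken from this test's unshown pod spec):

```python
# Hedged sketch: convert an octal mode string (as written in YAML, e.g. "0400")
# to the decimal integer the Kubernetes API stores for defaultMode in JSON.
def octal_mode_to_json(mode_str: str) -> int:
    return int(mode_str, 8)

print(octal_mode_to_json("0666"))  # 438
print(octal_mode_to_json("0400"))  # 256
```

This is why a manifest that says `defaultMode: 0400` round-trips through the API as `defaultMode: 256`.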
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:44:10.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 12 14:44:10.736: INFO: Number of nodes with available pods: 0
Feb 12 14:44:10.737: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:44:11.768: INFO: Number of nodes with available pods: 0
Feb 12 14:44:11.768: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:44:12.754: INFO: Number of nodes with available pods: 0
Feb 12 14:44:12.754: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:44:13.759: INFO: Number of nodes with available pods: 0
Feb 12 14:44:13.760: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:44:14.763: INFO: Number of nodes with available pods: 0
Feb 12 14:44:14.763: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:44:17.584: INFO: Number of nodes with available pods: 0
Feb 12 14:44:17.584: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:44:17.757: INFO: Number of nodes with available pods: 0
Feb 12 14:44:17.757: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:44:18.753: INFO: Number of nodes with available pods: 0
Feb 12 14:44:18.753: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:44:19.757: INFO: Number of nodes with available pods: 0
Feb 12 14:44:19.757: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:44:20.755: INFO: Number of nodes with available pods: 0
Feb 12 14:44:20.755: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:44:21.758: INFO: Number of nodes with available pods: 2
Feb 12 14:44:21.758: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb 12 14:44:21.799: INFO: Number of nodes with available pods: 1
Feb 12 14:44:21.799: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 12 14:44:22.810: INFO: Number of nodes with available pods: 1
Feb 12 14:44:22.811: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 12 14:44:24.063: INFO: Number of nodes with available pods: 1
Feb 12 14:44:24.063: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 12 14:44:24.817: INFO: Number of nodes with available pods: 1
Feb 12 14:44:24.817: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 12 14:44:25.818: INFO: Number of nodes with available pods: 1
Feb 12 14:44:25.818: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 12 14:44:26.810: INFO: Number of nodes with available pods: 1
Feb 12 14:44:26.810: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 12 14:44:27.825: INFO: Number of nodes with available pods: 1
Feb 12 14:44:27.825: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 12 14:44:28.820: INFO: Number of nodes with available pods: 1
Feb 12 14:44:28.820: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 12 14:44:29.841: INFO: Number of nodes with available pods: 1
Feb 12 14:44:29.841: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 12 14:44:30.823: INFO: Number of nodes with available pods: 1
Feb 12 14:44:30.823: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 12 14:44:32.456: INFO: Number of nodes with available pods: 1
Feb 12 14:44:32.456: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 12 14:44:32.833: INFO: Number of nodes with available pods: 1
Feb 12 14:44:32.833: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 12 14:44:33.809: INFO: Number of nodes with available pods: 1
Feb 12 14:44:33.809: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 12 14:44:34.811: INFO: Number of nodes with available pods: 1
Feb 12 14:44:34.811: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 12 14:44:35.815: INFO: Number of nodes with available pods: 2
Feb 12 14:44:35.815: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3239, will wait for the garbage collector to delete the pods
Feb 12 14:44:35.895: INFO: Deleting DaemonSet.extensions daemon-set took: 21.343793ms
Feb 12 14:44:36.196: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.783761ms
Feb 12 14:44:47.912: INFO: Number of nodes with available pods: 0
Feb 12 14:44:47.913: INFO: Number of running nodes: 0, number of available pods: 0
Feb 12 14:44:47.924: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3239/daemonsets","resourceVersion":"24085656"},"items":null}

Feb 12 14:44:47.930: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3239/pods","resourceVersion":"24085656"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:44:47.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3239" for this suite.
Feb 12 14:44:54.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:44:54.123: INFO: namespace daemonsets-3239 deletion completed in 6.120030209s

• [SLOW TEST:43.578 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
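The repeated "Number of nodes with available pods" lines above are the e2e framework polling roughly once per second until every node runs an available daemon pod. An illustrative sketch of that loop, with `get_available_nodes` as a stand-in for the real framework call (faked here with a canned sequence):

```python
import time

# Poll-until-ready loop as seen in the log: check availability on an interval
# until the available count matches the expected node count or a deadline passes.
def wait_for_daemonset(get_available_nodes, want_nodes, timeout=300, interval=1.0):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_available_nodes() == want_nodes:
            return True
        time.sleep(interval)
    return False

# Fake poll results: the two nodes become ready on the third check,
# like the 0 -> 0 -> ... -> 2 progression in the log above.
results = iter([0, 1, 2])
print(wait_for_daemonset(lambda: next(results), want_nodes=2, interval=0.01))  # True
```

The same pattern repeats in the "revived" phase of the test: after one pod is deleted, the count drops to 1 and the loop runs again until the controller restores it to 2.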
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:44:54.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 12 14:44:54.243: INFO: Number of nodes with available pods: 0
Feb 12 14:44:54.243: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:44:55.264: INFO: Number of nodes with available pods: 0
Feb 12 14:44:55.264: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:44:56.403: INFO: Number of nodes with available pods: 0
Feb 12 14:44:56.403: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:44:57.299: INFO: Number of nodes with available pods: 0
Feb 12 14:44:57.300: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:44:58.266: INFO: Number of nodes with available pods: 0
Feb 12 14:44:58.267: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:45:00.892: INFO: Number of nodes with available pods: 0
Feb 12 14:45:00.892: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:45:01.262: INFO: Number of nodes with available pods: 0
Feb 12 14:45:01.262: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:45:02.328: INFO: Number of nodes with available pods: 0
Feb 12 14:45:02.328: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:45:03.295: INFO: Number of nodes with available pods: 1
Feb 12 14:45:03.295: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:45:04.258: INFO: Number of nodes with available pods: 1
Feb 12 14:45:04.258: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:45:05.261: INFO: Number of nodes with available pods: 2
Feb 12 14:45:05.261: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb 12 14:45:05.432: INFO: Number of nodes with available pods: 1
Feb 12 14:45:05.433: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:45:06.456: INFO: Number of nodes with available pods: 1
Feb 12 14:45:06.456: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:45:07.446: INFO: Number of nodes with available pods: 1
Feb 12 14:45:07.446: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:45:08.448: INFO: Number of nodes with available pods: 1
Feb 12 14:45:08.448: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:45:09.455: INFO: Number of nodes with available pods: 1
Feb 12 14:45:09.455: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:45:10.452: INFO: Number of nodes with available pods: 1
Feb 12 14:45:10.452: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:45:11.453: INFO: Number of nodes with available pods: 1
Feb 12 14:45:11.454: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:45:12.448: INFO: Number of nodes with available pods: 1
Feb 12 14:45:12.449: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:45:13.448: INFO: Number of nodes with available pods: 1
Feb 12 14:45:13.448: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:45:14.448: INFO: Number of nodes with available pods: 1
Feb 12 14:45:14.448: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:45:15.459: INFO: Number of nodes with available pods: 1
Feb 12 14:45:15.460: INFO: Node iruya-node is running more than one daemon pod
Feb 12 14:45:16.460: INFO: Number of nodes with available pods: 2
Feb 12 14:45:16.460: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3763, will wait for the garbage collector to delete the pods
Feb 12 14:45:16.551: INFO: Deleting DaemonSet.extensions daemon-set took: 23.939502ms
Feb 12 14:45:16.852: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.652738ms
Feb 12 14:45:27.963: INFO: Number of nodes with available pods: 0
Feb 12 14:45:27.963: INFO: Number of running nodes: 0, number of available pods: 0
Feb 12 14:45:27.968: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3763/daemonsets","resourceVersion":"24085787"},"items":null}

Feb 12 14:45:27.973: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3763/pods","resourceVersion":"24085787"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:45:27.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3763" for this suite.
Feb 12 14:45:34.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:45:34.150: INFO: namespace daemonsets-3763 deletion completed in 6.147234136s

• [SLOW TEST:40.027 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:45:34.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 12 14:45:34.222: INFO: Waiting up to 5m0s for pod "pod-dbf95518-35f0-4dba-b288-f126604bb2de" in namespace "emptydir-8137" to be "success or failure"
Feb 12 14:45:34.232: INFO: Pod "pod-dbf95518-35f0-4dba-b288-f126604bb2de": Phase="Pending", Reason="", readiness=false. Elapsed: 9.044708ms
Feb 12 14:45:36.237: INFO: Pod "pod-dbf95518-35f0-4dba-b288-f126604bb2de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014415216s
Feb 12 14:45:38.246: INFO: Pod "pod-dbf95518-35f0-4dba-b288-f126604bb2de": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023274809s
Feb 12 14:45:40.257: INFO: Pod "pod-dbf95518-35f0-4dba-b288-f126604bb2de": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034676995s
Feb 12 14:45:42.272: INFO: Pod "pod-dbf95518-35f0-4dba-b288-f126604bb2de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049164005s
STEP: Saw pod success
Feb 12 14:45:42.272: INFO: Pod "pod-dbf95518-35f0-4dba-b288-f126604bb2de" satisfied condition "success or failure"
Feb 12 14:45:42.279: INFO: Trying to get logs from node iruya-node pod pod-dbf95518-35f0-4dba-b288-f126604bb2de container test-container: 
STEP: delete the pod
Feb 12 14:45:42.351: INFO: Waiting for pod pod-dbf95518-35f0-4dba-b288-f126604bb2de to disappear
Feb 12 14:45:42.356: INFO: Pod pod-dbf95518-35f0-4dba-b288-f126604bb2de no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:45:42.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8137" for this suite.
Feb 12 14:45:48.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:45:48.511: INFO: namespace emptydir-8137 deletion completed in 6.14822766s

• [SLOW TEST:14.360 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
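The emptydir test above asserts that a file on a tmpfs-backed mount carries mode 0666. A local sketch of the permission check the (unshown) test container effectively performs, with an ordinary temp file standing in for the volume:

```python
import os
import stat
import tempfile

# Create a scratch file, force mode 0666 (chmod ignores the umask),
# then read the permission bits back the way a verifier would.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.chmod(path, 0o666)
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o666
os.unlink(path)
```

The `[LinuxOnly]` tag on the test reflects that these POSIX mode semantics do not carry over to Windows nodes.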
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:45:48.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Feb 12 14:45:48.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2508 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb 12 14:46:00.910: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0212 14:45:59.698881    3325 log.go:172] (0xc000700580) (0xc00063e280) Create stream\nI0212 14:45:59.699030    3325 log.go:172] (0xc000700580) (0xc00063e280) Stream added, broadcasting: 1\nI0212 14:45:59.708199    3325 log.go:172] (0xc000700580) Reply frame received for 1\nI0212 14:45:59.708260    3325 log.go:172] (0xc000700580) (0xc00039cd20) Create stream\nI0212 14:45:59.708270    3325 log.go:172] (0xc000700580) (0xc00039cd20) Stream added, broadcasting: 3\nI0212 14:45:59.710656    3325 log.go:172] (0xc000700580) Reply frame received for 3\nI0212 14:45:59.710702    3325 log.go:172] (0xc000700580) (0xc00039cdc0) Create stream\nI0212 14:45:59.710722    3325 log.go:172] (0xc000700580) (0xc00039cdc0) Stream added, broadcasting: 5\nI0212 14:45:59.712584    3325 log.go:172] (0xc000700580) Reply frame received for 5\nI0212 14:45:59.712622    3325 log.go:172] (0xc000700580) (0xc000a8a000) Create stream\nI0212 14:45:59.712633    3325 log.go:172] (0xc000700580) (0xc000a8a000) Stream added, broadcasting: 7\nI0212 14:45:59.716201    3325 log.go:172] (0xc000700580) Reply frame received for 7\nI0212 14:45:59.716625    3325 log.go:172] (0xc00039cd20) (3) Writing data frame\nI0212 14:45:59.716906    3325 log.go:172] (0xc00039cd20) (3) Writing data frame\nI0212 14:45:59.724554    3325 log.go:172] (0xc000700580) Data frame received for 5\nI0212 14:45:59.724586    3325 log.go:172] (0xc00039cdc0) (5) Data frame handling\nI0212 14:45:59.724617    3325 log.go:172] (0xc00039cdc0) (5) Data frame sent\nI0212 14:45:59.729945    3325 log.go:172] (0xc000700580) Data frame received for 5\nI0212 14:45:59.730041    3325 log.go:172] (0xc00039cdc0) (5) Data frame handling\nI0212 14:45:59.730060    3325 log.go:172] (0xc00039cdc0) (5) Data frame 
sent\nI0212 14:46:00.853054    3325 log.go:172] (0xc000700580) Data frame received for 1\nI0212 14:46:00.853369    3325 log.go:172] (0xc000700580) (0xc00039cdc0) Stream removed, broadcasting: 5\nI0212 14:46:00.853566    3325 log.go:172] (0xc00063e280) (1) Data frame handling\nI0212 14:46:00.853643    3325 log.go:172] (0xc00063e280) (1) Data frame sent\nI0212 14:46:00.854168    3325 log.go:172] (0xc000700580) (0xc00039cd20) Stream removed, broadcasting: 3\nI0212 14:46:00.854232    3325 log.go:172] (0xc000700580) (0xc00063e280) Stream removed, broadcasting: 1\nI0212 14:46:00.855696    3325 log.go:172] (0xc000700580) (0xc000a8a000) Stream removed, broadcasting: 7\nI0212 14:46:00.855775    3325 log.go:172] (0xc000700580) (0xc00063e280) Stream removed, broadcasting: 1\nI0212 14:46:00.855794    3325 log.go:172] (0xc000700580) (0xc00039cd20) Stream removed, broadcasting: 3\nI0212 14:46:00.855805    3325 log.go:172] (0xc000700580) (0xc00039cdc0) Stream removed, broadcasting: 5\nI0212 14:46:00.855820    3325 log.go:172] (0xc000700580) (0xc000a8a000) Stream removed, broadcasting: 7\n"
Feb 12 14:46:00.910: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:46:02.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2508" for this suite.
Feb 12 14:46:09.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:46:09.145: INFO: namespace kubectl-2508 deletion completed in 6.21375963s

• [SLOW TEST:20.633 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
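The `--rm job` test above pipes `abcd1234` into `sh -c "cat && echo 'stdin closed'"` inside the busybox pod and expects the concatenated stdout captured in the log. The shell pipeline itself can be reproduced locally with no cluster:

```shell
# Local reproduction of the job's command: `cat` copies stdin verbatim
# (no trailing newline from printf), then echo appends the marker,
# matching the "abcd1234stdin closed" stdout in the log above.
printf 'abcd1234' | sh -c "cat && echo 'stdin closed'"
# prints: abcd1234stdin closed
```

The deprecation warning in the captured stderr (`--generator=job/v1 is DEPRECATED`) is expected on this v1.15 cluster; later kubectl releases removed the generator flags entirely.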
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:46:09.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-22920c1e-12ee-4eb2-8f79-c6c396fb2b9d
STEP: Creating a pod to test consume configMaps
Feb 12 14:46:09.235: INFO: Waiting up to 5m0s for pod "pod-configmaps-aae12405-8668-4d56-8613-3fc13133bcfc" in namespace "configmap-3835" to be "success or failure"
Feb 12 14:46:09.242: INFO: Pod "pod-configmaps-aae12405-8668-4d56-8613-3fc13133bcfc": Phase="Pending", Reason="", readiness=false. Elapsed: 7.058468ms
Feb 12 14:46:11.250: INFO: Pod "pod-configmaps-aae12405-8668-4d56-8613-3fc13133bcfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014784998s
Feb 12 14:46:13.260: INFO: Pod "pod-configmaps-aae12405-8668-4d56-8613-3fc13133bcfc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024783534s
Feb 12 14:46:15.273: INFO: Pod "pod-configmaps-aae12405-8668-4d56-8613-3fc13133bcfc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037669545s
Feb 12 14:46:17.282: INFO: Pod "pod-configmaps-aae12405-8668-4d56-8613-3fc13133bcfc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046431728s
Feb 12 14:46:19.289: INFO: Pod "pod-configmaps-aae12405-8668-4d56-8613-3fc13133bcfc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.053439964s
STEP: Saw pod success
Feb 12 14:46:19.289: INFO: Pod "pod-configmaps-aae12405-8668-4d56-8613-3fc13133bcfc" satisfied condition "success or failure"
Feb 12 14:46:19.294: INFO: Trying to get logs from node iruya-node pod pod-configmaps-aae12405-8668-4d56-8613-3fc13133bcfc container configmap-volume-test: 
STEP: delete the pod
Feb 12 14:46:19.399: INFO: Waiting for pod pod-configmaps-aae12405-8668-4d56-8613-3fc13133bcfc to disappear
Feb 12 14:46:19.406: INFO: Pod pod-configmaps-aae12405-8668-4d56-8613-3fc13133bcfc no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:46:19.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3835" for this suite.
Feb 12 14:46:25.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:46:25.607: INFO: namespace configmap-3835 deletion completed in 6.194275589s

• [SLOW TEST:16.460 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
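The ConfigMap test above mounts a ConfigMap as a volume with `defaultMode` set and asserts on the resulting file permission bits. A minimal sketch of the kind of pod manifest involved (names, image, and paths are illustrative, not taken from the test source; the e2e framework generates its own names):

```python
# Sketch of a pod consuming a ConfigMap volume with defaultMode set.
# All names here are hypothetical stand-ins for the generated ones in the log.

def configmap_volume_pod(cm_name, mode=0o400):
    """Build a pod manifest mounting ConfigMap `cm_name` with defaultMode."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-configmaps-example"},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "configmap-volume-test",
                "image": "busybox",
                # Print the file mode so it can be asserted from the logs.
                "command": ["sh", "-c", "stat -c '%a' /etc/cm/data"],
                "volumeMounts": [{"name": "cm", "mountPath": "/etc/cm"}],
            }],
            "volumes": [{
                "name": "cm",
                "configMap": {
                    "name": cm_name,
                    # The API field is a decimal integer: 0o400 -> 256.
                    "defaultMode": mode,
                },
            }],
        },
    }

pod = configmap_volume_pod("configmap-test-volume-example")
```

The "success or failure" polling seen in the log corresponds to waiting for such a `restartPolicy: Never` pod to reach the `Succeeded` phase.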
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:46:25.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-1c8415ef-c6ad-4e56-a4e0-f5082ab8bc74 in namespace container-probe-4105
Feb 12 14:46:35.811: INFO: Started pod busybox-1c8415ef-c6ad-4e56-a4e0-f5082ab8bc74 in namespace container-probe-4105
STEP: checking the pod's current state and verifying that restartCount is present
Feb 12 14:46:35.816: INFO: Initial restart count of pod busybox-1c8415ef-c6ad-4e56-a4e0-f5082ab8bc74 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:50:37.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4105" for this suite.
Feb 12 14:50:43.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:50:43.685: INFO: namespace container-probe-4105 deletion completed in 6.212894903s

• [SLOW TEST:258.078 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
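The probe test above starts a busybox pod whose health file exists, then watches for roughly four minutes to confirm `restartCount` never moves off its initial value of 0. A hedged sketch of the probe configuration being exercised (timings and names are illustrative):

```python
def exec_liveness_pod():
    """Pod whose liveness probe runs `cat /tmp/health`. The command creates
    the file before sleeping, so the probe keeps succeeding and the kubelet
    never restarts the container."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "busybox-liveness-example"},
        "spec": {
            "containers": [{
                "name": "busybox",
                "image": "busybox",
                # Create the health file first, then stay alive.
                "command": ["sh", "-c", "echo ok > /tmp/health; sleep 600"],
                "livenessProbe": {
                    "exec": {"command": ["cat", "/tmp/health"]},
                    "initialDelaySeconds": 15,
                    "periodSeconds": 5,
                },
            }],
        },
    }

pod = exec_liveness_pod()
```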
SS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:50:43.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:50:53.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6584" for this suite.
Feb 12 14:51:38.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:51:38.110: INFO: namespace kubelet-test-6584 deletion completed in 44.169520829s

• [SLOW TEST:54.424 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
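The read-only test above relies on the container security context rather than any mount option: with `readOnlyRootFilesystem: true`, a write anywhere on `/` fails at the container runtime level. A sketch under those assumptions (names and command are illustrative):

```python
def readonly_root_pod():
    """Pod whose container cannot write to its root filesystem; a write
    attempt, as in the test above, fails with a read-only error."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "busybox-readonly-example"},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "busybox",
                "image": "busybox",
                # This write is expected to fail: / is mounted read-only.
                "command": ["sh", "-c", "echo test > /file"],
                "securityContext": {"readOnlyRootFilesystem": True},
            }],
        },
    }

pod = readonly_root_pod()
```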
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:51:38.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:51:38.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-293" for this suite.
Feb 12 14:51:44.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:51:44.604: INFO: namespace kubelet-test-293 deletion completed in 6.272079785s

• [SLOW TEST:6.493 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:51:44.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb 12 14:51:51.312: INFO: 0 pods remaining
Feb 12 14:51:51.312: INFO: 0 pods have nil DeletionTimestamp
Feb 12 14:51:51.312: INFO: 
STEP: Gathering metrics
W0212 14:51:53.073038       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 12 14:51:53.073: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:51:53.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9894" for this suite.
Feb 12 14:52:05.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:52:05.224: INFO: namespace gc-9894 deletion completed in 12.138934817s

• [SLOW TEST:20.620 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
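The garbage-collector test above deletes an RC with deleteOptions that keep the owner around until all its pods are gone. That is foreground cascading deletion: the server attaches a `foregroundDeletion` finalizer to the RC, the GC deletes the dependents first, and only then does the RC itself disappear. A rough sketch of the delete request body, based on the standard DeleteOptions API:

```python
import json

def foreground_delete_options():
    """DeleteOptions asking the garbage collector to delete dependents
    before the owner (the 'keep the rc around' behavior above).
    Other values for propagationPolicy are Background and Orphan."""
    return {
        "apiVersion": "v1",
        "kind": "DeleteOptions",
        "propagationPolicy": "Foreground",
    }

body = json.dumps(foreground_delete_options())
```

The log line "0 pods remaining" is the test polling the dependents down to zero before confirming the RC was finally removed.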
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:52:05.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 12 14:52:05.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-3622'
Feb 12 14:52:05.494: INFO: stderr: ""
Feb 12 14:52:05.494: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Feb 12 14:52:15.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-3622 -o json'
Feb 12 14:52:15.705: INFO: stderr: ""
Feb 12 14:52:15.705: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-12T14:52:05Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-3622\",\n        \"resourceVersion\": \"24086604\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-3622/pods/e2e-test-nginx-pod\",\n        \"uid\": \"358544f7-9f21-42b9-966b-87c4da5bd203\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-sh6ck\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": 
\"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-sh6ck\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-sh6ck\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-12T14:52:05Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-12T14:52:13Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-12T14:52:13Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-12T14:52:05Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://b006bfcb56bfd27d5bd032d804c137b9c97eddb557a0d1eaf04e4655cec370e4\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": 
\"2020-02-12T14:52:13Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-12T14:52:05Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb 12 14:52:15.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3622'
Feb 12 14:52:16.192: INFO: stderr: ""
Feb 12 14:52:16.192: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Feb 12 14:52:16.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-3622'
Feb 12 14:52:22.744: INFO: stderr: ""
Feb 12 14:52:22.744: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:52:22.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3622" for this suite.
Feb 12 14:52:28.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:52:28.959: INFO: namespace kubectl-3622 deletion completed in 6.183476497s

• [SLOW TEST:23.735 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
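The replace test above fetches the running pod as JSON (`kubectl get pod ... -o json`), swaps the container image, and feeds the result back through `kubectl replace -f -`. The interesting step is a pure data transformation on the fetched manifest, sketched here (field paths follow the Pod schema; the image names are the ones visible in the log):

```python
import copy

def replace_image(pod, new_image):
    """Return a copy of `pod` with the first container's image replaced,
    as `kubectl replace` would be fed after editing the fetched JSON."""
    updated = copy.deepcopy(pod)
    updated["spec"]["containers"][0]["image"] = new_image
    return updated

# Minimal stand-in for the JSON kubectl returned in the log above.
pod = {"spec": {"containers": [{
    "name": "e2e-test-nginx-pod",
    "image": "docker.io/library/nginx:1.14-alpine",
}]}}
patched = replace_image(pod, "docker.io/library/busybox:1.29")
```

`replace` performs a full-object update, so fields like `metadata.resourceVersion` from the fetched copy are what let the server accept the write.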
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:52:28.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-kpxb
STEP: Creating a pod to test atomic-volume-subpath
Feb 12 14:52:29.093: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-kpxb" in namespace "subpath-3975" to be "success or failure"
Feb 12 14:52:29.099: INFO: Pod "pod-subpath-test-downwardapi-kpxb": Phase="Pending", Reason="", readiness=false. Elapsed: 5.578982ms
Feb 12 14:52:31.111: INFO: Pod "pod-subpath-test-downwardapi-kpxb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018326651s
Feb 12 14:52:33.124: INFO: Pod "pod-subpath-test-downwardapi-kpxb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030689901s
Feb 12 14:52:35.135: INFO: Pod "pod-subpath-test-downwardapi-kpxb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042423138s
Feb 12 14:52:37.144: INFO: Pod "pod-subpath-test-downwardapi-kpxb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051205635s
Feb 12 14:52:39.160: INFO: Pod "pod-subpath-test-downwardapi-kpxb": Phase="Running", Reason="", readiness=true. Elapsed: 10.066882723s
Feb 12 14:52:41.169: INFO: Pod "pod-subpath-test-downwardapi-kpxb": Phase="Running", Reason="", readiness=true. Elapsed: 12.075560535s
Feb 12 14:52:43.180: INFO: Pod "pod-subpath-test-downwardapi-kpxb": Phase="Running", Reason="", readiness=true. Elapsed: 14.087144355s
Feb 12 14:52:45.191: INFO: Pod "pod-subpath-test-downwardapi-kpxb": Phase="Running", Reason="", readiness=true. Elapsed: 16.098233141s
Feb 12 14:52:47.201: INFO: Pod "pod-subpath-test-downwardapi-kpxb": Phase="Running", Reason="", readiness=true. Elapsed: 18.108268047s
Feb 12 14:52:49.213: INFO: Pod "pod-subpath-test-downwardapi-kpxb": Phase="Running", Reason="", readiness=true. Elapsed: 20.120074657s
Feb 12 14:52:51.224: INFO: Pod "pod-subpath-test-downwardapi-kpxb": Phase="Running", Reason="", readiness=true. Elapsed: 22.13071049s
Feb 12 14:52:53.251: INFO: Pod "pod-subpath-test-downwardapi-kpxb": Phase="Running", Reason="", readiness=true. Elapsed: 24.15824011s
Feb 12 14:52:55.261: INFO: Pod "pod-subpath-test-downwardapi-kpxb": Phase="Running", Reason="", readiness=true. Elapsed: 26.168412719s
Feb 12 14:52:57.273: INFO: Pod "pod-subpath-test-downwardapi-kpxb": Phase="Running", Reason="", readiness=true. Elapsed: 28.180210233s
Feb 12 14:52:59.279: INFO: Pod "pod-subpath-test-downwardapi-kpxb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.185574174s
STEP: Saw pod success
Feb 12 14:52:59.279: INFO: Pod "pod-subpath-test-downwardapi-kpxb" satisfied condition "success or failure"
Feb 12 14:52:59.282: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-kpxb container test-container-subpath-downwardapi-kpxb: 
STEP: delete the pod
Feb 12 14:52:59.426: INFO: Waiting for pod pod-subpath-test-downwardapi-kpxb to disappear
Feb 12 14:52:59.429: INFO: Pod pod-subpath-test-downwardapi-kpxb no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-kpxb
Feb 12 14:52:59.429: INFO: Deleting pod "pod-subpath-test-downwardapi-kpxb" in namespace "subpath-3975"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:52:59.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3975" for this suite.
Feb 12 14:53:05.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:53:05.613: INFO: namespace subpath-3975 deletion completed in 6.176777992s

• [SLOW TEST:36.654 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
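The subpath test above combines an atomic-writer volume (downwardAPI) with a `subPath` mount, so the container sees a single projected file rather than the whole volume. A hedged sketch of that shape (names and the projected field are illustrative):

```python
def downwardapi_subpath_pod():
    """Pod mounting one entry of a downwardAPI volume via subPath —
    the atomic-writer scenario the test above exercises."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-subpath-test-downwardapi-example",
                     "labels": {"podname": "example"}},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "test-container-subpath-downwardapi",
                "image": "busybox",
                "command": ["sh", "-c", "cat /subpath/podname"],
                "volumeMounts": [{
                    "name": "downward",
                    "mountPath": "/subpath/podname",
                    # subPath selects one entry out of the volume.
                    "subPath": "podname",
                }],
            }],
            "volumes": [{
                "name": "downward",
                "downwardAPI": {
                    "items": [{
                        "path": "podname",
                        "fieldRef": {"fieldPath":
                                     "metadata.labels['podname']"},
                    }],
                },
            }],
        },
    }

pod = downwardapi_subpath_pod()
```

The long `Running` stretch in the log (readiness=true for ~20s) is the container repeatedly reading the file while the atomic writer updates the volume underneath.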
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:53:05.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 12 14:53:05.755: INFO: Waiting up to 5m0s for pod "pod-1c1cc6c7-c7bc-4e5b-a66e-06023e3e7ac0" in namespace "emptydir-8084" to be "success or failure"
Feb 12 14:53:05.773: INFO: Pod "pod-1c1cc6c7-c7bc-4e5b-a66e-06023e3e7ac0": Phase="Pending", Reason="", readiness=false. Elapsed: 17.197124ms
Feb 12 14:53:07.787: INFO: Pod "pod-1c1cc6c7-c7bc-4e5b-a66e-06023e3e7ac0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031734279s
Feb 12 14:53:09.868: INFO: Pod "pod-1c1cc6c7-c7bc-4e5b-a66e-06023e3e7ac0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112918387s
Feb 12 14:53:11.877: INFO: Pod "pod-1c1cc6c7-c7bc-4e5b-a66e-06023e3e7ac0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121926302s
Feb 12 14:53:13.890: INFO: Pod "pod-1c1cc6c7-c7bc-4e5b-a66e-06023e3e7ac0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.134034065s
Feb 12 14:53:15.912: INFO: Pod "pod-1c1cc6c7-c7bc-4e5b-a66e-06023e3e7ac0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.156611884s
STEP: Saw pod success
Feb 12 14:53:15.912: INFO: Pod "pod-1c1cc6c7-c7bc-4e5b-a66e-06023e3e7ac0" satisfied condition "success or failure"
Feb 12 14:53:15.918: INFO: Trying to get logs from node iruya-node pod pod-1c1cc6c7-c7bc-4e5b-a66e-06023e3e7ac0 container test-container: 
STEP: delete the pod
Feb 12 14:53:15.980: INFO: Waiting for pod pod-1c1cc6c7-c7bc-4e5b-a66e-06023e3e7ac0 to disappear
Feb 12 14:53:16.057: INFO: Pod pod-1c1cc6c7-c7bc-4e5b-a66e-06023e3e7ac0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:53:16.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8084" for this suite.
Feb 12 14:53:22.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:53:22.185: INFO: namespace emptydir-8084 deletion completed in 6.120149922s

• [SLOW TEST:16.572 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:53:22.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Feb 12 14:53:33.416: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:53:34.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-3000" for this suite.
Feb 12 14:53:58.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:53:58.673: INFO: namespace replicaset-3000 deletion completed in 24.176134782s

• [SLOW TEST:36.487 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
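Adoption and release in the ReplicaSet test above are driven purely by label-selector matching: the controller adopts an orphan whose labels satisfy its selector, and releases (drops the ownerReference of) a pod whose labels stop matching. A toy sketch of the equality-based matching rule, not the controller itself:

```python
def selector_matches(selector, labels):
    """True when every key/value pair in `selector` appears in `labels` —
    the equality-based matching ReplicaSets use for adoption."""
    return all(labels.get(k) == v for k, v in selector.items())

# Orphan pod with a matching 'name' label -> adopted.
selector = {"name": "pod-adoption-release"}
adopted = selector_matches(selector, {"name": "pod-adoption-release"})
# After the test relabels the pod, it no longer matches -> released.
released = not selector_matches(selector, {"name": "relabeled"})
```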
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:53:58.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Feb 12 14:54:10.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-db563bd4-55cf-462e-b3a8-a5b875dc8db5 -c busybox-main-container --namespace=emptydir-3735 -- cat /usr/share/volumeshare/shareddata.txt'
Feb 12 14:54:11.436: INFO: stderr: "I0212 14:54:11.136242    3419 log.go:172] (0xc000116f20) (0xc000646aa0) Create stream\nI0212 14:54:11.136460    3419 log.go:172] (0xc000116f20) (0xc000646aa0) Stream added, broadcasting: 1\nI0212 14:54:11.144192    3419 log.go:172] (0xc000116f20) Reply frame received for 1\nI0212 14:54:11.144318    3419 log.go:172] (0xc000116f20) (0xc0008d6000) Create stream\nI0212 14:54:11.144332    3419 log.go:172] (0xc000116f20) (0xc0008d6000) Stream added, broadcasting: 3\nI0212 14:54:11.147119    3419 log.go:172] (0xc000116f20) Reply frame received for 3\nI0212 14:54:11.147142    3419 log.go:172] (0xc000116f20) (0xc000646b40) Create stream\nI0212 14:54:11.147156    3419 log.go:172] (0xc000116f20) (0xc000646b40) Stream added, broadcasting: 5\nI0212 14:54:11.148557    3419 log.go:172] (0xc000116f20) Reply frame received for 5\nI0212 14:54:11.247179    3419 log.go:172] (0xc000116f20) Data frame received for 3\nI0212 14:54:11.247347    3419 log.go:172] (0xc0008d6000) (3) Data frame handling\nI0212 14:54:11.247395    3419 log.go:172] (0xc0008d6000) (3) Data frame sent\nI0212 14:54:11.424705    3419 log.go:172] (0xc000116f20) Data frame received for 1\nI0212 14:54:11.424904    3419 log.go:172] (0xc000116f20) (0xc0008d6000) Stream removed, broadcasting: 3\nI0212 14:54:11.425017    3419 log.go:172] (0xc000646aa0) (1) Data frame handling\nI0212 14:54:11.425039    3419 log.go:172] (0xc000646aa0) (1) Data frame sent\nI0212 14:54:11.425081    3419 log.go:172] (0xc000116f20) (0xc000646b40) Stream removed, broadcasting: 5\nI0212 14:54:11.425144    3419 log.go:172] (0xc000116f20) (0xc000646aa0) Stream removed, broadcasting: 1\nI0212 14:54:11.425213    3419 log.go:172] (0xc000116f20) Go away received\nI0212 14:54:11.426915    3419 log.go:172] (0xc000116f20) (0xc000646aa0) Stream removed, broadcasting: 1\nI0212 14:54:11.426940    3419 log.go:172] (0xc000116f20) (0xc0008d6000) Stream removed, broadcasting: 3\nI0212 14:54:11.426949    3419 log.go:172] (0xc000116f20) (0xc000646b40) Stream removed, broadcasting: 5\n"
Feb 12 14:54:11.437: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:54:11.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3735" for this suite.
Feb 12 14:54:17.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:54:17.670: INFO: namespace emptydir-3735 deletion completed in 6.224193937s

• [SLOW TEST:18.996 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
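(The shared-volume test above amounts to a pod along the following lines: two containers mounting the same emptyDir, one writing a file that the other can read back via `kubectl exec`. This is an illustrative sketch, not the exact manifest the framework generates; names mirror the log output.)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume
spec:
  volumes:
  - name: shared-data
    emptyDir: {}            # node-local scratch volume shared by both containers
  containers:
  - name: busybox-main-container
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  - name: busybox-sub-container
    image: busybox
    command: ["sh", "-c", "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
```

The `kubectl exec ... cat /usr/share/volumeshare/shareddata.txt` call at the top of this section then reads, from the main container, the file the sub-container wrote.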
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:54:17.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 12 14:54:17.861: INFO: Waiting up to 5m0s for pod "pod-8096aef5-fa5f-4a53-895b-8904746a096f" in namespace "emptydir-7011" to be "success or failure"
Feb 12 14:54:17.871: INFO: Pod "pod-8096aef5-fa5f-4a53-895b-8904746a096f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.277621ms
Feb 12 14:54:19.884: INFO: Pod "pod-8096aef5-fa5f-4a53-895b-8904746a096f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022939055s
Feb 12 14:54:21.892: INFO: Pod "pod-8096aef5-fa5f-4a53-895b-8904746a096f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030873458s
Feb 12 14:54:23.904: INFO: Pod "pod-8096aef5-fa5f-4a53-895b-8904746a096f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042599444s
Feb 12 14:54:25.912: INFO: Pod "pod-8096aef5-fa5f-4a53-895b-8904746a096f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0513108s
Feb 12 14:54:27.924: INFO: Pod "pod-8096aef5-fa5f-4a53-895b-8904746a096f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.062534112s
Feb 12 14:54:29.945: INFO: Pod "pod-8096aef5-fa5f-4a53-895b-8904746a096f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.083798634s
STEP: Saw pod success
Feb 12 14:54:29.945: INFO: Pod "pod-8096aef5-fa5f-4a53-895b-8904746a096f" satisfied condition "success or failure"
Feb 12 14:54:29.952: INFO: Trying to get logs from node iruya-node pod pod-8096aef5-fa5f-4a53-895b-8904746a096f container test-container: 
STEP: delete the pod
Feb 12 14:54:30.323: INFO: Waiting for pod pod-8096aef5-fa5f-4a53-895b-8904746a096f to disappear
Feb 12 14:54:30.332: INFO: Pod pod-8096aef5-fa5f-4a53-895b-8904746a096f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:54:30.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7011" for this suite.
Feb 12 14:54:36.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:54:36.505: INFO: namespace emptydir-7011 deletion completed in 6.164578099s

• [SLOW TEST:18.835 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
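(The (root,0666,default) test above creates a pod that writes a file with mode 0666 into a default-medium emptyDir and verifies the resulting permissions. A sketch of the shape of that pod, assuming the e2e mounttest image and its flags; the exact args are an assumption based on the suite, not copied from this run:)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0666
spec:
  restartPolicy: Never       # test waits for "success or failure", so the pod must terminate
  containers:
  - name: test-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed e2e helper image
    args:                                                    # flags assumed from the mounttest helper
    - --fs_type=/test-volume
    - --new_file_0666=/test-volume/test-file
    - --file_perm=/test-volume/test-file
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}             # "default medium": backed by node disk, not tmpfs
```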
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:54:36.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-457
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb 12 14:54:36.728: INFO: Found 0 stateful pods, waiting for 3
Feb 12 14:54:46.748: INFO: Found 2 stateful pods, waiting for 3
Feb 12 14:54:56.741: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 14:54:56.741: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 14:54:56.741: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 12 14:55:06.752: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 14:55:06.752: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 14:55:06.752: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 12 14:55:06.787: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb 12 14:55:16.893: INFO: Updating stateful set ss2
Feb 12 14:55:16.989: INFO: Waiting for Pod statefulset-457/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Feb 12 14:55:27.420: INFO: Found 2 stateful pods, waiting for 3
Feb 12 14:55:37.429: INFO: Found 2 stateful pods, waiting for 3
Feb 12 14:55:47.431: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 14:55:47.431: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 14:55:47.431: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb 12 14:55:47.504: INFO: Updating stateful set ss2
Feb 12 14:55:47.568: INFO: Waiting for Pod statefulset-457/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 12 14:55:57.639: INFO: Waiting for Pod statefulset-457/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 12 14:56:07.611: INFO: Updating stateful set ss2
Feb 12 14:56:07.665: INFO: Waiting for StatefulSet statefulset-457/ss2 to complete update
Feb 12 14:56:07.665: INFO: Waiting for Pod statefulset-457/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 12 14:56:17.680: INFO: Waiting for StatefulSet statefulset-457/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 12 14:56:27.689: INFO: Deleting all statefulset in ns statefulset-457
Feb 12 14:56:27.693: INFO: Scaling statefulset ss2 to 0
Feb 12 14:57:07.745: INFO: Waiting for statefulset status.replicas updated to 0
Feb 12 14:57:07.750: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:57:07.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-457" for this suite.
Feb 12 14:57:15.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:57:16.020: INFO: namespace statefulset-457 deletion completed in 8.18469791s

• [SLOW TEST:159.515 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
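(The canary and phased behavior in the StatefulSet test above is driven by `spec.updateStrategy.rollingUpdate.partition`: only pods with an ordinal greater than or equal to the partition receive the new template revision. A minimal fragment of the relevant spec, with illustrative values matching the log:)

```yaml
# Fragment of a StatefulSet spec. With partition: 2 and 3 replicas,
# only ss2-2 is updated to the new image revision (the canary);
# ss2-0 and ss2-1 keep the old revision even if deleted and recreated.
spec:
  replicas: 3
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2
```

The phased rolling update then lowers the partition step by step (2 → 1 → 0), which is why the log shows ss2-2, then ss2-1, then ss2-0 picking up revision ss2-7c9b54fd4c in turn.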
SSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:57:16.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 12 14:57:24.790: INFO: Successfully updated pod "pod-update-activedeadlineseconds-02ffa07a-0143-497c-b25f-9c9da4dd69a5"
Feb 12 14:57:24.790: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-02ffa07a-0143-497c-b25f-9c9da4dd69a5" in namespace "pods-9125" to be "terminated due to deadline exceeded"
Feb 12 14:57:24.941: INFO: Pod "pod-update-activedeadlineseconds-02ffa07a-0143-497c-b25f-9c9da4dd69a5": Phase="Running", Reason="", readiness=true. Elapsed: 150.596127ms
Feb 12 14:57:26.949: INFO: Pod "pod-update-activedeadlineseconds-02ffa07a-0143-497c-b25f-9c9da4dd69a5": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.158918323s
Feb 12 14:57:26.949: INFO: Pod "pod-update-activedeadlineseconds-02ffa07a-0143-497c-b25f-9c9da4dd69a5" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:57:26.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9125" for this suite.
Feb 12 14:57:32.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:57:33.122: INFO: namespace pods-9125 deletion completed in 6.165106811s

• [SLOW TEST:17.101 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
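(`activeDeadlineSeconds` is one of the few Pod spec fields that is mutable on a running pod; the test above patches it to a short value and waits for the kubelet to fail the pod. A sketch of the patched field, value illustrative:)

```yaml
# Patch applied to the running pod's spec. Once the deadline elapses
# (measured from pod start), the kubelet terminates the pod and sets
# Phase=Failed with Reason=DeadlineExceeded, as seen in the log above.
spec:
  activeDeadlineSeconds: 5
```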
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:57:33.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 12 14:57:33.298: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a7e5f854-9767-41b4-877d-df1faa1529d9" in namespace "projected-7669" to be "success or failure"
Feb 12 14:57:33.311: INFO: Pod "downwardapi-volume-a7e5f854-9767-41b4-877d-df1faa1529d9": Phase="Pending", Reason="", readiness=false. Elapsed: 11.898055ms
Feb 12 14:57:35.335: INFO: Pod "downwardapi-volume-a7e5f854-9767-41b4-877d-df1faa1529d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0361543s
Feb 12 14:57:37.347: INFO: Pod "downwardapi-volume-a7e5f854-9767-41b4-877d-df1faa1529d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048622838s
Feb 12 14:57:39.368: INFO: Pod "downwardapi-volume-a7e5f854-9767-41b4-877d-df1faa1529d9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069327246s
Feb 12 14:57:41.376: INFO: Pod "downwardapi-volume-a7e5f854-9767-41b4-877d-df1faa1529d9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077047595s
Feb 12 14:57:43.382: INFO: Pod "downwardapi-volume-a7e5f854-9767-41b4-877d-df1faa1529d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.082923967s
STEP: Saw pod success
Feb 12 14:57:43.382: INFO: Pod "downwardapi-volume-a7e5f854-9767-41b4-877d-df1faa1529d9" satisfied condition "success or failure"
Feb 12 14:57:43.384: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a7e5f854-9767-41b4-877d-df1faa1529d9 container client-container: 
STEP: delete the pod
Feb 12 14:57:43.505: INFO: Waiting for pod downwardapi-volume-a7e5f854-9767-41b4-877d-df1faa1529d9 to disappear
Feb 12 14:57:43.512: INFO: Pod downwardapi-volume-a7e5f854-9767-41b4-877d-df1faa1529d9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:57:43.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7669" for this suite.
Feb 12 14:57:49.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:57:49.660: INFO: namespace projected-7669 deletion completed in 6.138136912s

• [SLOW TEST:16.537 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
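(This test, and the Downward API volume variant that follows, exercise the documented fallback: when a container sets no CPU limit, a `resourceFieldRef` on `limits.cpu` reports the node's allocatable CPU instead. A hedged sketch of such a pod; file and volume names are illustrative:)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # no resources.limits.cpu here, so the projected value falls back
    # to the node's allocatable CPU
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m     # expose the value in millicores
```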
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:57:49.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 12 14:57:49.808: INFO: Waiting up to 5m0s for pod "downwardapi-volume-08e22e75-303c-47eb-a0db-65b2528b64a5" in namespace "downward-api-7691" to be "success or failure"
Feb 12 14:57:49.856: INFO: Pod "downwardapi-volume-08e22e75-303c-47eb-a0db-65b2528b64a5": Phase="Pending", Reason="", readiness=false. Elapsed: 46.72605ms
Feb 12 14:57:51.987: INFO: Pod "downwardapi-volume-08e22e75-303c-47eb-a0db-65b2528b64a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178224626s
Feb 12 14:57:54.010: INFO: Pod "downwardapi-volume-08e22e75-303c-47eb-a0db-65b2528b64a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.201486885s
Feb 12 14:57:56.025: INFO: Pod "downwardapi-volume-08e22e75-303c-47eb-a0db-65b2528b64a5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.215900214s
Feb 12 14:57:58.031: INFO: Pod "downwardapi-volume-08e22e75-303c-47eb-a0db-65b2528b64a5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.222216128s
Feb 12 14:58:00.054: INFO: Pod "downwardapi-volume-08e22e75-303c-47eb-a0db-65b2528b64a5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.245463908s
Feb 12 14:58:02.081: INFO: Pod "downwardapi-volume-08e22e75-303c-47eb-a0db-65b2528b64a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.272180897s
STEP: Saw pod success
Feb 12 14:58:02.081: INFO: Pod "downwardapi-volume-08e22e75-303c-47eb-a0db-65b2528b64a5" satisfied condition "success or failure"
Feb 12 14:58:02.086: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-08e22e75-303c-47eb-a0db-65b2528b64a5 container client-container: 
STEP: delete the pod
Feb 12 14:58:02.324: INFO: Waiting for pod downwardapi-volume-08e22e75-303c-47eb-a0db-65b2528b64a5 to disappear
Feb 12 14:58:02.350: INFO: Pod downwardapi-volume-08e22e75-303c-47eb-a0db-65b2528b64a5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:58:02.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7691" for this suite.
Feb 12 14:58:08.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:58:08.554: INFO: namespace downward-api-7691 deletion completed in 6.183487824s

• [SLOW TEST:18.894 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:58:08.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 12 14:58:08.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2993'
Feb 12 14:58:10.686: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 12 14:58:10.686: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Feb 12 14:58:12.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-2993'
Feb 12 14:58:12.993: INFO: stderr: ""
Feb 12 14:58:12.993: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:58:12.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2993" for this suite.
Feb 12 14:58:19.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:58:19.208: INFO: namespace kubectl-2993 deletion completed in 6.140674107s

• [SLOW TEST:10.653 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
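(The stderr above notes that `kubectl run --generator=deployment/apps.v1` is deprecated; on later kubectl versions `kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine` is the replacement. The object the test creates is roughly the following Deployment; labels are assumptions based on the old `run` generator:)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment   # label assumed from the deprecated generator
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```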
S
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:58:19.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 12 14:58:19.282: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df504d9e-6aa9-4fcc-8f9c-4fb66ef52c31" in namespace "projected-7212" to be "success or failure"
Feb 12 14:58:19.293: INFO: Pod "downwardapi-volume-df504d9e-6aa9-4fcc-8f9c-4fb66ef52c31": Phase="Pending", Reason="", readiness=false. Elapsed: 10.423979ms
Feb 12 14:58:21.301: INFO: Pod "downwardapi-volume-df504d9e-6aa9-4fcc-8f9c-4fb66ef52c31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018385104s
Feb 12 14:58:23.311: INFO: Pod "downwardapi-volume-df504d9e-6aa9-4fcc-8f9c-4fb66ef52c31": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028871477s
Feb 12 14:58:25.320: INFO: Pod "downwardapi-volume-df504d9e-6aa9-4fcc-8f9c-4fb66ef52c31": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037135291s
Feb 12 14:58:27.330: INFO: Pod "downwardapi-volume-df504d9e-6aa9-4fcc-8f9c-4fb66ef52c31": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047093789s
Feb 12 14:58:29.339: INFO: Pod "downwardapi-volume-df504d9e-6aa9-4fcc-8f9c-4fb66ef52c31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.056776307s
STEP: Saw pod success
Feb 12 14:58:29.340: INFO: Pod "downwardapi-volume-df504d9e-6aa9-4fcc-8f9c-4fb66ef52c31" satisfied condition "success or failure"
Feb 12 14:58:29.348: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-df504d9e-6aa9-4fcc-8f9c-4fb66ef52c31 container client-container: 
STEP: delete the pod
Feb 12 14:58:29.657: INFO: Waiting for pod downwardapi-volume-df504d9e-6aa9-4fcc-8f9c-4fb66ef52c31 to disappear
Feb 12 14:58:29.666: INFO: Pod downwardapi-volume-df504d9e-6aa9-4fcc-8f9c-4fb66ef52c31 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:58:29.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7212" for this suite.
Feb 12 14:58:35.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:58:35.863: INFO: namespace projected-7212 deletion completed in 6.188253454s

• [SLOW TEST:16.655 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
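(`defaultMode` on a projected volume sets the permission bits for every file the volume materializes, unless an individual item overrides it; when unset it defaults to 0644, which is what the test above verifies. A fragment, with an explicit non-default value for illustration:)

```yaml
# Pod spec fragment: a projected downwardAPI volume with an explicit
# defaultMode. Omitting defaultMode yields 0644, the value this test checks.
volumes:
- name: podinfo
  projected:
    defaultMode: 0400      # applied to every projected file unless an item sets its own mode
    sources:
    - downwardAPI:
        items:
        - path: podname
          fieldRef:
            fieldPath: metadata.name
```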
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:58:35.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-6702
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6702 to expose endpoints map[]
Feb 12 14:58:36.019: INFO: Get endpoints failed (5.718502ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Feb 12 14:58:37.028: INFO: successfully validated that service endpoint-test2 in namespace services-6702 exposes endpoints map[] (1.014540982s elapsed)
STEP: Creating pod pod1 in namespace services-6702
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6702 to expose endpoints map[pod1:[80]]
Feb 12 14:58:41.184: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.140111149s elapsed, will retry)
Feb 12 14:58:45.242: INFO: successfully validated that service endpoint-test2 in namespace services-6702 exposes endpoints map[pod1:[80]] (8.198932226s elapsed)
STEP: Creating pod pod2 in namespace services-6702
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6702 to expose endpoints map[pod1:[80] pod2:[80]]
Feb 12 14:58:51.271: INFO: Unexpected endpoints: found map[39729215-43f2-470d-bd1e-d69d76ca0b80:[80]], expected map[pod1:[80] pod2:[80]] (6.01954476s elapsed, will retry)
Feb 12 14:58:54.617: INFO: successfully validated that service endpoint-test2 in namespace services-6702 exposes endpoints map[pod1:[80] pod2:[80]] (9.365663002s elapsed)
STEP: Deleting pod pod1 in namespace services-6702
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6702 to expose endpoints map[pod2:[80]]
Feb 12 14:58:55.747: INFO: successfully validated that service endpoint-test2 in namespace services-6702 exposes endpoints map[pod2:[80]] (1.122383107s elapsed)
STEP: Deleting pod pod2 in namespace services-6702
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6702 to expose endpoints map[]
Feb 12 14:58:57.019: INFO: successfully validated that service endpoint-test2 in namespace services-6702 exposes endpoints map[] (1.246857038s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:58:58.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6702" for this suite.
Feb 12 14:59:20.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:59:20.522: INFO: namespace services-6702 deletion completed in 22.180886596s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:44.659 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
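(The Services test above creates a selector-based service and then adds and removes labeled pods, asserting the Endpoints object tracks them. A sketch of the two objects involved; the label key is an assumption, not taken from this run:)

```yaml
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    name: endpoint-test2    # assumed label; endpoints are derived from pods matching this
  ports:
  - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    name: endpoint-test2    # matching this label makes pod1 appear as an endpoint on port 80
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    ports:
    - containerPort: 80
```

Creating pod1 and pod2 yields `map[pod1:[80] pod2:[80]]` in the log; deleting each pod removes its entry until the endpoints map is empty again.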
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:59:20.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-ttcb
STEP: Creating a pod to test atomic-volume-subpath
Feb 12 14:59:20.722: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-ttcb" in namespace "subpath-360" to be "success or failure"
Feb 12 14:59:20.749: INFO: Pod "pod-subpath-test-configmap-ttcb": Phase="Pending", Reason="", readiness=false. Elapsed: 26.230068ms
Feb 12 14:59:22.759: INFO: Pod "pod-subpath-test-configmap-ttcb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037042195s
Feb 12 14:59:24.768: INFO: Pod "pod-subpath-test-configmap-ttcb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046115686s
Feb 12 14:59:26.776: INFO: Pod "pod-subpath-test-configmap-ttcb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053432849s
Feb 12 14:59:28.789: INFO: Pod "pod-subpath-test-configmap-ttcb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066775641s
Feb 12 14:59:30.803: INFO: Pod "pod-subpath-test-configmap-ttcb": Phase="Running", Reason="", readiness=true. Elapsed: 10.080704978s
Feb 12 14:59:32.814: INFO: Pod "pod-subpath-test-configmap-ttcb": Phase="Running", Reason="", readiness=true. Elapsed: 12.09179408s
Feb 12 14:59:34.826: INFO: Pod "pod-subpath-test-configmap-ttcb": Phase="Running", Reason="", readiness=true. Elapsed: 14.10339279s
Feb 12 14:59:36.837: INFO: Pod "pod-subpath-test-configmap-ttcb": Phase="Running", Reason="", readiness=true. Elapsed: 16.114265911s
Feb 12 14:59:38.849: INFO: Pod "pod-subpath-test-configmap-ttcb": Phase="Running", Reason="", readiness=true. Elapsed: 18.126272408s
Feb 12 14:59:40.860: INFO: Pod "pod-subpath-test-configmap-ttcb": Phase="Running", Reason="", readiness=true. Elapsed: 20.1380606s
Feb 12 14:59:42.876: INFO: Pod "pod-subpath-test-configmap-ttcb": Phase="Running", Reason="", readiness=true. Elapsed: 22.153258638s
Feb 12 14:59:44.895: INFO: Pod "pod-subpath-test-configmap-ttcb": Phase="Running", Reason="", readiness=true. Elapsed: 24.172311535s
Feb 12 14:59:46.907: INFO: Pod "pod-subpath-test-configmap-ttcb": Phase="Running", Reason="", readiness=true. Elapsed: 26.184942253s
Feb 12 14:59:48.921: INFO: Pod "pod-subpath-test-configmap-ttcb": Phase="Running", Reason="", readiness=true. Elapsed: 28.198310454s
Feb 12 14:59:50.931: INFO: Pod "pod-subpath-test-configmap-ttcb": Phase="Running", Reason="", readiness=true. Elapsed: 30.208744439s
Feb 12 14:59:52.940: INFO: Pod "pod-subpath-test-configmap-ttcb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.21717451s
STEP: Saw pod success
Feb 12 14:59:52.940: INFO: Pod "pod-subpath-test-configmap-ttcb" satisfied condition "success or failure"
Feb 12 14:59:52.944: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-ttcb container test-container-subpath-configmap-ttcb: 
STEP: delete the pod
Feb 12 14:59:53.024: INFO: Waiting for pod pod-subpath-test-configmap-ttcb to disappear
Feb 12 14:59:53.034: INFO: Pod pod-subpath-test-configmap-ttcb no longer exists
STEP: Deleting pod pod-subpath-test-configmap-ttcb
Feb 12 14:59:53.034: INFO: Deleting pod "pod-subpath-test-configmap-ttcb" in namespace "subpath-360"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 14:59:53.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-360" for this suite.
Feb 12 14:59:59.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 14:59:59.213: INFO: namespace subpath-360 deletion completed in 6.133103116s

• [SLOW TEST:38.690 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
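The "Waiting up to 5m0s for pod ... to be 'success or failure'" lines above come from a poll loop: the framework rechecks the pod phase on a roughly 2s interval until it reaches a terminal phase or the timeout expires. A minimal, generic sketch of that pattern (not the actual framework code; names are illustrative):

```python
import time

def wait_for_condition(check, timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll check() every `interval` seconds until it returns a truthy
    value or `timeout` seconds elapse. Returns the truthy value or
    raises TimeoutError. clock/sleep are injectable for testing."""
    deadline = clock() + timeout
    while True:
        result = check()
        if result:
            return result
        if clock() >= deadline:
            raise TimeoutError("condition not met within %.0fs" % timeout)
        sleep(interval)

# Example: a pod that reports Pending twice, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
assert wait_for_condition(
    lambda: next(phases) == "Succeeded", timeout=10, sleep=lambda s: None
) is True
```

The log's per-iteration "Elapsed:" values (~2s apart) match this fixed-interval shape.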
SSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 14:59:59.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 12 14:59:59.346: INFO: Creating deployment "test-recreate-deployment"
Feb 12 14:59:59.353: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Feb 12 14:59:59.402: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Feb 12 15:00:01.416: INFO: Waiting deployment "test-recreate-deployment" to complete
Feb 12 15:00:01.420: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717116399, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717116399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717116399, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717116399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 15:00:03.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717116399, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717116399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717116399, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717116399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 15:00:05.440: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717116399, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717116399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717116399, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717116399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 15:00:07.431: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717116399, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717116399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717116399, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717116399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 15:00:09.428: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb 12 15:00:09.445: INFO: Updating deployment test-recreate-deployment
Feb 12 15:00:09.445: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 12 15:00:09.830: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-625,SelfLink:/apis/apps/v1/namespaces/deployment-625/deployments/test-recreate-deployment,UID:3425eba9-7636-4dba-9e48-345decc65b9f,ResourceVersion:24087904,Generation:2,CreationTimestamp:2020-02-12 14:59:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-12 15:00:09 +0000 UTC 2020-02-12 15:00:09 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-12 15:00:09 +0000 UTC 2020-02-12 14:59:59 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Feb 12 15:00:09.849: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-625,SelfLink:/apis/apps/v1/namespaces/deployment-625/replicasets/test-recreate-deployment-5c8c9cc69d,UID:1a504257-d66b-44f8-bc6d-9e88e1172cf6,ResourceVersion:24087900,Generation:1,CreationTimestamp:2020-02-12 15:00:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 3425eba9-7636-4dba-9e48-345decc65b9f 0xc00330cc77 0xc00330cc78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 12 15:00:09.849: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb 12 15:00:09.849: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-625,SelfLink:/apis/apps/v1/namespaces/deployment-625/replicasets/test-recreate-deployment-6df85df6b9,UID:19bd2a8c-e1d3-4b7f-ad26-875d96d19826,ResourceVersion:24087893,Generation:2,CreationTimestamp:2020-02-12 14:59:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 3425eba9-7636-4dba-9e48-345decc65b9f 0xc00330cd47 0xc00330cd48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 12 15:00:09.857: INFO: Pod "test-recreate-deployment-5c8c9cc69d-vwhss" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-vwhss,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-625,SelfLink:/api/v1/namespaces/deployment-625/pods/test-recreate-deployment-5c8c9cc69d-vwhss,UID:913f8be9-78c3-4610-89d9-4228e4d810b2,ResourceVersion:24087905,Generation:0,CreationTimestamp:2020-02-12 15:00:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 1a504257-d66b-44f8-bc6d-9e88e1172cf6 0xc00330d607 0xc00330d608}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2m4js {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2m4js,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2m4js true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00330d680} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00330d6a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 15:00:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 15:00:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 15:00:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 15:00:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-12 15:00:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 15:00:09.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-625" for this suite.
Feb 12 15:00:15.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 15:00:16.054: INFO: namespace deployment-625 deletion completed in 6.181245214s

• [SLOW TEST:16.841 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
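The Recreate strategy exercised above differs from RollingUpdate: the old ReplicaSet is scaled to zero before any pod from the new ReplicaSet is created, so old and new pods never run at the same time (which is what the test's watch verifies). A toy simulation of that ordering, with hypothetical helper names, not the controller code:

```python
def recreate_rollout(old_rs, new_rs, events):
    """Simulate a Recreate rollout: scale the old ReplicaSet down to 0
    first, then scale the new one up. `events` records the order of
    operations so the no-overlap guarantee can be checked."""
    while old_rs["replicas"] > 0:
        old_rs["replicas"] -= 1
        events.append(("scale-down", old_rs["name"], old_rs["replicas"]))
    while new_rs["replicas"] < new_rs["desired"]:
        new_rs["replicas"] += 1
        events.append(("scale-up", new_rs["name"], new_rs["replicas"]))

events = []
old = {"name": "test-recreate-deployment-6df85df6b9", "replicas": 1}
new = {"name": "test-recreate-deployment-5c8c9cc69d", "replicas": 0, "desired": 1}
recreate_rollout(old, new, events)
# Every scale-down precedes every scale-up, as in the log's ReplicaSet dumps.
assert events == [
    ("scale-down", "test-recreate-deployment-6df85df6b9", 0),
    ("scale-up", "test-recreate-deployment-5c8c9cc69d", 1),
]
```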
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 15:00:16.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 12 15:00:26.792: INFO: Successfully updated pod "annotationupdate942e3155-5fb5-4eac-ab26-8518b6031503"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 15:00:28.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5959" for this suite.
Feb 12 15:00:51.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 15:00:51.491: INFO: namespace downward-api-5959 deletion completed in 22.593378239s

• [SLOW TEST:35.436 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 15:00:51.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb 12 15:00:51.559: INFO: namespace kubectl-4003
Feb 12 15:00:51.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4003'
Feb 12 15:00:51.911: INFO: stderr: ""
Feb 12 15:00:51.911: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 12 15:00:52.923: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 15:00:52.923: INFO: Found 0 / 1
Feb 12 15:00:53.927: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 15:00:53.927: INFO: Found 0 / 1
Feb 12 15:00:54.920: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 15:00:54.920: INFO: Found 0 / 1
Feb 12 15:00:55.923: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 15:00:55.923: INFO: Found 0 / 1
Feb 12 15:00:56.977: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 15:00:56.977: INFO: Found 0 / 1
Feb 12 15:00:57.919: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 15:00:57.919: INFO: Found 0 / 1
Feb 12 15:00:58.920: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 15:00:58.920: INFO: Found 0 / 1
Feb 12 15:00:59.941: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 15:00:59.941: INFO: Found 0 / 1
Feb 12 15:01:00.919: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 15:01:00.919: INFO: Found 1 / 1
Feb 12 15:01:00.919: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 12 15:01:00.924: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 15:01:00.924: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 12 15:01:00.924: INFO: wait on redis-master startup in kubectl-4003 
Feb 12 15:01:00.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ckr2x redis-master --namespace=kubectl-4003'
Feb 12 15:01:01.148: INFO: stderr: ""
Feb 12 15:01:01.148: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 12 Feb 15:01:00.334 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 12 Feb 15:01:00.334 # Server started, Redis version 3.2.12\n1:M 12 Feb 15:01:00.334 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 12 Feb 15:01:00.334 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Feb 12 15:01:01.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-4003'
Feb 12 15:01:01.470: INFO: stderr: ""
Feb 12 15:01:01.470: INFO: stdout: "service/rm2 exposed\n"
Feb 12 15:01:01.476: INFO: Service rm2 in namespace kubectl-4003 found.
STEP: exposing service
Feb 12 15:01:03.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-4003'
Feb 12 15:01:03.772: INFO: stderr: ""
Feb 12 15:01:03.772: INFO: stdout: "service/rm3 exposed\n"
Feb 12 15:01:03.802: INFO: Service rm3 in namespace kubectl-4003 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 15:01:08.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4003" for this suite.
Feb 12 15:01:30.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 15:01:30.889: INFO: namespace kubectl-4003 deletion completed in 22.249104836s

• [SLOW TEST:39.398 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
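The `kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379` call above derives a Service from the workload: the Service selector is copied from the RC's pod labels, `--port` becomes the Service port, and `--target-port` the container port. A rough sketch of that mapping (assumed minimal structure, not kubectl's implementation):

```python
def expose(selector, name, port, target_port):
    """Build a minimal Service manifest the way `kubectl expose` does:
    reuse the workload's pod selector and map port -> targetPort."""
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": name},
        "spec": {
            "selector": dict(selector),
            "ports": [{"port": port, "targetPort": target_port,
                       "protocol": "TCP"}],
        },
    }

svc = expose({"app": "redis"}, name="rm2", port=1234, target_port=6379)
assert svc["spec"]["ports"][0]["port"] == 1234
assert svc["spec"]["ports"][0]["targetPort"] == 6379
```

The second step of the test (`expose service rm2 --name=rm3`) applies the same mapping with the existing Service's selector as input.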
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 15:01:30.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 12 15:01:30.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2392'
Feb 12 15:01:31.119: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 12 15:01:31.120: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Feb 12 15:01:31.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-2392'
Feb 12 15:01:31.393: INFO: stderr: ""
Feb 12 15:01:31.393: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 15:01:31.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2392" for this suite.
Feb 12 15:01:37.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 15:01:37.553: INFO: namespace kubectl-2392 deletion completed in 6.152521675s

• [SLOW TEST:6.663 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 15:01:37.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 12 15:01:37.676: INFO: Waiting up to 5m0s for pod "downwardapi-volume-269773a1-524a-461d-8fc9-17fcc3e38992" in namespace "downward-api-565" to be "success or failure"
Feb 12 15:01:37.688: INFO: Pod "downwardapi-volume-269773a1-524a-461d-8fc9-17fcc3e38992": Phase="Pending", Reason="", readiness=false. Elapsed: 11.344685ms
Feb 12 15:01:39.697: INFO: Pod "downwardapi-volume-269773a1-524a-461d-8fc9-17fcc3e38992": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020961698s
Feb 12 15:01:41.731: INFO: Pod "downwardapi-volume-269773a1-524a-461d-8fc9-17fcc3e38992": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054423496s
Feb 12 15:01:43.793: INFO: Pod "downwardapi-volume-269773a1-524a-461d-8fc9-17fcc3e38992": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116540463s
Feb 12 15:01:45.858: INFO: Pod "downwardapi-volume-269773a1-524a-461d-8fc9-17fcc3e38992": Phase="Pending", Reason="", readiness=false. Elapsed: 8.181311149s
Feb 12 15:01:47.877: INFO: Pod "downwardapi-volume-269773a1-524a-461d-8fc9-17fcc3e38992": Phase="Pending", Reason="", readiness=false. Elapsed: 10.200705628s
Feb 12 15:01:49.893: INFO: Pod "downwardapi-volume-269773a1-524a-461d-8fc9-17fcc3e38992": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.216650167s
STEP: Saw pod success
Feb 12 15:01:49.893: INFO: Pod "downwardapi-volume-269773a1-524a-461d-8fc9-17fcc3e38992" satisfied condition "success or failure"
Feb 12 15:01:49.900: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-269773a1-524a-461d-8fc9-17fcc3e38992 container client-container: 
STEP: delete the pod
Feb 12 15:01:50.169: INFO: Waiting for pod downwardapi-volume-269773a1-524a-461d-8fc9-17fcc3e38992 to disappear
Feb 12 15:01:50.175: INFO: Pod downwardapi-volume-269773a1-524a-461d-8fc9-17fcc3e38992 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 15:01:50.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-565" for this suite.
Feb 12 15:01:56.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 15:01:56.413: INFO: namespace downward-api-565 deletion completed in 6.182167233s

• [SLOW TEST:18.860 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 15:01:56.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-ea425334-f966-45d6-a57c-c2d43fd38ac5
STEP: Creating a pod to test consume configMaps
Feb 12 15:01:56.604: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-92648a6f-a299-42fa-8dc4-2d60602a159b" in namespace "projected-3549" to be "success or failure"
Feb 12 15:01:56.612: INFO: Pod "pod-projected-configmaps-92648a6f-a299-42fa-8dc4-2d60602a159b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.099238ms
Feb 12 15:01:58.637: INFO: Pod "pod-projected-configmaps-92648a6f-a299-42fa-8dc4-2d60602a159b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032399016s
Feb 12 15:02:00.643: INFO: Pod "pod-projected-configmaps-92648a6f-a299-42fa-8dc4-2d60602a159b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038886952s
Feb 12 15:02:02.660: INFO: Pod "pod-projected-configmaps-92648a6f-a299-42fa-8dc4-2d60602a159b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055682171s
Feb 12 15:02:04.687: INFO: Pod "pod-projected-configmaps-92648a6f-a299-42fa-8dc4-2d60602a159b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.082988368s
STEP: Saw pod success
Feb 12 15:02:04.688: INFO: Pod "pod-projected-configmaps-92648a6f-a299-42fa-8dc4-2d60602a159b" satisfied condition "success or failure"
Feb 12 15:02:04.693: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-92648a6f-a299-42fa-8dc4-2d60602a159b container projected-configmap-volume-test: 
STEP: delete the pod
Feb 12 15:02:04.732: INFO: Waiting for pod pod-projected-configmaps-92648a6f-a299-42fa-8dc4-2d60602a159b to disappear
Feb 12 15:02:04.792: INFO: Pod pod-projected-configmaps-92648a6f-a299-42fa-8dc4-2d60602a159b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 15:02:04.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3549" for this suite.
Feb 12 15:02:10.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 15:02:10.943: INFO: namespace projected-3549 deletion completed in 6.144199782s

• [SLOW TEST:14.529 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 15:02:10.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 12 15:02:11.031: INFO: Waiting up to 5m0s for pod "downwardapi-volume-963e91e7-8eae-4d3e-b90a-c9942bc58679" in namespace "downward-api-9522" to be "success or failure"
Feb 12 15:02:11.034: INFO: Pod "downwardapi-volume-963e91e7-8eae-4d3e-b90a-c9942bc58679": Phase="Pending", Reason="", readiness=false. Elapsed: 3.090107ms
Feb 12 15:02:13.053: INFO: Pod "downwardapi-volume-963e91e7-8eae-4d3e-b90a-c9942bc58679": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021505628s
Feb 12 15:02:15.065: INFO: Pod "downwardapi-volume-963e91e7-8eae-4d3e-b90a-c9942bc58679": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03369755s
Feb 12 15:02:17.075: INFO: Pod "downwardapi-volume-963e91e7-8eae-4d3e-b90a-c9942bc58679": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043368269s
Feb 12 15:02:19.087: INFO: Pod "downwardapi-volume-963e91e7-8eae-4d3e-b90a-c9942bc58679": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055852287s
Feb 12 15:02:21.094: INFO: Pod "downwardapi-volume-963e91e7-8eae-4d3e-b90a-c9942bc58679": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.063139271s
STEP: Saw pod success
Feb 12 15:02:21.095: INFO: Pod "downwardapi-volume-963e91e7-8eae-4d3e-b90a-c9942bc58679" satisfied condition "success or failure"
Feb 12 15:02:21.099: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-963e91e7-8eae-4d3e-b90a-c9942bc58679 container client-container: 
STEP: delete the pod
Feb 12 15:02:21.329: INFO: Waiting for pod downwardapi-volume-963e91e7-8eae-4d3e-b90a-c9942bc58679 to disappear
Feb 12 15:02:21.337: INFO: Pod downwardapi-volume-963e91e7-8eae-4d3e-b90a-c9942bc58679 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 15:02:21.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9522" for this suite.
Feb 12 15:02:27.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 15:02:27.526: INFO: namespace downward-api-9522 deletion completed in 6.178744585s

• [SLOW TEST:16.582 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 15:02:27.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 12 15:02:27.613: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb 12 15:02:32.626: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 12 15:02:36.656: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 12 15:02:36.708: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-2903,SelfLink:/apis/apps/v1/namespaces/deployment-2903/deployments/test-cleanup-deployment,UID:5bf7b108-f4ab-48de-bb26-cc32e644ef70,ResourceVersion:24088300,Generation:1,CreationTimestamp:2020-02-12 15:02:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Feb 12 15:02:36.721: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-2903,SelfLink:/apis/apps/v1/namespaces/deployment-2903/replicasets/test-cleanup-deployment-55bbcbc84c,UID:7e47b398-4172-4b71-bcb0-25a055a6a419,ResourceVersion:24088302,Generation:1,CreationTimestamp:2020-02-12 15:02:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 5bf7b108-f4ab-48de-bb26-cc32e644ef70 0xc0024907d7 0xc0024907d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 12 15:02:36.721: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Feb 12 15:02:36.721: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-2903,SelfLink:/apis/apps/v1/namespaces/deployment-2903/replicasets/test-cleanup-controller,UID:b3113f4f-e378-4c4c-bc96-0ac3355b48a1,ResourceVersion:24088301,Generation:1,CreationTimestamp:2020-02-12 15:02:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 5bf7b108-f4ab-48de-bb26-cc32e644ef70 0xc002490707 0xc002490708}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 12 15:02:36.781: INFO: Pod "test-cleanup-controller-gf6zl" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-gf6zl,GenerateName:test-cleanup-controller-,Namespace:deployment-2903,SelfLink:/api/v1/namespaces/deployment-2903/pods/test-cleanup-controller-gf6zl,UID:5576b156-dbc9-43b4-84a8-0bb889b852db,ResourceVersion:24088297,Generation:0,CreationTimestamp:2020-02-12 15:02:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller b3113f4f-e378-4c4c-bc96-0ac3355b48a1 0xc002e74d77 0xc002e74d78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-x5v27 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x5v27,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-x5v27 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002e74f30} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc002e74f50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 15:02:27 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 15:02:35 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 15:02:35 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 15:02:27 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-12 15:02:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-12 15:02:34 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b33e6266615b770e83fd27400633a7dd392e4345291c1ba80a556ce69773f102}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 15:02:36.782: INFO: Pod "test-cleanup-deployment-55bbcbc84c-jq5mp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-jq5mp,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-2903,SelfLink:/api/v1/namespaces/deployment-2903/pods/test-cleanup-deployment-55bbcbc84c-jq5mp,UID:976e8836-02dc-4283-816e-4552f26af538,ResourceVersion:24088308,Generation:0,CreationTimestamp:2020-02-12 15:02:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 7e47b398-4172-4b71-bcb0-25a055a6a419 0xc002e75117 0xc002e75118}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-x5v27 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x5v27,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-x5v27 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002e75190} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002e751b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 15:02:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 15:02:36.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2903" for this suite.
Feb 12 15:02:45.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 15:02:45.304: INFO: namespace deployment-2903 deletion completed in 8.372792047s

• [SLOW TEST:17.778 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 15:02:45.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-ec27e593-e50d-4a32-b62a-0ac2159dc5ed in namespace container-probe-8837
Feb 12 15:02:57.508: INFO: Started pod busybox-ec27e593-e50d-4a32-b62a-0ac2159dc5ed in namespace container-probe-8837
STEP: checking the pod's current state and verifying that restartCount is present
Feb 12 15:02:57.514: INFO: Initial restart count of pod busybox-ec27e593-e50d-4a32-b62a-0ac2159dc5ed is 0
Feb 12 15:03:51.937: INFO: Restart count of pod container-probe-8837/busybox-ec27e593-e50d-4a32-b62a-0ac2159dc5ed is now 1 (54.42319838s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 15:03:51.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8837" for this suite.
Feb 12 15:03:58.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 15:03:58.312: INFO: namespace container-probe-8837 deletion completed in 6.232717123s

• [SLOW TEST:73.006 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 15:03:58.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 12 15:03:58.563: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ada7fbdc-39ec-49d1-a1f5-2c08ec8102c3" in namespace "projected-5243" to be "success or failure"
Feb 12 15:03:58.588: INFO: Pod "downwardapi-volume-ada7fbdc-39ec-49d1-a1f5-2c08ec8102c3": Phase="Pending", Reason="", readiness=false. Elapsed: 24.183717ms
Feb 12 15:04:00.599: INFO: Pod "downwardapi-volume-ada7fbdc-39ec-49d1-a1f5-2c08ec8102c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035628735s
Feb 12 15:04:03.049: INFO: Pod "downwardapi-volume-ada7fbdc-39ec-49d1-a1f5-2c08ec8102c3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.48590178s
Feb 12 15:04:05.057: INFO: Pod "downwardapi-volume-ada7fbdc-39ec-49d1-a1f5-2c08ec8102c3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.493593452s
Feb 12 15:04:07.066: INFO: Pod "downwardapi-volume-ada7fbdc-39ec-49d1-a1f5-2c08ec8102c3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.502691996s
Feb 12 15:04:09.117: INFO: Pod "downwardapi-volume-ada7fbdc-39ec-49d1-a1f5-2c08ec8102c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.553431368s
STEP: Saw pod success
Feb 12 15:04:09.117: INFO: Pod "downwardapi-volume-ada7fbdc-39ec-49d1-a1f5-2c08ec8102c3" satisfied condition "success or failure"
Feb 12 15:04:09.121: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ada7fbdc-39ec-49d1-a1f5-2c08ec8102c3 container client-container: 
STEP: delete the pod
Feb 12 15:04:09.282: INFO: Waiting for pod downwardapi-volume-ada7fbdc-39ec-49d1-a1f5-2c08ec8102c3 to disappear
Feb 12 15:04:09.287: INFO: Pod downwardapi-volume-ada7fbdc-39ec-49d1-a1f5-2c08ec8102c3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 15:04:09.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5243" for this suite.
Feb 12 15:04:15.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 15:04:15.469: INFO: namespace projected-5243 deletion completed in 6.175280297s

• [SLOW TEST:17.155 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
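
The pod created by this test exposes its own name through a projected downward API volume and then cats the file. A minimal sketch of such a manifest (names and image are illustrative, not the exact spec the e2e framework generates):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative; the suite uses a generated UUID name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumed image; the suite uses its own test image
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```

The test then treats pod phase "Succeeded" as the success condition and reads the container logs to verify the printed pod name.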
------------------------------
SSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 15:04:15.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Feb 12 15:04:25.665: INFO: Pod pod-hostip-737b6a59-bcbe-4586-be83-67a069801b25 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 15:04:25.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5004" for this suite.
Feb 12 15:04:47.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 15:04:47.902: INFO: namespace pods-5004 deletion completed in 22.23103955s

• [SLOW TEST:32.433 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
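
This test reads `pod.Status.HostIP` directly from the API server (the `has hostIP: 10.96.3.65` line above). The same value can also be surfaced inside a container via a downward API field reference; a hedged sketch, with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-hostip-example   # illustrative name
spec:
  containers:
  - name: main
    image: busybox           # assumed image
    command: ["sh", "-c", "echo $HOST_IP && sleep 3600"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # populated once the pod is scheduled
```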
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 15:04:47.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Feb 12 15:04:48.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb 12 15:04:48.177: INFO: stderr: ""
Feb 12 15:04:48.177: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 15:04:48.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2188" for this suite.
Feb 12 15:04:54.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 15:04:54.342: INFO: namespace kubectl-2188 deletion completed in 6.15883151s

• [SLOW TEST:6.439 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 15:04:54.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 12 15:07:57.718: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 12 15:07:57.799: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 12 15:07:59.799: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 12 15:07:59.806: INFO: Pod pod-with-poststart-exec-hook still exists
[... the same two-line poll ("Waiting for pod pod-with-poststart-exec-hook to disappear" / "Pod pod-with-poststart-exec-hook still exists") repeats every 2 seconds from 15:08:01 through 15:09:43 ...]
Feb 12 15:09:45.799: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 12 15:09:45.817: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 12 15:09:47.799: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 12 15:09:47.810: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 15:09:47.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-490" for this suite.
Feb 12 15:10:09.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 15:10:09.943: INFO: namespace container-lifecycle-hook-490 deletion completed in 22.125780515s

• [SLOW TEST:315.600 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
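
The pod under test declares a `postStart` exec hook; in the real suite the hook contacts a separate HTTPGet handler pod (created in the BeforeEach step above), which is how the test verifies the hook fired. A simplified sketch with an illustrative hook command:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: main
    image: busybox           # assumed image
    command: ["sleep", "3600"]
    lifecycle:
      postStart:
        exec:
          # illustrative; the suite's hook curls the handler pod instead
          command: ["sh", "-c", "echo poststart ran > /tmp/poststart"]
```

Note the long deletion wait in the log above: the kubelet must finish hook execution and container teardown before the pod object disappears.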
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 15:10:09.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 15:10:16.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9043" for this suite.
Feb 12 15:10:22.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 15:10:22.503: INFO: namespace namespaces-9043 deletion completed in 6.187085233s
STEP: Destroying namespace "nsdeletetest-9259" for this suite.
Feb 12 15:10:22.506: INFO: Namespace nsdeletetest-9259 was already deleted
STEP: Destroying namespace "nsdeletetest-2730" for this suite.
Feb 12 15:10:28.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 15:10:28.740: INFO: namespace nsdeletetest-2730 deletion completed in 6.233732779s

• [SLOW TEST:18.798 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 15:10:28.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-2635/configmap-test-3a6172c8-df8c-4f09-8074-55af8840948f
STEP: Creating a pod to test consume configMaps
Feb 12 15:10:28.897: INFO: Waiting up to 5m0s for pod "pod-configmaps-77a664b5-97a6-42d9-8541-2c0da849e164" in namespace "configmap-2635" to be "success or failure"
Feb 12 15:10:28.906: INFO: Pod "pod-configmaps-77a664b5-97a6-42d9-8541-2c0da849e164": Phase="Pending", Reason="", readiness=false. Elapsed: 8.265887ms
Feb 12 15:10:30.915: INFO: Pod "pod-configmaps-77a664b5-97a6-42d9-8541-2c0da849e164": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016671875s
Feb 12 15:10:32.925: INFO: Pod "pod-configmaps-77a664b5-97a6-42d9-8541-2c0da849e164": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027215082s
Feb 12 15:10:34.932: INFO: Pod "pod-configmaps-77a664b5-97a6-42d9-8541-2c0da849e164": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034161362s
Feb 12 15:10:36.937: INFO: Pod "pod-configmaps-77a664b5-97a6-42d9-8541-2c0da849e164": Phase="Pending", Reason="", readiness=false. Elapsed: 8.039557713s
Feb 12 15:10:38.963: INFO: Pod "pod-configmaps-77a664b5-97a6-42d9-8541-2c0da849e164": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065303973s
STEP: Saw pod success
Feb 12 15:10:38.963: INFO: Pod "pod-configmaps-77a664b5-97a6-42d9-8541-2c0da849e164" satisfied condition "success or failure"
Feb 12 15:10:38.966: INFO: Trying to get logs from node iruya-node pod pod-configmaps-77a664b5-97a6-42d9-8541-2c0da849e164 container env-test: 
STEP: delete the pod
Feb 12 15:10:39.013: INFO: Waiting for pod pod-configmaps-77a664b5-97a6-42d9-8541-2c0da849e164 to disappear
Feb 12 15:10:39.039: INFO: Pod pod-configmaps-77a664b5-97a6-42d9-8541-2c0da849e164 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 15:10:39.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2635" for this suite.
Feb 12 15:10:45.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 15:10:45.253: INFO: namespace configmap-2635 deletion completed in 6.209392072s

• [SLOW TEST:16.513 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
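
Here the ConfigMap value is consumed as an environment variable rather than a volume. A minimal sketch of the two objects involved (names are illustrative; the suite generates UUID-suffixed names as seen in the log):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-example
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox           # assumed image
    command: ["sh", "-c", "env"]   # logs are checked for the injected variable
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test-example
          key: data-1
```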
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 15:10:45.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 12 15:10:45.343: INFO: Waiting up to 5m0s for pod "pod-cfc3f0b5-6cdc-46a6-b69d-507a6fcaa8a7" in namespace "emptydir-2991" to be "success or failure"
Feb 12 15:10:45.348: INFO: Pod "pod-cfc3f0b5-6cdc-46a6-b69d-507a6fcaa8a7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.341484ms
Feb 12 15:10:47.361: INFO: Pod "pod-cfc3f0b5-6cdc-46a6-b69d-507a6fcaa8a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017561914s
Feb 12 15:10:49.376: INFO: Pod "pod-cfc3f0b5-6cdc-46a6-b69d-507a6fcaa8a7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032267336s
Feb 12 15:10:51.387: INFO: Pod "pod-cfc3f0b5-6cdc-46a6-b69d-507a6fcaa8a7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043709556s
Feb 12 15:10:53.396: INFO: Pod "pod-cfc3f0b5-6cdc-46a6-b69d-507a6fcaa8a7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052064545s
Feb 12 15:10:55.404: INFO: Pod "pod-cfc3f0b5-6cdc-46a6-b69d-507a6fcaa8a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.060403383s
STEP: Saw pod success
Feb 12 15:10:55.404: INFO: Pod "pod-cfc3f0b5-6cdc-46a6-b69d-507a6fcaa8a7" satisfied condition "success or failure"
Feb 12 15:10:55.409: INFO: Trying to get logs from node iruya-node pod pod-cfc3f0b5-6cdc-46a6-b69d-507a6fcaa8a7 container test-container: 
STEP: delete the pod
Feb 12 15:10:55.496: INFO: Waiting for pod pod-cfc3f0b5-6cdc-46a6-b69d-507a6fcaa8a7 to disappear
Feb 12 15:10:55.501: INFO: Pod pod-cfc3f0b5-6cdc-46a6-b69d-507a6fcaa8a7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 15:10:55.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2991" for this suite.
Feb 12 15:11:01.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 15:11:01.726: INFO: namespace emptydir-2991 deletion completed in 6.218668918s

• [SLOW TEST:16.473 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
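
"(root,0777,default)" in the test name means: run as root, expect 0777 permissions on the mount, use the default emptyDir medium (node disk rather than tmpfs). A hedged sketch of the pod shape:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox             # assumed image
    # illustrative check; the suite's test image prints the mount's mode and content
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}               # default medium; omit "medium: Memory"
```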
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 15:11:01.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 12 15:11:01.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 15:11:12.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3845" for this suite.
Feb 12 15:11:58.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 15:11:58.212: INFO: namespace pods-3845 deletion completed in 46.122889307s

• [SLOW TEST:56.484 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 15:11:58.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-6586
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-6586
STEP: Deleting pre-stop pod
Feb 12 15:12:25.442: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 15:12:25.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-6586" for this suite.
Feb 12 15:13:09.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 15:13:09.558: INFO: namespace prestop-6586 deletion completed in 44.086930947s

• [SLOW TEST:71.346 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
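
The tester pod's `preStop` hook reports back to the server pod before termination; the `"Received": {"prestop": 1}` JSON above is the server confirming the hook ran. A simplified fragment of how such a hook is declared (the command is illustrative, not the suite's exact handler URL):

```yaml
spec:
  containers:
  - name: tester
    image: busybox             # assumed image
    command: ["sleep", "3600"]
    lifecycle:
      preStop:
        exec:
          # illustrative; the real hook POSTs to the server pod created above
          command: ["sh", "-c", "wget -q -O- http://server:8080/write?prestop=1"]
```

When the pod is deleted, the kubelet runs the hook and then waits out the termination grace period, which accounts for part of the 71-second test duration.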
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 15:13:09.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 12 15:13:33.720: INFO: Container started at 2020-02-12 15:13:17 +0000 UTC, pod became ready at 2020-02-12 15:13:32 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 15:13:33.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7615" for this suite.
Feb 12 15:13:57.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 15:13:57.908: INFO: namespace container-probe-7615 deletion completed in 24.18047113s

• [SLOW TEST:48.350 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
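The readiness-probe test above asserts that the pod does not become Ready before the probe's initial delay: the log shows the container starting at 15:13:17 and the pod becoming ready at 15:13:32, a gap consistent with an initial delay on that order. A sketch of such a pod (names and exact values illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo        # illustrative name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    readinessProbe:
      exec:
        command: ["true"]     # succeeds whenever it is run
      initialDelaySeconds: 15 # pod must not report Ready before this delay
      periodSeconds: 5
```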
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 15:13:57.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 12 15:13:58.045: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 12 15:13:58.051: INFO: Waiting for terminating namespaces to be deleted...
Feb 12 15:13:58.053: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Feb 12 15:13:58.063: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb 12 15:13:58.063: INFO: 	Container weave ready: true, restart count 0
Feb 12 15:13:58.063: INFO: 	Container weave-npc ready: true, restart count 0
Feb 12 15:13:58.063: INFO: kube-bench-j7kcs from default started at 2020-02-11 06:42:30 +0000 UTC (1 container statuses recorded)
Feb 12 15:13:58.063: INFO: 	Container kube-bench ready: false, restart count 0
Feb 12 15:13:58.063: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb 12 15:13:58.063: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 12 15:13:58.063: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Feb 12 15:13:58.072: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb 12 15:13:58.072: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb 12 15:13:58.072: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb 12 15:13:58.072: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb 12 15:13:58.072: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 12 15:13:58.072: INFO: 	Container coredns ready: true, restart count 0
Feb 12 15:13:58.072: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 12 15:13:58.072: INFO: 	Container coredns ready: true, restart count 0
Feb 12 15:13:58.072: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb 12 15:13:58.072: INFO: 	Container etcd ready: true, restart count 0
Feb 12 15:13:58.072: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb 12 15:13:58.072: INFO: 	Container weave ready: true, restart count 0
Feb 12 15:13:58.072: INFO: 	Container weave-npc ready: true, restart count 0
Feb 12 15:13:58.072: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb 12 15:13:58.072: INFO: 	Container kube-controller-manager ready: true, restart count 21
Feb 12 15:13:58.072: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb 12 15:13:58.072: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Feb 12 15:13:58.142: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb 12 15:13:58.142: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb 12 15:13:58.142: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb 12 15:13:58.142: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Feb 12 15:13:58.142: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Feb 12 15:13:58.142: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb 12 15:13:58.142: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Feb 12 15:13:58.142: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb 12 15:13:58.142: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Feb 12 15:13:58.142: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0a1cdc68-4db0-4920-960e-0409aa3b4813.15f2b0759569c6da], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1374/filler-pod-0a1cdc68-4db0-4920-960e-0409aa3b4813 to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0a1cdc68-4db0-4920-960e-0409aa3b4813.15f2b076c9a388ce], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0a1cdc68-4db0-4920-960e-0409aa3b4813.15f2b077e0e8f4ae], Reason = [Created], Message = [Created container filler-pod-0a1cdc68-4db0-4920-960e-0409aa3b4813]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0a1cdc68-4db0-4920-960e-0409aa3b4813.15f2b07805611bc9], Reason = [Started], Message = [Started container filler-pod-0a1cdc68-4db0-4920-960e-0409aa3b4813]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-953bff0c-7328-45e6-90cd-31d85ed05fd2.15f2b07594604129], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1374/filler-pod-953bff0c-7328-45e6-90cd-31d85ed05fd2 to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-953bff0c-7328-45e6-90cd-31d85ed05fd2.15f2b076fdaddc90], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-953bff0c-7328-45e6-90cd-31d85ed05fd2.15f2b07819a07844], Reason = [Created], Message = [Created container filler-pod-953bff0c-7328-45e6-90cd-31d85ed05fd2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-953bff0c-7328-45e6-90cd-31d85ed05fd2.15f2b07841cc07b0], Reason = [Started], Message = [Started container filler-pod-953bff0c-7328-45e6-90cd-31d85ed05fd2]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f2b078d9af0c64], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 15:14:13.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1374" for this suite.
Feb 12 15:14:19.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 15:14:19.556: INFO: namespace sched-pred-1374 deletion completed in 6.192986954s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:21.647 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
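The scheduling test above fills both nodes with pause-image "filler" pods sized to consume the remaining allocatable CPU, then creates one more pod whose request cannot fit anywhere, producing the `FailedScheduling` event (`0/2 nodes are available: 2 Insufficient cpu.`). The extra pod is roughly equivalent to this manifest; the request amount is illustrative, since the test computes it from node capacity:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod        # matches the event name in the log
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "1"              # illustrative; anything exceeding free capacity fails to schedule
```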
SS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 15:14:19.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-7b370bad-9e64-4cf3-bee1-ae9df8195b58
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-7b370bad-9e64-4cf3-bee1-ae9df8195b58
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 15:14:37.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1981" for this suite.
Feb 12 15:14:59.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 15:14:59.360: INFO: namespace projected-1981 deletion completed in 22.278700221s

• [SLOW TEST:39.804 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
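The projected-configMap update test above mounts a ConfigMap through a `projected` volume, mutates the ConfigMap, and waits for the change to appear in the mounted files. A minimal sketch of such a pod (names illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo     # illustrative name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: my-config     # illustrative; edits to this ConfigMap propagate into /etc/cfg
```

Unlike environment variables, volume-mounted ConfigMap data is refreshed by the kubelet, which is the behavior being verified.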
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 15:14:59.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 12 15:14:59.447: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 15:15:15.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2416" for this suite.
Feb 12 15:15:21.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 15:15:21.890: INFO: namespace init-container-2416 deletion completed in 6.255091105s

• [SLOW TEST:22.529 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
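The init-container test above (`PodSpec: initContainers in spec.initContainers`) verifies that init containers run to completion, in order, before the app container starts on a `RestartNever` pod. A minimal equivalent (all names and commands illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo             # illustrative name
spec:
  restartPolicy: Never
  initContainers:             # each must exit 0 before the next starts
  - name: init-1
    image: busybox
    command: ["true"]
  - name: init-2
    image: busybox
    command: ["true"]
  containers:
  - name: app
    image: busybox
    command: ["true"]
```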
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 15:15:21.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-2c9526e4-b5af-44cd-a1e8-8e4d5dc77e2f
STEP: Creating a pod to test consume configMaps
Feb 12 15:15:22.015: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-40f2a3bd-23f8-49ba-bcf7-f46f24bb7b75" in namespace "projected-3975" to be "success or failure"
Feb 12 15:15:22.020: INFO: Pod "pod-projected-configmaps-40f2a3bd-23f8-49ba-bcf7-f46f24bb7b75": Phase="Pending", Reason="", readiness=false. Elapsed: 4.344567ms
Feb 12 15:15:24.035: INFO: Pod "pod-projected-configmaps-40f2a3bd-23f8-49ba-bcf7-f46f24bb7b75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019398285s
Feb 12 15:15:26.042: INFO: Pod "pod-projected-configmaps-40f2a3bd-23f8-49ba-bcf7-f46f24bb7b75": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025966861s
Feb 12 15:15:28.049: INFO: Pod "pod-projected-configmaps-40f2a3bd-23f8-49ba-bcf7-f46f24bb7b75": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033632731s
Feb 12 15:15:30.703: INFO: Pod "pod-projected-configmaps-40f2a3bd-23f8-49ba-bcf7-f46f24bb7b75": Phase="Pending", Reason="", readiness=false. Elapsed: 8.687239804s
Feb 12 15:15:32.710: INFO: Pod "pod-projected-configmaps-40f2a3bd-23f8-49ba-bcf7-f46f24bb7b75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.694122433s
STEP: Saw pod success
Feb 12 15:15:32.710: INFO: Pod "pod-projected-configmaps-40f2a3bd-23f8-49ba-bcf7-f46f24bb7b75" satisfied condition "success or failure"
Feb 12 15:15:32.713: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-40f2a3bd-23f8-49ba-bcf7-f46f24bb7b75 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 12 15:15:32.787: INFO: Waiting for pod pod-projected-configmaps-40f2a3bd-23f8-49ba-bcf7-f46f24bb7b75 to disappear
Feb 12 15:15:32.792: INFO: Pod pod-projected-configmaps-40f2a3bd-23f8-49ba-bcf7-f46f24bb7b75 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 15:15:32.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3975" for this suite.
Feb 12 15:15:38.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 15:15:38.968: INFO: namespace projected-3975 deletion completed in 6.171506877s

• [SLOW TEST:17.078 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
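The "with mappings" variant above differs from a plain ConfigMap mount in that it uses `items` to remap keys to custom file paths inside the volume. A hypothetical volume stanza showing the mapping (key and path names illustrative):

```yaml
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: my-config     # illustrative name
          items:              # the "mappings": project key data-1 as a custom path
          - key: data-1
            path: path/to/data-2
```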
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 15:15:38.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 12 15:15:39.083: INFO: Waiting up to 5m0s for pod "pod-89ae975a-9ac5-48c3-a6a0-936936aba6a7" in namespace "emptydir-8907" to be "success or failure"
Feb 12 15:15:39.104: INFO: Pod "pod-89ae975a-9ac5-48c3-a6a0-936936aba6a7": Phase="Pending", Reason="", readiness=false. Elapsed: 20.498507ms
Feb 12 15:15:41.111: INFO: Pod "pod-89ae975a-9ac5-48c3-a6a0-936936aba6a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027085513s
Feb 12 15:15:43.124: INFO: Pod "pod-89ae975a-9ac5-48c3-a6a0-936936aba6a7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040500654s
Feb 12 15:15:45.134: INFO: Pod "pod-89ae975a-9ac5-48c3-a6a0-936936aba6a7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050101387s
Feb 12 15:15:47.164: INFO: Pod "pod-89ae975a-9ac5-48c3-a6a0-936936aba6a7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.080420913s
Feb 12 15:15:49.175: INFO: Pod "pod-89ae975a-9ac5-48c3-a6a0-936936aba6a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.091487912s
STEP: Saw pod success
Feb 12 15:15:49.175: INFO: Pod "pod-89ae975a-9ac5-48c3-a6a0-936936aba6a7" satisfied condition "success or failure"
Feb 12 15:15:49.179: INFO: Trying to get logs from node iruya-node pod pod-89ae975a-9ac5-48c3-a6a0-936936aba6a7 container test-container: 
STEP: delete the pod
Feb 12 15:15:49.235: INFO: Waiting for pod pod-89ae975a-9ac5-48c3-a6a0-936936aba6a7 to disappear
Feb 12 15:15:49.244: INFO: Pod pod-89ae975a-9ac5-48c3-a6a0-936936aba6a7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 15:15:49.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8907" for this suite.
Feb 12 15:15:55.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 15:15:55.460: INFO: namespace emptydir-8907 deletion completed in 6.209608509s

• [SLOW TEST:16.491 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
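The EmptyDir test above writes a 0644 file as a non-root user into a tmpfs-backed `emptyDir` (`medium: Memory`, hence `[LinuxOnly]`) and verifies the resulting mode and ownership. A sketch of such a pod (UID, names, and commands illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo         # illustrative name
spec:
  securityContext:
    runAsUser: 1001           # non-root, per the test name
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "echo hi > /mnt/f && chmod 0644 /mnt/f && ls -l /mnt/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory          # tmpfs-backed
```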
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 15:15:55.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 12 15:15:55.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6507'
Feb 12 15:15:57.673: INFO: stderr: ""
Feb 12 15:15:57.673: INFO: stdout: "replicationcontroller/redis-master created\n"
Feb 12 15:15:57.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6507'
Feb 12 15:15:58.469: INFO: stderr: ""
Feb 12 15:15:58.469: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 12 15:15:59.476: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 15:15:59.476: INFO: Found 0 / 1
Feb 12 15:16:00.484: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 15:16:00.484: INFO: Found 0 / 1
Feb 12 15:16:01.730: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 15:16:01.730: INFO: Found 0 / 1
Feb 12 15:16:02.482: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 15:16:02.482: INFO: Found 0 / 1
Feb 12 15:16:03.482: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 15:16:03.483: INFO: Found 0 / 1
Feb 12 15:16:04.494: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 15:16:04.494: INFO: Found 0 / 1
Feb 12 15:16:05.477: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 15:16:05.477: INFO: Found 1 / 1
Feb 12 15:16:05.477: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 12 15:16:05.481: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 15:16:05.481: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 12 15:16:05.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-tr8nv --namespace=kubectl-6507'
Feb 12 15:16:05.623: INFO: stderr: ""
Feb 12 15:16:05.623: INFO: stdout: "Name:           redis-master-tr8nv\nNamespace:      kubectl-6507\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Wed, 12 Feb 2020 15:15:57 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    \nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://6ad638bc84af67661ab70c8da024a50384851a58b16404f9a2bf0561686b404f\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Wed, 12 Feb 2020 15:16:04 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-dlfkn (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-dlfkn:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-dlfkn\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  8s    default-scheduler    Successfully assigned kubectl-6507/redis-master-tr8nv to iruya-node\n  Normal  Pulled     3s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    1s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    
1s    kubelet, iruya-node  Started container redis-master\n"
Feb 12 15:16:05.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-6507'
Feb 12 15:16:05.841: INFO: stderr: ""
Feb 12 15:16:05.841: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-6507\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  8s    replication-controller  Created pod: redis-master-tr8nv\n"
Feb 12 15:16:05.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-6507'
Feb 12 15:16:06.055: INFO: stderr: ""
Feb 12 15:16:06.055: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-6507\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.96.15.85\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            \n"
Feb 12 15:16:06.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Feb 12 15:16:06.191: INFO: stderr: ""
Feb 12 15:16:06.191: INFO: stdout: "Name:               iruya-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             \nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Wed, 12 Feb 2020 15:15:16 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Wed, 12 Feb 2020 15:15:16 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Wed, 12 Feb 2020 15:15:16 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Wed, 12 Feb 2020 15:15:16 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. 
AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         192d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         123d\n  kubectl-6507               redis-master-tr8nv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              \n"
Feb 12 15:16:06.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-6507'
Feb 12 15:16:06.288: INFO: stderr: ""
Feb 12 15:16:06.288: INFO: stdout: "Name:         kubectl-6507\nLabels:       e2e-framework=kubectl\n              e2e-run=f0b01e30-7752-4010-bc41-0bee554ca11a\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 15:16:06.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6507" for this suite.
Feb 12 15:16:28.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 15:16:28.460: INFO: namespace kubectl-6507 deletion completed in 22.168007679s

• [SLOW TEST:33.000 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 15:16:28.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-85310a8e-6aef-403e-a449-a2ce983394cd
STEP: Creating a pod to test consume secrets
Feb 12 15:16:28.582: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e2bb1d4b-1959-4ace-a1b1-567a171aab39" in namespace "projected-1508" to be "success or failure"
Feb 12 15:16:28.598: INFO: Pod "pod-projected-secrets-e2bb1d4b-1959-4ace-a1b1-567a171aab39": Phase="Pending", Reason="", readiness=false. Elapsed: 15.655104ms
Feb 12 15:16:30.609: INFO: Pod "pod-projected-secrets-e2bb1d4b-1959-4ace-a1b1-567a171aab39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026826459s
Feb 12 15:16:33.279: INFO: Pod "pod-projected-secrets-e2bb1d4b-1959-4ace-a1b1-567a171aab39": Phase="Pending", Reason="", readiness=false. Elapsed: 4.696231706s
Feb 12 15:16:35.289: INFO: Pod "pod-projected-secrets-e2bb1d4b-1959-4ace-a1b1-567a171aab39": Phase="Pending", Reason="", readiness=false. Elapsed: 6.70603386s
Feb 12 15:16:37.320: INFO: Pod "pod-projected-secrets-e2bb1d4b-1959-4ace-a1b1-567a171aab39": Phase="Pending", Reason="", readiness=false. Elapsed: 8.73734146s
Feb 12 15:16:39.326: INFO: Pod "pod-projected-secrets-e2bb1d4b-1959-4ace-a1b1-567a171aab39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.743616566s
STEP: Saw pod success
Feb 12 15:16:39.326: INFO: Pod "pod-projected-secrets-e2bb1d4b-1959-4ace-a1b1-567a171aab39" satisfied condition "success or failure"
Feb 12 15:16:39.330: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-e2bb1d4b-1959-4ace-a1b1-567a171aab39 container projected-secret-volume-test: 
STEP: delete the pod
Feb 12 15:16:39.516: INFO: Waiting for pod pod-projected-secrets-e2bb1d4b-1959-4ace-a1b1-567a171aab39 to disappear
Feb 12 15:16:39.523: INFO: Pod pod-projected-secrets-e2bb1d4b-1959-4ace-a1b1-567a171aab39 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 15:16:39.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1508" for this suite.
Feb 12 15:16:45.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 15:16:45.809: INFO: namespace projected-1508 deletion completed in 6.279196522s

• [SLOW TEST:17.349 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 15:16:45.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 15:17:18.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5288" for this suite.
Feb 12 15:17:24.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 15:17:24.347: INFO: namespace namespaces-5288 deletion completed in 6.141842079s
STEP: Destroying namespace "nsdeletetest-6887" for this suite.
Feb 12 15:17:24.350: INFO: Namespace nsdeletetest-6887 was already deleted
STEP: Destroying namespace "nsdeletetest-7083" for this suite.
Feb 12 15:17:30.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 15:17:30.477: INFO: namespace nsdeletetest-7083 deletion completed in 6.126164505s

• [SLOW TEST:44.668 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 15:17:30.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 15:17:40.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-6112" for this suite.
Feb 12 15:17:46.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 15:17:46.934: INFO: namespace emptydir-wrapper-6112 deletion completed in 6.226350938s

• [SLOW TEST:16.456 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
Feb 12 15:17:46.934: INFO: Running AfterSuite actions on all nodes
Feb 12 15:17:46.934: INFO: Running AfterSuite actions on node 1
Feb 12 15:17:46.934: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 8499.643 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS